> they'll quickly be wondering if ChatGPT is worth this cost
They should be, and the answer is obviously no, at least to them. No political or business leader has outlined a concrete, plausible path to the vague UBI utopia that's been promised to "regular folks" in the bullish scenario (AGI, ASI, etc.), nor has any of them convincingly argued that this isn't an insane bubble that will cripple our economy when AGI doesn't happen, a scenario that looks more likely every day.
There is no upside for us, only downside. Whether we're heading for sci-fi apocalypse or economic catastrophe, the malignant lunatics pushing this technology expect to be insulated from the fallout: either they end up owning the future light-cone of humanity, or they enjoy the cushion of their vast wealth while the majority suffers through an economic crash that a few rich men caused by betting it all, even what wasn't theirs to bet.
Everybody should be fighting this tooth and nail. Even if these technologies are useful (I believe they are), and even if they can be made into profitable products and sustainable businesses, what's happening now isn't related to any of that.
There's also potentially a substantial opportunity cost re: parity with China in the near-to-mid term, even if we don't actually end up cancelling the next-gen destroyer in favor of this thing: https://youtu.be/qvUbx9TvOwk
Private entities surveil you to make money off you or protect their property. Law enforcement surveils you to arrest you and charge you with crimes. These are not the same, and that's why some people care more about surveillance by law enforcement.
As an example, see the recent case of the woman who was arrested simply for driving through a town around the time a robbery occurred. That sort of thing is why people care.
If the data collection is performed by a private entity and then sold to the government, that is government surveillance. I agree that this is more widespread than Flock and other big names. However, Flock and its ilk currently stand to do far more damage in practice. They offer integrated turnkey solutions available to practically any law enforcement agency, from shithead chud officers in tiny shithole towns to the NYPD with all its grand history of institutionalized misconduct, and we are already seeing the effects of that.
See also the recent case of a teenager who was arrested because a Flock camera or similar thought a Doritos bag in his pocket was a gun. I'll let you guess what color his skin was.
The thing is, everything I listed is also used by law enforcement. There is nothing stopping them from turning everything into a dragnet. We already know they use Ring cameras, cell phones, tower data, etc. to build a dragnet. Flock is just another player.
To be honest, Flock seems like the perfect distraction from the larger surveillance state we live in. I feel like most of the writing I have seen on this acts like it's some new, disgusting, pervasive thing. The truth is law enforcement has been using everything available, because there's nothing stopping them from subpoenaing or straight-up buying the data.
The larger problem is that law enforcement needs to be curtailed (good luck, unless we bust their union, which the pro-union left won't do), and then cameras need to be removed from phones and homes.
> you shouldn't have given away your work for free.
Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring, except where they strike huge deals with megacorporations (e.g. OpenAI and Disney) because they know full well that what they're doing is not fair use. That's true whether or not I posted it in a context where I expected compensation.
> Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring.
I just don't think the AI is doing anything different from what a human does. It "learns" and then "generates". As long as the "generates" part is actually connecting dots on its own, and not just copy-and-pasting protected material, I don't see why we should consider it any different from when a human does it.
And really, almost nothing is original anyway. You think you wrote an original song? You didn't. You just added a thin layer on top of years of other people's layers. Music has converged over time to all sound very similar (same instruments, same rhythms, same notes, same scales, same chords, same progressions, same vocal techniques, and so on). If you had never heard music before and tried to write a truly original song, you can bet it would sound nothing like any of the music we listen to today.
Coding, art, writing... really any creative endeavor works the same way, for the most part.
Conjecture on the functional similarities between LLMs and humans isn't relevant here, nor are sophomoric musings on the nature of originality in creative endeavors. LLMs are software products whose creation involves the unauthorized reproduction, storage, and transformation of countless copyright-protected works—all problematic, even if we ignore the potential for infringing outputs—and it is simple to argue that, as a commercial application whose creators openly tout their potential to displace human creators, LLMs fail all four fair use "tests".
I don't know what he did, but I gave gemini-cli the URL and asked for a script. The LLMs are pretty good at this sort of simple but tedious implementation.
True only if you think that neither the images nor the time I saved by "outsourcing" the work have any value, but that writing the kind of trivial web scraper I've written N times before somehow does.
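For the curious, the kind of trivial scraper in question is something like this minimal Python sketch. The page URL is a placeholder and I'm assuming the images sit in plain <img> tags, since the actual page isn't shown in this thread:

    # Minimal sketch: download every image linked from a page.
    # PAGE_URL is hypothetical; the real page isn't shown in this thread.
    import os
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    PAGE_URL = "https://example.com/gallery"  # placeholder target
    OUT_DIR = "images"

    os.makedirs(OUT_DIR, exist_ok=True)
    page = requests.get(PAGE_URL, timeout=30)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")

    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        img_url = urljoin(PAGE_URL, src)  # resolve relative links
        name = os.path.basename(urlparse(img_url).path) or "unnamed"
        data = requests.get(img_url, timeout=30)
        data.raise_for_status()
        with open(os.path.join(OUT_DIR, name), "wb") as f:
            f.write(data.content)
        print("saved", name)

Twenty-odd lines with no interesting decisions in them, which is exactly the sort of simple-but-tedious work worth outsourcing.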
Releasing anything as "GPT-6" that doesn't provide a generational leap in performance would be a PR nightmare for them, especially after the underwhelming release of GPT-5.
I don't think it really matters what's under the hood. People expect model "versions" to be indexed on performance.
People who believe in baseless conspiracy theories have to convince themselves that people who don't are operating in the same epistemic mode, picking and choosing what to believe in order to reinforce their prior beliefs. The alternative is admitting that those people are operating in a superior epistemic mode, one where they base their beliefs on most or all of the available evidence (including, in this case, the fact that the """vaxxed""" people they know are all still upright and apparently unharmed after years of predictions to the contrary).
Your comment is a manifestation of this defense mechanism. As real evidence piles up that you've been wrong, you retreat into these bizarre imaginary scenarios in which you've been right the whole time, and by projecting that scenario onto others you imagine yourself vindicated. But the rest of us just think you're nuts.
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf-level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously, how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe, since Microsoft and Amazon account for the lion's share (do they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
I think probably the safest place to be right now emotionally is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. I'd be curious to hear what specifically your company is doing to give people agency.
> I'd be curious to hear what specifically your company is doing to give people agency.
Wrt. AI specifically, I guess we are simply a) not using AI as an excuse to lay off scores of employees (at least, not yet) and b) not squeezing the employees who remain with arbitrary requirements that they use shitty AI tools in their work. More generally, participation in design work and independent execution are encouraged at all levels. At least in my part of the company, there simply isn't the same kind of miserable, paranoid atmosphere I hear about at MS and Amazon these days. I am not aware of any rigidly enforced quota for PIPing people. Etc.
Generally, it feels like our leadership isn't afflicted with the same kind of desperate FOMO fever other SMEGMAs are suffering from. Of course, I don't mean to imply there haven't been layoffs in the post-free-money era, or that some people don't end up on shitty teams with bad managers who make them miserable, or that there isn't the usual corporate bullshit, etc.
Why do you think potentially self-incriminating self-surveillance is "crazy" when you also think lying to the cops and other involved parties about what happened is bad? If you believe it's important to tell the truth in these situations, you should have no problem providing your own recordings of a collision, regardless of who is at fault.
Or is your point just about the cost of the dashcam being "crazy"? In that case, hypothetically, what if your insurance company cut you a check to buy a dashcam of your own choice and install it on your car?
I think they're saying "I don't want to self-incriminate so I don't want to put myself in a situation where I have to lie". I'm not sure it's entirely consistent, but I also don't think it's entirely inconsistent.
If you believe you are at fault in a collision where police, insurance, etc. are involved, they are going to ask for your statement, and at that point you will be forced to choose between lying and admitting fault. If you're glad that no dashcam footage exists, presumably you are going to lie about what happened! I don't see why this is any different than popping the SD card out of your dashcam and lying about that too; you're still lying, and for the same reason: to evade responsibility for a collision you caused.
I think this is a pretty black-and-white, simplistic view of things; fault is not always 100% clear, and CLAIMING fault is different from explaining what happened _from your perspective_ and letting the other driver do the same. But I'm not actually speaking about simple fault in a basic traffic collision.
Obviously 99.999% of traffic collisions never get this far, but I'm speaking more of the world of courtroom legal drama, where you'd rather not have a recording of your in-car conversations, or of the fact that you drove around the block of the house where the murder occurred at 3am.
I think there's a huge asymmetry between the upside of the dash cam and the downside of self-surveillance. I'm much more likely to be in a fender bender than to be accused of murder, but I also _simply don't care_ if the police say I'm at fault when I don't think I was, even if it drives my insurance rates up for a few years. But I'm deeply uncomfortable with the idea of recording myself 24x7 whenever I'm in my car.
> I think this is a pretty black-and-white, simplistic view of things; fault is not always 100% clear, and CLAIMING fault is different from explaining what happened _from your perspective_ and letting the other driver do the same. But I'm not actually speaking about simple fault in a basic traffic collision.
Seems like having video (and GPS speed, etc.) can only make it clearer who is at fault (possibly both parties)? I still don't see how that can be a bad thing if you also aren't interested in lying about what happened.
> I think there's a huge asymmetry between the upside of the dash cam and the downside of self-surveillance.
I almost addressed the generalized surveillance angle in my original comment, but didn't, since it seemed that your comment was focused exclusively on the context of having been in a traffic collision.
Addressing it now: I guess I am just not too worried about this angle when my dashcam simply records video onto an SD card that I have complete control over. If I were a person likely to be targeted by my authoritarian government, I would probably think twice about having such an unencrypted SD card sitting around where it might be swept up in a bogus search and used to gin up additional bogus charges against me, but that is currently not my situation. Really, I can only imagine the video evidence collected by my dashcam being used to exonerate me in a scenario like the one you describe, e.g. if an LPR tagged me on the block where the murder happened but my dashcam clearly showed that I was just passing through.
In fact, this exact thing recently happened (https://www.cbsnews.com/colorado/news/flock-cameras-lead-col...) to a woman who was falsely accused of theft based on LPR data and used her Rivian's dashcam recordings (among other data) to get the police to drop the charges. It's insane that this happened in the first place, but that's beside the point here.
Of course, people using cloud-based dashcams are certainly exposing themselves to dragnet surveillance (which I do have a problem with, simply on principle), but the data on my dashcam's SD card is fundamentally inaccessible to law enforcement until they obtain the card in a physical search of my car.