The question I keep asking myself is "how feasible will any of this be when the VC money runs out?" Right now tokens are crazy cheap. Will they continue to be?
This is something I've been waiting to hear as well. We hear about how jobs will be eliminated, and occasionally we hear about how that means there will be time for other things that we want to do, but it kind of seems like AI is already doing all of the things that we want to do. And then, of course, there's the question of how the rest of us are going to provide for ourselves if none of us have jobs. Those at the top already seem quite reluctant to share with the rest of us. I can't imagine that's going to get better if we don't provide any value to them that a computer can't do for cheaper.
> occasionally we hear about how that means there will be time for other things that we want to do, but it kind of seems like AI is already doing all of the things that we want to do.
That's been the promise of every technology. Computers were supposed to make us so productive that we could all work less and spend time with our families or whatever. Instead, productivity went through the roof, freeing most people to do even more work for our masters, who started demanding more from us even outside the office while real wages stagnated. AI isn't going to make our lives any more carefree than any other technology. It'll just make a small number of extremely wealthy people even richer.
Thankfully, what passes for AI these days is pretty shitty at doing even basic tasks and so it'll be a while before we're all replaced. In the meantime, expect disruptions as companies experiment with letting staff go and replacing them with AI, get disappointed in the results, and hire people back at lower wages. Also expect a lot of companies you depend on to screw you over because their stupid AI did something it shouldn't have and suddenly it's your problem to deal with.
I've been hearing about how $latest_technology is going to eliminate jobs for 40 years. It hasn't happened yet.
Which jobs, exactly, is AI going to eliminate? It's not useful for anything. It doesn't do anything useful. It's just mashing random patterns together to make something that approximates human-readable language.
I am so, so, so tired of hearing this argument. At a minimum, AI provides efficiency gains. Skilled engineers can now produce more code. This puts downward pressure on jobs. We’re not going to eliminate every software engineering job, but the options are to build more software or to hire fewer engineers. I am not convinced that software has a growing market (it’s already everywhere), so that implies downward pressure. The same is true for customer support, photography, video production (ads), paralegal work, pharma, and basically any job that involves filing paperwork.
Eliminating jobs has absolutely happened. How many jobs exist today for newspaper printing? Photograph development? Film development? Call switchboard operation? Technology absolutely eats jobs. There have been more jobs created over time, but the current economic situation makes large-scale job adjustments work less well.
AI cannot provide customer support. It cannot answer questions.
> photography, video production (ads)
AI cannot take photographs or make videos. Or at least, not ones that look like utter trash.
> paralegal work, pharma, and basically any job that involves filing paperwork.
Right, so you'd be happy with a random number generator with a list of words picking what medication you're supposed to get, or preparing your court case?
AI is useless, and always will be. It is not "intelligence", it's crude pattern matching - a big Eliza bot.
I am so, so, so tired of hearing this argument. At a minimum, switching from assembly language to high-level programming languages provided efficiency gains. Skilled engineers were able to produce more code. This put upward pressure on jobs. The demand for new software is effectively infinite.
Unlike higher-level programming languages, AI doesn't actually make programmers more efficient (https://arxiv.org/abs/2507.09089). Many people who are great programmers and love programming aren't interested in having their role reduced to QA, where they just review the bad code AI designed and wrote all day long.
In a hypothetical world where AI is actually decent enough to be any good at writing software, the demand for software being infinite won't save even one programmer's job, because zero programmers will be needed to create any of it. Everyone who needs software will just ask AI to do it for them. Zero programming jobs needed ever again.
Pretending 16 samples is authoritative is absolutely hilarious and wild; copium this pure could kill someone.
Also working on a codebase you already know biases results in the first place -- they missed out on what has become a cornerstone of this stuff for AISWE people like me: repo tours; tree-sitter feeds the codebase to the LLM and I get to find all the stuff in the code I care about by either a single well formatted meta prompt or by just asking questions when I need to.
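To make the "repo tour" idea concrete, here's a minimal sketch of the shape of it. I'm using Python's stdlib `ast` as a stand-in for tree-sitter, and the function name, file filter, and outline format are all my own illustrative assumptions, not a description of any particular tool:

```python
import ast
from pathlib import Path

def repo_tour(root: str) -> dict[str, list[str]]:
    """Walk a repo and list the top-level functions and classes per file,
    producing a compact outline you could paste into an LLM's context."""
    outline: dict[str, list[str]] = {}
    for path in sorted(Path(root).rglob("*.py")):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files the parser can't handle
        names = [
            f"{type(node).__name__}: {node.name}"
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
        if names:
            outline[path.name] = names
    return outline
```

The point is just that the model never sees the raw codebase; it sees a structured summary, and you ask questions against that.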
I'll concede one thing to the authors of the study, Claude Code is not that great. Everyone I know has moved on since before July. I personally am hacking on my own fork of Qwen CLI (which is itself a Gemini fork) and it does most of what I want with the models of my choice which I swap out depending on what I'm doing. Sometimes they're local on my 4090 and sometimes I use a frontier or larger openweights model hosted somewhere else. If you're expecting a code assistant to drop in your lap and just immediately experience all of its benefits you'll be disappointed. This is not something anyone can offer without just prescribing a stack or workflow. You need to make it your own.
The study is about dropping just 16 people into a tooling they're unfamiliar with, have no mechanical sympathy for, and aren't likely to shape and mold it to their own needs.
You want conclusive evidence? Go make friends with people who hack their own tooling. Basically everyone I hang out with has extended BMAD, written their own agents.md for specific tasks, made their own slash commands and "skills" (convenient name and PR hijacking of a common practice, but whatever, thanks for MCP I guess). Literally what kind of dev are you if you're not hacking your own tools???
You got four ingredients here you have to keep in mind when thinking about this stuff: the model, the context, the prompt, and the tooling. If you're not intervening to set up the best combination of each for each workflow you are doing then you are just letting someone else determine how that workflow goes.
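A toy sketch of what "intervening to set up the combination" might look like in practice. Every model name, prompt, and tool below is a hypothetical placeholder I've invented for illustration; the only point is that each workflow pins its own choice of the four ingredients:

```python
# Each workflow fixes its own (model, context, prompt, tooling) combination.
# All names below are hypothetical examples, not real models or tools.
WORKFLOWS = {
    "code-review": {
        "model": "local-coder-32b",           # small model on a local GPU
        "context": ["diff", "repo-outline"],  # what gets loaded into context
        "prompt": "Review this diff for bugs and style issues.",
        "tools": ["read_file", "grep"],
    },
    "greenfield-feature": {
        "model": "hosted-frontier",           # pay-per-token hosted API
        "context": ["spec", "repo-outline", "related-files"],
        "prompt": "Implement the feature described in the spec.",
        "tools": ["read_file", "write_file", "run_tests"],
    },
}

def build_session(workflow: str) -> dict:
    """Return the full four-ingredient configuration for one workflow."""
    if workflow not in WORKFLOWS:
        raise KeyError(f"no configuration for workflow {workflow!r}")
    return WORKFLOWS[workflow]
```

If you don't do something like this yourself, the vendor's defaults decide all four ingredients for you.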
"Universal function approximators that can speak English got invented and nobody wants to talk to them" is not the sci-fi future I was hoping for when I was longing for statistical language modeling to lead to code generation back in 2014, as a young NLP practitioner learning Python for the first time.
If you can't make it work fine, maybe it's not for you, but I would probably turn violent if you tried to take this stuff from me.
> the options are to build more software or to hire fewer engineers.
To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.
More on all of these later.
> I am not convinced that software has a growing market
Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.
The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.
However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]
They are, of course, now doing very different things.
Let's now spitball some of those other scenarios above:
- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.
- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by automation with LLMs instead of, e.g., traditional end-to-end tests.
- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" who manages multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish (HN / Reddit, for example) time.
Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.
And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could dramatically rise due to competition between increasingly LLM-enhanced firms.
I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.
1. https://www.economist.com/democracy-in-america/2011/06/15/ar...
(while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)
I mean, I certainly hope you're right. But it's really hard for a dummy like me to tell how much of this hype is real. There seems to be more money bet on this thing than anything prior. It seems like there's no good outcome here, whether the tech works or not.
The interesting thing about it is that the signs suggest 'the rich' are prepping for such an outcome (you will see the occasional article here and there about bunkers being bought). Naturally, if one were to suggest that maybe we could try working towards some sort of semblance of a 'new New Deal', they would be called some sort of crazy person who is a communist and hates democracy (as opposed to simply trying to save the system from imploding).
Then why bother? Why not go all the way, try to find a way to a new, better system, rather than gambling that these people who so totally hate you would one day become willing to compromise in order to save the current one (with benefits then most of all, not you)?
Because, in real life, power re-alignment of that magnitude tends to be... chaotic. I like my life. I would also like my kid to survive long enough to fend for themselves. Both of these become a big gamble if we do not work within the existing system.
I am saying this as a person who had a front seat to something similar as a kid. It was relatively peaceful and it still managed to upend the lives of millions (because, at the end of the day, people don't really change).
Yes, and, not only chaotic, power re-alignment almost always results in a reshuffling within the elites. When there truly is a new system, it will likely look like Cambodia did, i.e. pure shit.
> Both of these become a big gamble if we do not work within the existing system.
I guess by "changing the system", achierius meant going Marxist or some other silliness of that sort, but generally speaking, the system can be changed by working within the system too.
Observe how we are now moving towards a system that isn't based on the Constitution but on some weird mixture of libertarian dogma, excuses for oppression and a cult of personality. It could easily go into a full fledged dictatorship entirely from within, and why not, the top players don't hide their love for it.
> I am saying this as a person who had a front seat to a something similar as a kid. It was relatively peaceful and it still managed to upend lives of millions
Whether you noticed or not, the system was changed from above, by people "within the system"... So you're OK with letting that happen again, but in a worse direction?
> in real life, power re-alignment of that magnitude tends to be.. chaotic.
Interestingly enough, you may want to reexamine your experience as a child to make sure there was some sort of "power re-alignment". Looking at the facts, I don't see it but I do see how chaos can be useful to those within the system.
<< you may want to reexamine your experience as a child to make sure there was some sort of "power re-alignment"
I can understand every word in this sentence and yet I am having a difficult time parsing it. Is it that I did not provide vivid enough details? Is it that the experience is dismissed as that of a child? Is it merely a stepping stone to the next sentence and actually has no other meaning in your response?
It is an internet forum. I did not give any major details though my posting history is readily available for digging.
I suspect the early variants will fall into two camps:
1. Traditional garden-variety human-to-human, computer-to-computer, and computer-to-human crime stuff that happens today.
2. Human to computer (AI) crime, misdeeds and bullying. Stuff like:
- Sabotage and poison your AI agent colleague to make it look bad, inefficient, and ineffectual in small but high-volume ways. Delegate all risky, bad-or-worse-choice decision making to AI and let the algo take the reputational damage.
- Go beat up on automated bots, cars, drones, etc ... How should it feel to kick a robot dog?
For a humorous read on automation bots and AI in a dystopian world, take a look at Quality Land [0]. Really enjoyed it. As a teaser, imagine having some drones suffering from a fear of heights, hence being deemed faulty and sentenced for destruction. Do faulty bots or AI have value in this world even if they don't deliver on their original intended use?
It must be nice to live somewhere that has politicians that represent the will of the people enough to have a take like this. Where I live, your vote only counts if you have enough money.
You have no opinion on single-payer versus obamacare funded-mandate versus unfunded-mandate versus free-for-all deny-pre-existing conditions health insurance?
You have no opinion on the removal of the legislature as a branch of government, and concentration of all that power into an office held by one man?
You have no opinion on the country turning into a 'papers please, comrade' state for anyone who looks brown?
Your life isn't impacted when flights are cancelled because ATC stops getting paid?
You don't, or don't know anyone working for the federal government? You don't know anyone on EBT? Anyone who has ever done schedule 1 drugs? Your life isn't impacted when billion-dollar frauds escape prison and restitution, setting an example and roadmap for others to follow? Or when tax rates and benefits get adjusted up or down? Or when a complete quack gets put in charge of the country's healthcare and infectious disease control?
You aren't at all affected by any decade-long wars that the country's entangled in? You don't use any foreign imports? Or domestic products that rely on foreign imports?
You don't derive any value from living in a country that mostly follows the rule of law?
You must be incredibly privileged to not be affected by any of this.
---
Society is a series of jenga towers. No particular brick is load-bearing.
My peaceful, law abiding neighbors were taken away by ICE thugs in a totally unnecessary military style raid in my upper-middle-class suburb. Absolutely no due process. Their autistic, profoundly disabled child was left alone, scared and unable to understand what was happening. After over a month of detention, the neighbors were released. Turns out they weren’t so dangerous after all.
What a completely selfish and myopic view of politics. Do you not watch the news? Also a very bad reading of history, thinking all those bad things like 1930s Germany can't happen here when enough people let it happen.
Not Vietnam, Iraq, Afghanistan, Ukraine, Palestine, the actual German Holocaust, or anything else, that's for sure, right? My life was never impacted by any of those.
Because no matter who they vote for, they get this. The previous ruling party hasn't had a real primary since 2008 (and didn't even go through the motions in 2024.) H. Clinton makes a fairly good case that even that one was fixed (because they knew the best horse to bet on.)
No matter who you vote for you get Hillary Clinton's governance, though. She's become very complimentary about Trump's foreign policy.
Dunno if the concern is that "you can't find a copy of the blog post" so much as "why is this admin removing this stuff and/or making it harder to find". This seems bad to me and I don't think their headline is wrong.
I find LLMs really useful on a daily basis but I keep wondering, what's going to happen when the VC money dries up and the real cost of inference kicks in? Its relatively cheap now but its also being heavily subsidized. The usual answer is to just jam ads into your product and slowly increase the price over time (see: Netflix) but I don't know how that'll work for LLMs.
You could self-host Ollama, vLLM, or something like that; open models are good enough for simple tasks. With a bit of extra effort and learning, this usually just works for most cases.
But in that situation there may be no further updates; the future remains uncertain.
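As a sketch of what self-hosting looks like in practice: Ollama exposes a local HTTP API (by default on port 11434, with a JSON `/api/generate` endpoint). The snippet below only builds the request rather than sending it; the model name and prompt are placeholders, and you'd need a running Ollama instance to actually use it:

```python
import json
from urllib import request

def local_generate_request(model: str, prompt: str) -> request.Request:
    """Build (but don't send) a request to a locally hosted Ollama server."""
    payload = {
        "model": model,   # e.g. a small open model you've already pulled
        "prompt": prompt,
        "stream": False,  # ask for one JSON response instead of a stream
    }
    return request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = local_generate_request("llama3", "Summarize this changelog in one line.")
# urllib.request.urlopen(req) would send it if a server is running locally.
```

Once your workflow talks to localhost like this, the "VC money dries up" scenario only costs you model updates, not the capability itself.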
A system that rewards our worst instincts is exhibiting a failure. There are worse systems, sure. But capitalism has its own downsides and this is one of them.
1. According to who? Open AI?
2. Its current state is "basically free and containing no ads". I don't think this will remain true given that, as far as I know, the product is very much not making money.
Yes, that number is according to OpenAI. They released that 800m number at DevDay last week.
The most recent leaked annualized revenue rate was $12bn/year. They're spending a lot more than that but convincing customers to hand over $12bn is still a very strong indicator of demand. https://www.theinformation.com/articles/openai-hits-12-billi...
Part of that comes from Microsoft API deals. Part of that will most certainly come because the vast network of companies buy subscriptions to help "Open" "AI" [1].
Given the rest of circular deals, I'd also scrutinize if it applies to the revenue. The entanglement with the Microsoft investments and the fact that "Open" "AI" is a private company makes that difficult to research.
[1] In a U.S. startup, I went through three CEOs and three HR apps, which mysteriously had to change for no reason but to accommodate the new CEO's friends and their startups.
When people take the time to virtue-signal with "Open" "AI" or the annoyingly common M$ on here, I often wonder why they are wasting their precious time on earth doing that.