I’ve seen those quadrants too, because I’ve been brought into several companies to help clean up a mess they’d gotten into with bad code they could no longer ignore. It is a complete certainty that we’re going to start seeing a lot more of that.
One ironic thing about LLM-generated bad code is that churning out millions of lines just makes it less likely the LLM is going to be able to manage the results, because token capacity is neither unlimited nor free.
(Note I’m not saying all LLM code is bad; but so far the fully vibecoded stuff seems bad at any nontrivial scale.)
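To make the token-capacity point concrete, here's a rough back-of-envelope in Python. Every number is an assumption for illustration (tokens per line varies a lot by language and style, and window sizes change constantly):

```python
# Back-of-envelope: can a multi-million-LOC codebase fit in a context window?
# All numbers are assumptions for illustration only.

LOC = 3_000_000           # hypothetical mid-sized "vibecoded" system
TOKENS_PER_LINE = 10      # rough heuristic; real code varies widely
CONTEXT_WINDOW = 1_000_000  # roughly the largest widely available today

codebase_tokens = LOC * TOKENS_PER_LINE
print(f"Codebase: ~{codebase_tokens:,} tokens")          # ~30,000,000
print(f"Fits in window: {codebase_tokens <= CONTEXT_WINDOW}")  # False
print(f"Window would need ~{codebase_tokens / CONTEXT_WINDOW:.0f}x more capacity")
```

Even with generous assumptions, the whole system is an order of magnitude or more beyond the window, before you account for dependencies, history, or conversation overhead.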
> In the last year, token context window increased by about 100x and halved in cost at the same time.
So? It's nowhere close to solving the issue.
I'm not anti-LLM. I'm very senior at a company whose primary product has been AI-centric since before the GPT explosion. But in order to navigate what's going on now, we need to understand the technology's current strengths and weaknesses, as well as what they're likely to be in the near, medium, and long term.
The cost of LLMs dealing with their own generated multi-million-LOC systems is very unlikely to become tractable in the near future, and possibly not even in the medium term. Besides, no one has yet demonstrated an LLM-based system that can even do that, i.e. resolve the technical debt it created.
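And on cost: even a sketch with assumed pricing shows why repeatedly re-reading a codebase that size doesn't pencil out. The per-token price here is purely illustrative, not any particular provider's rate:

```python
# Rough cost sketch for an agent repeatedly feeding a large codebase
# back through a model. All figures are assumptions for illustration.

codebase_tokens = 30_000_000   # from the estimate above
price_per_m_input = 3.00       # assumed USD per 1M input tokens (illustrative)
passes_per_day = 50            # hypothetical agent iterating on the code

cost_per_pass = codebase_tokens / 1_000_000 * price_per_m_input
daily_cost = cost_per_pass * passes_per_day
print(f"~${cost_per_pass:,.0f} per full-context pass")  # ~$90
print(f"~${daily_cost:,.0f}/day just to re-read the code")  # ~$4,500/day
```

Halving the price doesn't change the shape of that curve; the cost still scales linearly with the amount of code the model churned out in the first place.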