Hacker News | atleastoptimal's comments

Writing-style-wise, 3.1 seems very verbose but somehow less creative than 3.

Even before AI, people used preexisting projects for hackathons.

Very effective way of conveying information.

I think a major factor is the increase in microplastics in our diets.

https://www.sciencedirect.com/science/article/abs/pii/S18777...


I disagree. The content is great, but hitting the back button messes things up for me. I'd prefer a long-form article, but maybe this is better for people used to swiping.

Probably because it isn't officially released yet.

The idea of the "AI bubble" popping is a collective delusion driven by a desire to see AI fail.

API access to AI models is not "subsidized"; AI companies make a profit on inference. They are only losing money because they spend heavily on training the next generation of models.

It would make sense to claim the bubble will pop if AI companies' valuations were pure speculation, but revenues for OpenAI and Anthropic are increasing exponentially. They are selling a product that people are buying at a price that is profitable on the margin.


This is where I've landed as well. One caveat: it's hard to say whether anyone's revenue numbers outside of those two names are reliable. Perhaps anyone who isn't in the big four is at risk of bubble trouble.

PCs are 1000x faster than they were 20 years ago, yet cloud services make up a relatively larger share of Microsoft's revenue each year.

Your premise makes sense if the benefits of an AI model topped out at something a person's personal computer could run. However, scaling laws seem to have no limit yet (perhaps because the general nature of intelligence itself has no "limit"), so the labs will still have a significant advantage due to scale, hosting models with a distinct comparative advantage over even the best local models.


True but that will mean a greater "winner takes all" scenario where a small cadre of 8-figure compensated hyper-managers and tastemakers will supervise armies of agents. To 99% of people who lose their job doing some easily commodifiable task, this scenario is indistinguishable from AI taking every job.

I think most of the issues with "vibe coding" come from trusting the current level of LLMs with too much, since writing a hacky demo of a specific feature is a tenth as difficult as making a fully fledged, dependable, scalable version of it.

Back in 2020, GPT-3 could code functional HTML from a text description, but it's only around now that AI can one-shot functional websites. Likewise, AI can one-shot a functional demo of a SaaS product, but it is far from being able to one-shot the entire engineering effort of a company like Slack.

However, I don't see why the rate of improvement would not continue as it has. The current generation of LLMs hasn't even been trained on Nvidia's latest Blackwell chips yet.

I do agree that vibe coding is like gambling, but that is beside the point: AI coding models are getting smarter at a rate that is not slowing down. Many people believe they will hit a sigmoid somewhere before reaching human intelligence, but there is no reason to believe that beyond wishful thinking.


Of course - and autonomous driving is 1 year away.

I have ridden in a Waymo dozens of times with no issues. I've also used Tesla's self-driving with similar results.

That's the nature of all tech, it keeps not being good enough, until it is, and then everything changes.


As an aside, I wonder if automated driving would be one year away if we did not need to worry about it killing people.

Like if the only possible issues were property damage, I kind of think it would be here already. You'd just insure the edge cases.


That AI would be writing 90% of the code at Anthropic was not a "failed prediction". If we take Anthropic's word for it, now their agents are writing 100% of the code:

https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...

Of course you can choose to believe this is a lie and that Anthropic is hyping its own models, but it's impossible to deny the enormous revenue the company is generating via products it now attributes almost entirely to coding agents.


One thing I like to think about: if these models were so powerful, why would they ever sell access? They could just build endless products to sell, likely outcompeting anyone else who needs to employ humans. And if not building their own products, they could be the highest-value contractor ever.

If you had the Midas touch, would you rent it out?


Well, there are models that Anthropic, OpenAI, and co. have access to that they haven't provided public APIs for, due both to safety and to the competitive-advantage factor you cited (like OpenAI's IMO model, though it's debatable whether it was an early version of GPT-5.1/2/3 or something else).

https://sequoiacap.com/podcast/training-data-openai-imo/

The thing, however, is that the labs are all in competition with each other. Even if OpenAI had some special model that let it build its own SaaS and products, it is more worthwhile to sell API access and use the profit to scale, because otherwise competitors will pocket that money and scale faster.

This holds as long as the money from API access to the models is worth more than the comparative advantage a lab retains by not sharing them. Because there are multiple competing labs, the comparative advantage is small (if OpenAI kept GPT-5.X to itself, people would just use Claude and Anthropic would get bigger; same with Google).

This may not hold forever, though; it is just a phenomenon of labs focusing heavily on their models while making only marginal product efforts.


They need to generate revenue to continue raising money to continue investing in compute. Even if they have the Midas touch, it needs to be continuously improved, because there are three other competing Midas-touch companies working on new and improved versions that will make theirs obsolete and worthless if they stand still even for a second.

But most of their funding comes from speculative investment, not from selling their services. Also, wouldn't selling their own products/services generate revenue?

Making a profitable product is much more than just building it. I've probably made 100+ side projects in my life, and only a handful have ever generated any revenue.

Arguably because the parts the AI can't do (yet?) still need a lot of human attention. Stuff like developing business models, finding market fit, selling, interacting with prospects and customers, etc.

It's not entirely surprising. You can prompt the AI to write code to pretty much any level of detail. You can tell it exactly what to output and it will copy character for character.

Of course, at a certain point you have to wonder if it would be faster to just type the code than to type the prompt.

Anyway, if this were true in the sense they're trying to imply, why does Boris still have a job? If the agents are already doing 100% of the work, just have the product manager run the agents. Why are they actively hiring software developers?

https://job-boards.greenhouse.io/anthropic/jobs/4816198008


They probably still need to be able to read and distinguish good from bad code, evaluate agent decisions, data structures, feasibility, architectural plans, etc., all of which require specific software-engineering expertise, even if they don't end up touching the code directly.

But that doesn't make sense. They claim AI is writing 100% of the code, yet if they need to read and distinguish good from bad code, evaluate agent decisions, data structures, feasibility, architectural plans, etc., doesn't that imply they are writing at least some of the code? Otherwise, why would they ever need to do those things?

This is not the fantastic argument you think it is. 100% is only achievable with software engineers at the helm, so there's no contradiction here.

If the AI is doing 100% of the work why would you need software engineers at the helm?

100% of the code, not 100% of the work.

What doesn't make sense? "Writing" the code is implementation.

You still need good SWEs to distinguish whether the generated code is good or bad, adjust the agent, and plan the system.

IME Opus is smart enough to one-shot small-to-medium features by learning the existing codebase, provided you give it the business context.


I wish one of those agents were smart enough to notice that their GitHub Action which auto-closes issues is broken: https://github.com/anthropics/claude-code/issues/16497. Then maybe we could get some of these bugs fixed.

Exactly. The fact that people laugh at the prediction like it's a joke, when I and many others have been at 90%+ for a long time, makes me question a lot of the takes here. Anyone serious about using LLMs knows it's nothing controversial to have them write most of the code.

And people claiming it's a lie are in for a rough awakening. I'm sure we will see a lot of posters on HN simply being too embarrassed to ever post again when they realize how off they were.


Why do they have so many GitHub issues then?

There are companies that scale 100x in 3 years.

If you aren't scaling yourself as much, then you're moving too slow.


I love this and will make it my motto. Scale yourself 100x every 3 years, or you're too slow. If I manage to keep it up roughly 11 years I will finally achieve planet scale.
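(Back-of-the-envelope check of the joke: "100x every 3 years" compounds as a simple power law. A minimal sketch; the function name and the sample horizons are mine, not from the thread.)

```python
def scale_after(years: float, factor: float = 100.0, period: float = 3.0) -> float:
    """Total growth multiple after `years`, given `factor`-fold growth per `period` years."""
    return factor ** (years / period)

# Three full periods (9 years) compound to a million-fold:
print(scale_after(9))   # → 1000000.0
# Four full periods (12 years) reach a hundred million-fold:
print(scale_after(12))  # → 100000000.0
```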

Only planet scale? If you're not at least galaxy-scale, are you even trying?

Does it count, if your belly size scales as much?

I am now eager to see your track record and how you personally scaled 100x in the last 3 years (or ~1,000,000x in the last decade).

in 2024 I made 1 new contact

In 2025 I made 100 new contacts

Extrapolate: in 2026 I will make 10,000 new contacts (though I won't make them all directly; my associates will be my proxies).


And in the 2030s you'll be making millions of contacts per day. What does this even mean? Are you in an MLM scheme?

I could probably scale 100x with a $10-100M personal funding round.

You are insane. Only 0.01% of companies are like that globally. And no: they are not even winning in the long run.

Oddly enough, this is just the American Dream under exponential growth. "Someday you'll be rich as well" is just weaponized hope, and folks who follow GP's advice gobble it up because it's aspirational.
