Hacker News | aurareturn's comments

Finally, sensible. I never understand why websites or apps had to do it. It's way easier, more scalable and cheaper for the OS to do it.

And more draconian.

"Our systems aren't foolproof because anyone can just boot Linux from USB. Hence we should enforce secure boot with proprietary keys and disable functionality for non attested PCs"

This is not far fetched. All Android vendors went down this path and now you can't even enable developer mode if you want your bank app to work to approve your bank loan.


Which just seems like a slippery slope. Since there is no friction and users are not annoyed anymore, governments will just continue requiring more and more spyware to be added to all software/devices.

IMHO, requiring everyone to submit notarized paper forms to access Facebook/whatever would be the best solution.


How is Linux going to do this?

Treating Linux as a monolith here is kind of missing the point. Desktop Linux and Android have entirely different application models, so a solution for Android would have to be applied in a significantly different manner than on desktop Linux. It'd likely be folded into Play Services, as was the case with the exposure notification framework during COVID, for example.

I don't know, but as Linux powers the entire world, including two-thirds of the world's smartphones, I'm sure they'll find a way.

Well, it’s obviously technically feasible (which seems like the least relevant part) if you want to have zero privacy, because every single general purpose computer has unremovable spyware built in.

Surely you must see that this is a bureaucratic impossibility. It's not a technical issue.

You know what's really cheap and scalable? Not doing such moronic shit at all.

Don’t forget that the 8B model requires 10 of said chips to run.

And it’s a 3-bit quant, so a 3 GB RAM requirement.

If they ran the 8B at native 16-bit precision, it would take roughly 60 H100-sized chips.
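A quick back-of-the-envelope on those figures (my own sketch: weight memory ≈ parameters × bits per weight ÷ 8, ignoring KV cache and overhead, and treating "~3 GB of weights per chip" as an illustrative assumption rather than a Taalas spec):

  # Back-of-the-envelope weight memory in Python.
  params = 8e9
  print(params * 3 / 8 / 1e9)      # 3.0  -> ~3 GB at 3-bit
  print(params * 16 / 8 / 1e9)     # 16.0 -> ~16 GB at 16-bit
  print((params * 16 / 8) / 3e9)   # ~5.3x the silicon if each chip holds ~3 GB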


> Don’t forget that the 8B model requires 10 of said chips to run.

Are you sure about that? If true it would definitely make it look a lot less interesting.


Their 2.4 kW figure is for 10 chips, it seems, based on the Next Platform article.

I assume they need all 10 chips for their 8B q3 model. Otherwise, they would have said so or they would have put a more impressive model as the demo.

https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...


It doesn’t make any sense to think you need the whole server to run one model. It’s much more likely that each server runs 10 instances of the model

1. It doesn’t make sense in terms of architecture. It’s one chip. You can’t split one model over 10 identical hardwired chips.

2. It doesn’t add up with their claims of better power efficiency. 2.4kW for one model would be really bad.


We are both wrong.

First, it is likely one chip for Llama 8B q3 with a 1k context size. This could fit into around 3 GB of SRAM, which is about the theoretical maximum at the TSMC N6 reticle limit.

Second, their plan is to etch larger models across multiple connected chips. It’s physically impossible to run bigger models otherwise, since 3 GB of SRAM is about the max you can have on an 850 mm² chip.

  followed by a frontier-class large language model running inference across a collection of HC cards by year-end under its HC2 architecture
https://mlq.ai/news/taalas-secures-169m-funding-to-develop-a...
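As a rough sanity check on that 3 GB / reticle-limit figure (my own numbers, assuming an N7/N6-class high-density SRAM bitcell of roughly 0.027 µm²; real macros add periphery overhead on top of the raw bitcell area):

  # Raw bitcell area for 3 GB of SRAM vs the ~850 mm^2 reticle limit.
  bits = 3e9 * 8                    # 3 GB expressed in bits
  bitcell_um2 = 0.027               # assumed HD SRAM bitcell area on N7/N6
  print(bits * bitcell_um2 / 1e6)   # ~648 mm^2 of bitcells alone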

Aren't they only using the SRAM for the KV cache? They mention that the hardwired weights have a very high density. They say about the ROM part:

> We have got this scheme for the mask ROM recall fabric – the hard-wired part – where we can store four bits away and do the multiply related to it – everything – with a single transistor. So the density is basically insane.

I'm not a hardware guy but they seem to be making a strong distinction between the techniques they're using for the weights vs KV cache

> In the current generation, our density is 8 billion parameters on the hard wired part of the chip, plus the SRAM to allow us to do KV caches, adaptations like fine tuning, and etc.


Thanks for having a brain.

Not sure who started that "split into 10 chips" claim, it's just dumb.

This is Llama 3 8B hardcoded (literally) on one chip. That's what the startup is about, they emphasize this multiple times.


It’s just dumb to think that one chip per model is their plan. They stated that their plan is to chain multiple chips together.

I was indeed wrong about 10 chips. I thought they would use Llama 8B at 16-bit and a few thousand tokens of context. It turns out they used Llama 8B at 3-bit with around 1k context. That made me assume they must have chained multiple chips together, since the max SRAM on a TSMC N6 reticle-sized chip is only around 3 GB.


It uses 10 chips for an 8B model. It’d need 80 chips for an 80B model.

Each chip is the size of an H100.

So 80 H100s to run at this speed. Can’t change the model after you manufacture the chips, since it’s etched into silicon.


As many others in this conversation have asked, can we have some sources on the idea that the model is spread across chips? You keep making the claim, but no one (myself included) else has any idea where that information comes from or if it is correct.

I was indeed wrong about 10 chips. I thought they would use Llama 8B at 16-bit and a few thousand tokens of context. It turns out they used Llama 8B at 3-bit with only 1k context. That made me assume they must have chained multiple chips together, since the max SRAM on a TSMC N6 reticle-sized chip is only around 3 GB.

I'm sure there is plenty of optimization paths left for them if they're a startup. And imho smaller models will keep getting better. And a great business model for people having to buy your chips for each new LLM release :)

One more thing. It seems like this is a q3 quant, so only a 3 GB RAM requirement.

10 H100-sized chips for a 3 GB model.

I think it’s a niche of a niche at this point.

I’m not sure what optimization they can do since a transistor is a transistor.


Do we know that it needs 10 chips to run the model? Or are the servers for the API and chatbot just specced with 10 boards to distribute user load?

If you etch the bits into silicon, you then have to accommodate the bits by physical area, which is set by the transistor density of whatever modern process they use. This gives you a lower bound on the silicon area.

Edit: it seems like this is likely one chip and not 10. I assumed an 8B 16-bit quant with 4K or more context. This made me think that they must have chained multiple chips together, since an N6 850 mm² chip would only yield 3 GB of SRAM max. Instead, they seem to have etched Llama 8B q3 with a 1k context, which would indeed fit on one chip.

This requires 10 chips for an 8-billion-parameter q3 model. 2.4 kW.

10 reticle-sized chips on TSMC N6. Basically 10x Nvidia H100 GPUs.

Model is etched onto the silicon chip. So can’t change anything about the model after the chip has been designed and manufactured.

Interesting design for niche applications.

What is a task that is extremely high value, requires only small-model intelligence, requires tremendous speed, is OK to run in the cloud due to power requirements, AND will be used for years without change since the model is etched into silicon?


I'm thinking the best end result would come from custom-built models. An 8 billion parameter generalized model will run really quickly while not being particularly good at anything. But the same parameter count dedicated to parsing emails, RAG summarization, or some other specialized task could be more than good enough while also running at crazy speeds.

> What is a task that is extremely high value, requires only small-model intelligence, requires tremendous speed, is OK to run in the cloud due to power requirements, AND will be used for years without change since the model is etched into silicon?

Video game NPCs?


Doesn’t pass the high-value and tremendous-speed tests.

Video games are a huge market, and speed and cost of current models are definitely huge barriers to integrating LLMs in video games.

Speed = capacity = cost.

Alternatively, you could run far more RAG and thinking to integrate recent knowledge. I would imagine models designed for this would put less emphasis on world knowledge and more on agentic search.

Maybe; models with more embedded associations are also better at search. (Intuitively, this tracks; a model with no world knowledge has no awareness of synonyms or relations (a pure markov model), so the more knowledge a model has, the better it can search.) It’s not clear if it’s possible to build such a model, since there doesn’t seem to be a scaling cliff.

Where are those numbers from? It's not immediately clear to me that you can distribute one model across chips with this design.

> Model is etched onto the silicon chip. So can’t change anything about the model after the chip has been designed and manufactured.

Subtle detail here: the fastest turnaround that one could reasonably expect on that process is about six months. This might eventually be useful, but at the moment it seems like the model churn is huge and people insist you use this week's model for best results.


  > The first generation HC1 chip is implemented in the 6 nanometer N6 process from TSMC. Each HC1 chip has 53 billion transistors on the package, most of it very likely for ROM and SRAM memory. The HC1 card burns about 200 watts, says Bajic, and a two-socket X86 server with ten HC1 cards in it runs 2,500 watts.
https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...

And what about that makes you assume that all 10 HC1 cards in a server are needed to run a single model?

So it lights money on fire extra fast, AI focused VCs are going to really love it then!!

Well, they claim a two-month turnaround. Big if true. How does the six months break down in your estimation? Maybe they have found a way to reduce the turnaround time.

100x the throughput of a less capable model might be better than 1x of a better model for many, many applications.

This isn't ready for phones yet, but think of something like phones, where people buy new ones every 3 years; even having a mediocre on-device model at that speed would be incredible for something like Siri.


This depends on how much better the models get from now on. If Claude Opus 4.6 were transformed into one of these chips and ran at a hypothetical 17k tokens/second, I'm sure that would be astounding. It also depends on how much better Claude Opus 5 would be compared to the current generation.

Even an O3-quality model at that speed would be incredible for a great many tasks. Not everything needs to be Claude Code. Imagine Apple fine-tuning a mid-tier reasoning model on personal assistant/macOS/iOS sorts of tasks and burning a chip onto the Mac Studio motherboard. Could you run Claude Code on it? Probably not. Would it be 1000x better than Siri? Absolutely.

Yeah, waiting for Apple to cut a die that can do excellent local AI.

I’m pretty sure they’d need a small data center to run a model the size of Opus.

Data tagging? 20k tok/s is at the point where I'd consider running an LLM on data from a column of a database, and these <=100 token problems provide the least chance of hallucination or stupidity.

A lot of NLP tasks could benefit from this
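Back-of-the-envelope on that kind of batch-tagging workload (illustrative numbers only; batching and real prompt lengths will shift them):

  # Single-stream throughput for short tagging/classification prompts.
  tok_per_s = 20000
  tok_per_row = 100                     # assumed prompt + completion per row
  rows_per_s = tok_per_s / tok_per_row  # 200 rows/s
  print(rows_per_s * 86400 / 1e6)       # ~17 million rows per day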


No one would ever give such a weak model that much power over a company.

Fighting a war for rich people and oligarchs.

Nothing to do with each other. This is a general optimization. Taalas' chip is an ASIC that runs a tiny 8B model from on-chip SRAM.

But I wonder how Taalas' product can scale. Making a custom chip for one single tiny model is different from running models trillions of parameters in size for a billion users.

Roughly, 53B transistors for every 8B params. For a 2T-param model, you'd need 13 trillion transistors, assuming the scaling is linear. One chip uses 2.5 kW of power? That's 4x H100 GPUs. How does it draw so much power?
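Spelling out that transistor extrapolation (it assumes the transistors-per-parameter ratio holds at scale and ignores extra KV-cache SRAM and I/O):

  # Linear extrapolation of ~53B transistors for ~8B params.
  ratio = 53e9 / 8e9            # ~6.6 transistors per parameter
  print(ratio * 2e12 / 1e12)    # ~13 trillion transistors for a 2T-param model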

If you assume that the frontier model is 1.5 trillion parameters, you'd need an entire wafer-scale N5 chip to run it. And then if you need to change something in the model, you can't, since it's physically printed on the chip. So this is something you do only if you know you're going to use this exact model without changing anything for years.

Very interesting tech for edge inference though. Robots and self-driving cars could make use of these in the distant future if power draw comes down drastically. A 2.4 kW chip running inside a robot is not realistic. Maybe a 150 W chip.


The 2.5kW figure is for a server running 10 HC1 chips:

> The first generation HC1 chip is implemented in the 6 nanometer N6 process from TSMC. ... Each HC1 chip has 53 billion transistors on the package, most of it very likely for ROM and SRAM memory. The HC1 card burns about 200 watts, says Bajic, and a two-socket X86 server with ten HC1 cards in it runs 2,500 watts.

https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...
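The power split those figures imply (the host-overhead line is my inference, not a stated spec):

  # Server power vs per-card power from the article.
  server_w, cards, card_w = 2500, 10, 200
  print(server_w / cards)            # 250 W per card slot, host power amortized
  print(server_w - cards * card_w)   # ~500 W left for the two x86 sockets, RAM, fans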


I’m confused then. They need 10 of these to run an 8B model?

No. 250 watts to run an 8B model.

> Our second model, still based on Taalas’ first-generation silicon platform (HC1), will be a mid-sized reasoning LLM. It is expected in our labs this spring and will be integrated into our inference service shortly thereafter.

> Following this, a frontier LLM will be fabricated using our second-generation silicon platform (HC2). HC2 offers considerably higher density and even faster execution. Deployment is planned for winter.

From https://taalas.com/the-path-to-ubiquitous-ai/

Personally I think anything around the level of Sonnet 4.5 is worth burning to silicon because agentic workflows work. There are plenty of places where spending $50,000 for that makes sense (I have no idea of the pricing though)


One man's slop is another man's treasure.

I remember people saying Instagram would be filled with AI slop reels.

I'm seeing massive likes and engagement for AI reels on IG. Many of them have millions of views and hundreds of thousands of likes. Dancing bears? Dancing Trump? Cat kung fu? People seem to love them.


   then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general decade long trend of websites becoming paywalled and hidden behind a login. The best and most useful data is often inaccessible to crawlers.

In the 2000s, everything was open because of the ad driven model. Then ad blockers, mobile subscription model, and the dominance of a few apps such as Instagram and Youtube sucking up all the ad revenue made having an open web unsustainable.

How many Hacker News style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is that HN doesn't need to make money. It's an ad for Y Combinator.

SEO only became an issue when all that's left for crawlers is SEO content instead of genuine content.


> The best and most useful data is often inaccessible to crawlers.

Interesting point.

> Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc.

Ironically, isn't one of the reasons some of those platforms started to require logins that they could track users and better sell their information to advertisers?

Obviously now there are other reasons as well - regulation, age verification etc.

Does this suggest that the AI/ad platforms need to tweak their economic model to share more of the revenue with content creators?


You can still use Reddit without logging in. In fact it's completely unlike Discord. Lots of Reddit discussions still show up in web search results.

Reddit does not expose all comments and posts to crawlers.

I seem to remember very few ads on the early web. Most sites I frequented were run by volunteers who paid out of their own pockets for webspace.

What year? 90s?

I remember ads everywhere in 2000s.


At a high level, I think there are multiple trends and angles to look at this.

AI slop PRs: GitHub accounts trying to increase their clout? Someone who genuinely needs that change in the PR? A competitor trying to slow down open source competition?

Maintainers: It's draining and demoralizing. Open source tooling hasn't caught up with coding agents.

Open source future: It seems quite clear that the era of everyone using the same open source version is ending. In the future, open source projects will have many forks. One fork is maintained by entity X. X can be a person, an AI, or a person using AI. Another is maintained by entity Y. Pick the one you trust most or the one that does what you need. Then there are projects where the AI uses an open source project but makes substantial modifications on the fly without bothering to open a PR for others.

Longer term thoughts:

- Maybe open source projects in a few years won't be code. They'll just be prompts, which are just ideas and specs.

- PRs will just be modifications to the prompt.


The future of work is fewer human team members and way more AI assistants.

I think companies will need fewer engineers but there will be more companies.

Now: 100 companies who employ 100 engineers each

What we are transitioning to: 1000 companies who employ 10 engineers each

What will happen in the future: 10,000 companies who employ 1 engineer each

Same number of engineers.
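The totals, spelled out (a trivial check):

  print(100 * 100)     # 10,000 engineers now
  print(1000 * 10)     # 10,000 engineers in the transition
  print(10000 * 1)     # 10,000 engineers in the future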

We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.


It will lead to a hollowing out of substance everywhere. The constant march toward more abstraction and simplicity will inevitably end with AI doing all the work and nobody understanding what is going on underneath, turning technology into magic again. We have seen people lose touch with how things work with every single move toward abstraction (machine code -> C -> Java -> JavaScript -> async/await -> ... -> LLM code generation), producing generations of devs who are more and more detached from the metal and living in a vastly simplified landscape, not understanding the trade-offs of the abstractions they are using. That leads to otherwise unsolvable problems in production that inevitably arise from the choices those abstractions made for them.

> nobody understanding what is going on underneath

I think many developers, especially ones who come from EE backgrounds, grossly overestimate the number of people needed who understand what is going on underneath.

“Going on underneath” is a lot of interesting and hard problems, ones that true hackers are attracted to, but I personally don’t think that it’s a good use of talented people to have 10s or 100s of thousands of people working on those problems.

Let the tech geniuses do genius work.

Meanwhile, there is a massive need for many millions of people who can solve business problems with tech abstractions. As an economy (national or global), supply is nowhere close to meeting demand in this category.


The point is that LLMs can only replicate what existed somewhere; they aren't able to invent new things. Once humans lose their edge, there won't be any AI-driven progress, just a remix of existing stuff. That was the hollowing out I mentioned. Obviously, even these days there is tech that looks like magic (EUV litho, etc.), but there are at least some people who understand how it all works.

And those companies will do what? Produce products in uber-saturated markets?

Or magically 9900 more products or markets will be created, all of them successful?


Go back to a time at the start of Youtube.

Now answer these questions:

And what will those people who make videos on Youtube do? Produce videos in uber-saturated categories?

Or magically 9900 more media channels will be created, all of them successful?


To follow this up, one of my favorite channels on Youtube is Outdoor Boys. It's just a father who made videos once a week doing outdoor things like camping with his family. He has amassed billions of views on his channel. Literally a one-man operation. He does all the filming, editing himself. No marketing. His channel got so popular that he had to quit to protect his family from fame.

Many large production companies in the 2000s would have been extremely happy with that many views. They would have laughed you out of the building if you told them a single person could ever produce compelling enough video content and get that many viewers.


Serious question: but aren't there thousands of other guys doing almost the same thing and getting almost no views? Even if there are lots of new channels, there aren't going to be lots of winners

But there are many YouTubers making a decent living doing it as a one-person shop or a small team. In the past, you needed a large team with a large budget and buy-in from TV/DVD/VHS distribution to get an audience.

https://alanspicer.com/what-percentage-of-youtubers-make-mon...

Claims 0.25% of channels makes any money at all. The amount that make a decent living is realistically even smaller, possibly < 0.1%.

To me the YouTube example seems to be the exact demonstration that markets saturate and market distribution is still a winner-takes-all kind of deal.


0.25% of channels, how many of them even want to make money?

0.25% of how many?

Average size of YouTube channel team that makes money vs TV channel team in the 2000s?


There are no such detailed numbers as far as I know. No platform (Twitch, YouTube, etc.) provides this information. Thinking cynically, one could assume it's because most people would realize it's one in a million who makes it.

Channels that make money consistently also have teams behind them. Sure, they are probably smaller than TV studios, but TV studios also do other jobs compared to YouTubers.

Anyway, these are the only numbers available. If there are numbers that show that masses of individuals can make a living in a market with as many competitors as YouTube, I am happy to look at them. Until then, I will observe what holds for almost everything: a small % takes the vast majority of resources.


YouTube is a platform, it's not a product. And in this case, created a new market. A market in which, by the way, still very few people (relative to those who try) are successful. In fact I wouldn't be surprised if the percentage would be much smaller than 10%.

A quick search leads to different answers, but https://alanspicer.com/what-percentage-of-youtubers-make-mon... suggests that 0.25% of all YouTube channels make any money (not good money, any money). Which means 99.75% earn $0.

Basically, I would flip the question and ask: if you could produce videos now very simply with AI, and so could 10,000 other people, how many of the new channels do you think would be successful? If anything, the YouTube example shows exactly that it doesn't matter that 1,000,000 people can now produce content with low overhead; just a handful of them will be successful, because the pool of companies willing to spend sponsorship money on channels and the man-hours of eyeballs on videos are both limited.

Talking about companies that just produce products: either you come up with something new (and create a new market), or you come up with something better (and take share of an existing market). Having 10,000 companies producing, say, digital editing software won't suddenly increase the number of people who need digital editing software by 10,000x. Which means that among those 10,000 there will be a few very good products that eat the whole market (as it is now), with the usual Pareto distribution.

The idea that many companies with smaller overhead can split the market evenly and survive might (and it's a big, hopeful might) work for companies selling local, physical products (i.e., splitting the market geographically), but for software products I cannot even imagine it happening.

New markets are created all the time, and it would be great if smaller companies (or co-ops) could take over those markets rather than big corporations, but I don't think the way market distribution happens will be affected. I don't see any reason why this should change with many more companies in the same market. I also don't think that 10,000 new companies will create 10,000 new markets, because that depends on ideas (and luck, and context, and regulation, etc.), not necessarily on the resources available.


0.25% works out to roughly 77,500 channels, by the way.

Now count how many TV stations there were in the 90s and 2000s.

This is just Youtube alone. What about streamers on Twitch? Youtube? Tiktokers? IG influencers?

Plenty of people are making money creating video content online.

The internet, ease and advancements in video editing tools, and cheap portable cameras all came together to allow millions of content creators instead of 100 TV channels.


77,500 channels which make any money. And plenty of those make only a handful of dollars per month. Also, that's 77k worldwide.

I am not going to deny that YouTube (and all social media) created new markets. But how is this not an argument that shows that when N people suddenly do some activity, only a tiny minority is successful and gains some market share?

If tomorrow a product that is made by 3 companies sees competition from 10,000 one-man operations, maybe you will have 30 different successful products, or 100. 9,900 of those 10,000 will still be out of luck.

YouTube is not an example of a market that, when exposed to a flood of players, distributes shares somewhat equally between those players, or that allows a significant number of them to survive on it. Nor is Twitch or any of the other platforms.


I don't understand the people who think more companies with fewer employees is a good thing.

I already feel spammed to death by desperate requests for my consumption as is.


Because: 1. One person companies are better at giving the wealth to the worker. 2. With thousands of companies the products can be more targeted and each product can serve fewer people and still be profitable

Then companies won't need to spam you to convince you that you need something you don't. Or that their product will help you in ways it can't.

One-person companies will not have a 100-person marketing team trying to inject ads into every corner of your life.


> One-person companies will not have a 100-person marketing team trying to inject ads into every corner of your life.

Because these one person companies will scale up everything with AI except marketing/advertising? Consider me skeptical.


> One person companies will not have a 100 person marketing team to inject ads into every corner of your life.

But they could have a thousand-agent swarm connected via MCP to everything within our field of vision to bury us with ads.

It's been a long time since I read "The Third Wave", and up until 2026 not much had reminded me of its "small is beautiful" and "rise of the prosumer" themes besides the content creator economy (which is arguably the worst thing to ever happen to humanity's information environment) and LLM agent discussions.


> the content creator economy

This is exactly one of the things I find maddening at the moment. "Everyone" (except my actual friends) on social media is trying to sell me something.

Eg: I like dogs. It's becoming increasingly hard to follow dog accounts without someone trying to sell me their sponsor's stuff.


> And those companies will do what? Produce products in uber-saturated markets?

> Or magically 9900 more products or markets will be created, all of them successful?

Yes. Products will become more tailored/bespoke rather than the one-size-fits-all approach that is pervasive now.


And if it's so cheap and bespoke, why buy it rather than make it in house? What about access to people with know-how of that product? If you use a product that only 4 other companies use, you can be sure you won't find any new hire who knows how to use it.

To me it seems the reality works in the opposite way. Among the many products built, some will be successful and will swallow the whole market, like now with basically any software or SaaS product.


> And if it's so cheap and bespoke, why buy it rather than make it in house?

0. Sure, some products will be made in house. That said, being able to spec a product well is a skill that is not as common as some folks seem to think. It also assumes that an org is large enough to have a good internal dev team, which is both rare and relatively expensive.

1. It sloughs off responsibility, which many folks want to do.

2. It allows for creation to be done not by committee and/or with less impact from internal politics.

3. It facilitates JIT product/tool development while minimizing costs.

That’s off the top of my head.

The realities of business often point to internal development not being ideal.


No? I don't see any indication that this would be a good idea. Or even looked for.

> No? I don't see any indication that this would be a good idea. Or even looked for.

Do you spec software for a variety of businesses?

I do.

It’s rare that one SaaS or software package does what the people paying want it to do. Either they have to customize internally (expensive and limited to larger orgs with a tech department) or Frankenstein a solution like Salesforce or WordPress with a lot of add ons. And even then, it’s not hitting all pain points.

Being able to spin up or modify an app cheaply and easily will be a massive boon for businesses.


> smaller companies

And large companies. The first half of my career was spent writing internal software for large companies. I believe it's still the case that the majority of software written is for internal software. AI will be a boon for these use cases as it will make it easier for every company big and small to have custom software for its exact use case(s).


> AI will be a boon for these use cases as it will make it easier for every company big and small to have custom software

Big cos often have the problem of defining the internal problems they’re trying to solve. Once identified they have to create organizational permission structures to allow the solutions. Then they need to stay on tasks long enough to build and use the software to solve the problem.

Only one of these steps is easily improved with AI.


I think a lot of companies are going to get burnt on these things. Sure it is easy to one-shot something which looks close, but then you are responsible for releasing/maintaining/improving.

Not to mention that you'd need to integrate it with lots of other vibe-coded products. It can be great for some use cases for sure, though, but identifying them can be tricky, as big orgs are pretty terrible at formulating what they need clearly.


And then potentially suffer from integration hell.

The benefit of using off the shelf software is that many of the integration problems get solved by other people. Heck you may not even know you have a problem and they may already have a solution.

Custom software on the other hand could just breed more demand for custom software. We gotta be careful how much custom stuff we do lest it get completely out of hand


yeah, I agree.

When Engineering Budget Managers see their AI bills rising, they will fire the bottom 5-10% every 6-12 months and increase the AI assistant budget for the high performers, giving them even more leverage.


In my case, over the last 3 years, every dev who left was not replaced. We are doing more than ever.

Our team shrunk by 50% but we are serving 200% more customers. Every time a dev left, we thought we were screwed. We just leveraged AI more and more. We are also serving our customers better, with higher retention rates. When we onboarded a customer with custom demands, we used to have meetings about the ROI. Now we just build the custom demands in the time it used to take to meet and discuss whether we should even do it.

Today, I maintain a few repos critical to the business without even knowing the programming language they are written in. The original developers left the company. All I know is what is supposed to go into the service and what is supposed to come out. When there is a bug, I ask the AI why. The AI almost always finds it. When I need to change something, I double and triple check the logic, and I know how to test the changes.

No, a normal person without a background in software engineering can't do this. That's why I still have a job. But how I spend my time as a software engineer has changed drastically and so has my productivity.

When a software dev says AI doesn't increase their productivity, it truly does feel like they're using it wrong or don't know how to use it.


> Today, I maintain a few repos critical to the business without even knowing the programming language they are written in. [...] No, a normal person without a background in software engineering can't do this.

Of course they can - if you don't know any of the tech-stack details (i.e. a "normal" user), why can't someone else who also doesn't know the tech-stack details replace you?

What magic sauce do you possess other than tech-stack chops?


In the future, they might be able to. Not yet though. I still have a job.

When a non software engineer can build a production app as well as I can, I know I won’t be working as a software engineer anymore. In that world, having great ideas, data, compute, and energy will be king.

I don’t think we will get there within the next 3-4 years. Beyond that, who knows.


Could you provide some details on your company, code base, etc? These are wild claims and don’t match the reality I’m seeing everywhere else.

How big is your team? How many customers? What’s your product? Can we see the code? How do you track defects? Etc.

Part of the reason I’m struggling with this is because we’d be seeing OpenAI, Anthropic, etc. plastering these case studies everywhere if they existed. Instead, I’m stuck using CC and all its poorly implemented warts.


Not OP, but I am seeing this in my current company.

Companies are charged per token, which means heavy AI users deliver more and stress budgets. They recently announced significant payroll cost reductions over the past ~3 years.

Those savings I think will partially be reclaimed by AI companies, enabling the high performers more ai model usage.


Now if only companies knew how to correctly assess actual impact and not perceived impact.

I don't think this is an AI problem. Even before AI, FAANG companies famously optimized promotions for perceived impact.

During the promo review, people will look at how many projects were done and the impact of those projects.


Acquisition rate, retention rate, revenue, profit margin?

By those metrics, Microsoft lost 20% of its value due to hopping on the AI coding assistance train.

I'm not saying it is the case, just making it apparent how unreliable it is to measure productivity by comparing what's happening at the lowest level in a company to its financials.


Microsoft is mostly a SaaS company. Many SaaS businesses have lost much more in market cap due to AI disruption.

Imagine if Microsoft didn't invest in AI? Maybe they'd be down 50% now.


This seems like a bot comment.

So is yours.

That means the system will collapse in the future. Right now, from a pool of people, some good programmers are made; the rest go into marketing, sales, agile, or other not really technical roles. When the initial crowd is gone, there will be no experienced users of AI. Crappy, inexperienced developers will make more crap without prior experience and the ability to judge design decisions. Basically, no seniors without juniors.

This implies that writing code by hand will remain the best way to create software.

The seniors today who have got to senior status by writing code manually will be different than seniors of tomorrow, who got to senior status using AI tools.

Maybe people will become more of generalists rather than specialists.


> The seniors today who have got to senior status by writing code manually will be different than seniors of tomorrow, who got to senior status using AI tools.

That’s putting it mildly. I think it’s going to be interesting to see what happens when an entire generation of software developers who’ve only ever known “just ask the LLM to do it” are unleashed on the world. I think these people will have close to no understanding of how computing works at a fundamental level. Sort of like the difference between Gen-X/millennial (and earlier) developers, who grew up having to interact with computers primarily through CLIs (e.g., DOS) and had to have at least some understanding of memory management, low-level programming, etc., versus the Gen-Z developers who’ve only ever known computers through extremely high level interfaces like iPads.


I barely know how assembly, CPUs, GPUs, compilers, and networking work. Yet software that I've designed and written has been used by hundreds of millions of people.

Sure, maybe you would have caught the bug if you wrote assembly instead of C. But the C programmer still released much better software than you, faster. By the time you shipped v1 in assembly, the C programmer had already iterated 100 times and found product-market fit.


Casey Muratori says that every programmer should understand how computers work and if you don't understand how computers work you can't be a good programmer.

I might not be a good programmer but I've been a very productive one.

Someone who is good at writing code isn't always good at making money.


The problem with much of this talk is that the receipts are always nowhere to be found.

But I don't see any receipts from the opposite side either.

You don't see any good software made by people who know how computers work?

You don't see any good software made by people who don't know how CPUs, GPUs, networking work at a deep level?

AI slop books made more money than JK Rowling, too.

Maybe in the future, yeah. Most likely not, because creating books is much easier now but total reading time can't increase nearly as fast. More books chasing the same amount of reading time.

To be fair, I wouldn't be entirely surprised if they were better than what she barfs onto a page. She's not exactly Tolkien.

> I think it’s going to be interesting to see what happens when an entire generation of software developers who’ve only ever known “just ask the LLM to do it” are unleashed on the world.

we only have to look today at how different software quality is compared to the "old days" - when compilers were not as good, and people wrote in assembly by hand.

Old software was fast and optimized. Hand-written assembly used minimal resources. Today, people write bloated Electron webapps packaged into a bundle.

And yet, look who is surviving in the competitive land of software darwinian natural selection?


Being a generalist is not automatically bad. I design digital high-speed hardware and write (probably crappy) Qt code. The thing is that I have the experience to judge my work. Greenhorns can't, and this will lead to the crapification of the whole industry. I often ask AI tools for advice. Sometimes it's very useful, sometimes it's complete hallucination. On average it definitely makes me a better developer. Given a rather abstract answer, I can derive an exact solution. But that comes from my previous experience. Without experience it's a macabre guessing game.

I think we were headed that way before LLMs came onto the scene.

LLMs just accelerated this trend.


By and large "AI assistant" is not a real thing. Everyone talks about it but no one can point you to one, because it doesn't exist (at least not in a form that any fair non-disingenuous reading of that term would imply). It's one big collective hallucination.

> I think companies will need fewer engineers but there will be more companies.

This would be strange, because all other technology development in history has taken things the exact opposite direction; larger companies that can do things on scale and outcompete smaller ones.


  This would be strange, because all other technology development in history has taken things the exact opposite direction; larger companies that can do things on scale and outcompete smaller ones.
I don't think this has always been true.

Youtube allowed many more small media production companies - sometimes just one person in their garage.

Shopify allowed many more small retailers.

Steam & cheap game engines allowed many more indie game developers instead of just a few big studios.

It likely depends on the stage of the tech development. I can see Youtube channels consolidating into a few very large channels. But today, there are far more media production companies than 30 years ago.


That’s an argument for giant companies at scale like Google/YouTube.

I don't think so. Are there more media production entities now or in the 2000s?
