
How long until “the LLM did it” is just as effective as “AWS is down, not my fault”?


Never, because the only reason that works with Amazon is that everyone is down at the exact same time.


Everyone will suffer from slop code at the same time.


Yeah, but that's very different from an AWS outage. Everyone's website being down for a day every year or two is very hard for a competitor to take advantage of. That's not true for software that is just terrible all the time.


This, to me, is the point: LLMs can't be responsible for things. Responsibility sits with a human.


Why can LLMs not be responsible for things? (genuine question - I'm not certain myself).


Because an LLM has no skin in the game: it can't be punished and can't be rewarded for succeeding. Its reputation, career, and dignity are nonexistent.


On the contrary - the LLM has had its own version of "skin in the game" through the whole of its training. Reinforcement learning is nothing but that. Why is that less real than putting a person in prison? Is it because of the LLM itself, or because you don't trust the people selling it to you?


Are you claiming that LLMs are... sentient? Bold claim, Taylor.


This doesn't seem to have stopped anyone before.


Stopped anyone from doing what? Assigning responsibility to someone with nothing to lose, no dignity or pride, and immune from financial or social injury?


If you’re just a gladhander for an algorithm, what are you really needed for?



