
I iterated on ~1k lines of React slop in 4 hours the other day: changed table components twice, handled errors, loading widgets, modals, you name it. It would easily take me a couple of days to get maybe 80% of that done by hand.

The result works OK, and nobody cares whether the code is good or bad. If it's bad and there are bugs, it doesn't matter: no humans will look at it anymore. Claude will remix the slop until it works, or a new model will rewrite the whole thing from scratch.

While writing this I realized I should have put a summary of the requirements in a comment at the top of the package's index.ts, or maybe in a README.CURSOR.md.
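The idea above might look something like this sketch: a requirements block pinned at the top of index.ts so the model re-reads it on every edit pass. The specific requirements listed here are invented for illustration, not taken from the commenter's project.

```typescript
// index.ts — requirements summary kept at the top of the package entry point,
// so an LLM editing any file in the package has the intent in context.
// (All specifics below are hypothetical examples.)
//
// REQUIREMENTS (do not drift from these):
// - Table supports client-side sorting and pagination, 25 rows per page.
// - All fetch errors surface a retry toast, never a blank screen.
// - Loading states use the shared spinner widget, not ad-hoc CSS.
// - Modals trap focus and close on Escape.

export {};
```

A README.CURSOR.md would carry the same content as a standalone markdown file; the tradeoff is that a comment in index.ts travels with the code the model is already reading.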



My experience having Claude 3.5 Sonnet or Google Gemini 2.0 Exp-12-06 rewrite a complex function is that it slowly drifts from the original intent behind the code, and the more rewrites or refactorings it does, the more likely it is to produce something other than what was originally intended.

At the absolute minimum, this should require including a highly detailed function specification in the prompt context and running the output against a full unit test suite.


> Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away.

Lordy. Is this where software development is going over the next few years?


In that case we can look forward to literally nothing of any complexity or reliability being produced.


It's actually where we have been the whole time.


I'd pay to review one of your PRs. Maybe a representative one, with proof of AI usage.


It would be great comedic relief for sure, since I'm mostly working in the backend mines, where, admittedly, LLM-friendly boilerplate is harder to come by.

My defense is that Karpathy does the same thing; he admitted it himself in a tweet (https://x.com/karpathy/status/1886192184808149383). I know exactly what he means by it.



