The other day I iterated on 1k lines of React slop in 4h: changed table components twice, handled errors, loading widgets, modals, you name it. It’d easily take me a couple of days to get maybe 80% of that done by hand.
The result works OK, and nobody cares whether the code is good or bad. If it’s bad and there are bugs, it doesn’t matter: no humans will look at it anymore - Claude will remix the slop until it works, or a new model will rewrite the whole thing from scratch.
Realized while writing this that I should’ve added an extract of the requirements as a comment in the package’s index.ts, or maybe in a README.CURSOR.md.
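For what it’s worth, a hypothetical sketch of what that extract might look like (the specific requirement lines below are invented for illustration, based on the widgets mentioned above):

```md
# README.CURSOR.md
Requirements extract for this package (keep in the LLM's context):
- table component supports sorting and server-side pagination
- every fetch shows a loading widget; errors surface in a modal
- do not change the public exports of index.ts without updating this file
```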
My experience having Claude 3.5 Sonnet or Google Gemini 2.0 Exp-12-06 rewrite a complex function is that it slowly drifts from the original intent behind the code, and the more rewrites or refactoring passes, the more likely it is to do something other than what was originally intended.
At the absolute minimum this should require including a highly detailed function specification in the prompt context and running the output through a full unit test suite.
Would be great comedic relief for sure, since I’m mostly working in the backend mines, where LLM-friendly boilerplate is admittedly harder to come by.