
I keep coming back to the conclusion that people saying this are either being dishonest or not dealing with anything large.

There are some problems that you can't just "make smaller".

Secondly, I write VERY strongly typed code, commented && documented well. I build lots of "micro-pkgs" in larger monorepos. My functions are pretty modular, have tests and are <100-150 lines.
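
To give a concrete sense of the scale I mean, a typical function in one of these micro-pkgs looks something like this (the function itself is made up, purely to illustrate the size, typing, and documentation style):

    /** Splits a list of items into pages of at most `pageSize` entries. */
    export function paginate<T>(items: readonly T[], pageSize: number): T[][] {
      if (!Number.isInteger(pageSize) || pageSize <= 0) {
        throw new RangeError(`pageSize must be a positive integer, got ${pageSize}`);
      }
      const pages: T[][] = [];
      for (let i = 0; i < items.length; i += pageSize) {
        pages.push(items.slice(i, i + pageSize));
      }
      return pages;
    }

Small, typed, documented, trivially testable. This is about as LLM-friendly as a codebase gets.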

No matter how many of the techniques I try, and even though my baseline fits well into LLM workflows, it doesn't change the fact that it cannot one-shot anything over 1-2k lines of code. Sure, we can go back and forth with the linter until it pumps out something that will compile. But that takes a while, and in that time I could have used something like auto-complete / Copilot to write the boilerplate and filled it in myself faster than it takes the agent to reason about a large context.

Then, if it does eventually get something "complex" to compile (after burning a ton of your credits/money), oftentimes it will have taken a shortcut to get there and doesn't actually do what you wanted. Sometimes I can refactor the result into something usable faster than writing it myself. But 8/10 times I waste two hours paying an LLM to gaslight me, then throw out all of its code and just write it myself in 1.5 hours.

I can break a 1-2k line task down into smaller prompt-sized tasks too, but sorry, I didn't learn to program to become a project manager for an "Artificially Intelligent" machine.


