
I really don't get it.

While letting the AI write some code can be cool and fascinating, I really can't understand how:

- write the prompt (and you need to be precise, and think through and express carefully what you have in mind)

- check/try the code

- repeat

is better than writing the code by myself. AI coding like this feels like a nightmare to me and it's 100x more exhausting.



For me, on small personal projects, I can get a project in about 4 hours to a point that would've taken about 40 before the new AI tools. At work it's a different story, due to the complexity of the code base and services. Using agents to code for me in those cases has 100% been a loop of iterating on something so often that I would've been better off with a more hands-on approach, rather than essentially just reviewing PRs written by AI.


I bet some people felt the same way when we collectively moved from assembly to compilers.


Yeah, except for the small fact that this time around you're trading a well-defined set of rules for pure chaos and randomness.


Yeah, I don't understand this comparison. I've programmed professionally in higher-level languages for years, never learned assembly, and never got stuck because the higher-level language was limited or doing something wrong.

Whenever I use an LLM I always need to review its output, because usually there is something not quite right. For context, I'm using VS Copilot, mostly in ask and agent mode, in a large brownfield project.


Exactly, that's the trade-off.

People keep drawing comparisons to the move from lower-level abstractions to higher-level programming languages, and those comparisons are simply false. The whole point of higher-level languages is to let people stop working with the lower-level stuff.

But with the way software engineers interact with LLMs, they aren't getting away from the code, because they still have to work with what comes out of the model to achieve their goal (writing and piecing together code to complete a project).


My career sat at the interface of hardware and software. We would often run into situations where the code produced by the compiler was not what we desired. This issue was particularly pronounced when we were transitioning some components from being written by hand in assembly to being generated by a compiler.

I think the parallels are clear for those of us who have been through this scenario.


In reality, the outcome doesn't appear to be the result of "pure chaos and randomness" if you ground your tools. Test cases and instructions do a fantastic job of keeping them focused and moving down the right path.

If I see an LLM consistently producing something I don't like, I'll either add the correct behavior to the prompt, or create a tool that tells it whether it messed up, and prompt it to call that tool after each major change.
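To make that concrete (just a sketch; the file name and the pytest/ruff commands are assumptions about the project, not anything tied to a particular agent), the "tool" can be as simple as a script that runs the tests and the linter and prints pass/fail, with the agent instructed to run it after each major change and not to move on until it passes:

    # check.py -- hypothetical validation tool the agent is told to run after each major change
    # (the test and lint commands below are assumptions; swap in whatever the project actually uses)
    import subprocess
    import sys

    CHECKS = [
        ["pytest", "-q"],        # assumes a pytest test suite
        ["ruff", "check", "."],  # assumes ruff is the project's linter
    ]

    failed = False
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failed = True
            print(f"FAIL: {' '.join(cmd)}")
            print(result.stdout + result.stderr)

    print("Fix the failures above before continuing." if failed else "All checks passed.")
    sys.exit(1 if failed else 0)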


In the previous scenario, programmers were still writing the code themselves. The compilers, if they were any good, generated deterministic code.

In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.


Compilers are (a) typically non-deterministic, and (b) produce different code from one version to the next, from one brand to the next, and from one set of flags to the next.


To some degree you're correct -- LLMs can be viewed as the kind of "sufficiently advanced" compiler we've always dreamed of. Our dreams didn't include the compiler lying to us though, so we have not achieved utopia.

LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, causing either an evolutionary advantage or a terminal disease.

This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.


It's not. And people are realizing that, which is why they're bringing back and reinventing aspects of software engineering for AI coding to make it more tolerable. People once questioned whether AI would replace all programming languages with natural language interfaces; now it looks like programming languages will be reinvented in the context of AI to make the natural language interface more tool-like.


It's a change in mindset. AI is like having your own junior developer. If you've never had direct reports you have to give detailed instructions to and whose code you have to validate, then you're right: it might end up more exhausting than just doing the work yourself.


It definitely feels like a move towards management, which is something I've avoided for my entire career.

It's a perfectly cromulent approach and skillset - but it's a wildly different one.


So basically what an engineering manager or product manager enjoys doing



