Hacker News

I look at it like this: Yes, AI can write code. It can write it much faster than I can. Sometimes it can also write it better than I can.

But: programming languages, libraries, and abstractions are not going away. It is still possible (and might always be possible) to get deep into the weeds of Python or Rust or whatever to understand how those work and really harness them to their full potential, or develop them further. It just won't be _compulsory_ (in most industries) if your only goal is to trade lines of code for dollars in your bank account.




I mostly share your perspective, but I don't know if I would share your emphasis.

Lines of code for dollars used to be a trade businesses made with developers out of necessity, but soon it will only be economically viable to make that trade with AI providers. So not only will going deep in the weeds not be compulsory, understanding anything about any programming concept will become economically void (though not void of educational value, or enjoyment).

On the other hand, what that code does depends entirely on a particular understanding of the real world, which is indescribably complex (i.e. combinatorially explosive). This is what I truly care about, and the possibilities for the application and customization of software are infinite. The interface between the world and software will always involve a value decision that AI cannot have a monopoly over (it would be economically infeasible, no matter how cheap inference becomes). This means that as long as my passion is not within the machine, but is instead centered on the relationship between the machine and the world, I will never be out of a job.

And part of me thinks, "good riddance!". For all the good we've created, developers have also generated so much bullshit that it's honestly insane any software companies were ever successful in spite of it. The human politicking is probably the worst of it - think of the countless years of human life wasted in scrum ceremonies - but so much of the software we've created also sucks, and users hate it!

We used to be a proud culture of hackers, building miracles with minuscule resources, or at least that's what the greybeards here on HN like to whine about. They're right: we've squandered limitless cycles and uncountable exabytes on useless data. If there were a God of hackerdom, we are living in his Gomorrah, and he will strike us down with AI as punishment for these sins.


What makes you think that AI cannot become significantly better than humans at "understanding" and modelling the world? If the AI is always more likely to be right than you or me due to being able to take more variables/knowledge into account by default, then why ever listen to a human, or even to yourself when it comes to an economic decision?

My honest and rather pessimistic take is that in the long-term any craft that purely lives in the abstract is likely to be doomed.


It's not that it won't be better at understanding; it's that there are too many possibilities to understand. This is true for humans too, but I can use the output to make money in a particular scenario.

Take even one simple example: software applications on a smartwatch. How many dimensions of reality are relevant? Maybe I'm a busy person, so I need a personal assistant for my calendar. Maybe my wife needs access too. Maybe I'm a bird watcher and I'd like to track the birds I see. Maybe I'm a bird researcher and those observations need to integrate with my research... ad nauseam, forever.

AI will write all the code, and make all the meaningful decisions, but the backstop of the whole thing has to be some non-virtual reality with a paying user, otherwise there is no value to extract.

I personally only care about the outcome, I don't even really care if I understand how anything else works, or any of the decisions made. My dollars go in, working code comes out to suit me.


I agree with your overall perspective here. You need the human in the loop to ground the request/direction in a reality with human needs, but that's about it.

What I was getting at is that nothing stops you from asking AI what would be the next best smartwatch app to build, and based on all its aggregated knowledge and other inputs (e.g. search) it has, it can potentially make a better estimation than you or any human of a product that would sell.

Of course, whether that is actually true depends on how well its training data is able to model/mimic reality, and how grounded its inputs (e.g. internet) are. You can always help it a bit by steering it in the right direction, providing additional grounding. I was mainly wondering how long this "additional" guidance will remain a necessity, fearing that it won't be for as long as we think.


I agree with you. I think the "additional guidance" era we are in will be measured in single-digit years.

Good thinking on the relationship between machine and world. Very reassuring.



