Hacker News
My AI-driven identity crisis (phillips.codes)
60 points by wonger_ 4 months ago | 73 comments


“I have hopes for a utopian future where AI does everything better than humans, which allows us to spend our time poorly doing the things we are most excited about.”

This thought is quite common and widespread; I’ve heard it multiple times, and it always baffles me. The very idea that human beings would stop exploiting each other and just live a peaceful, content life with their AI helpers is hopelessly distant from the way I understand the world. I wish I were wrong, but human beings don’t exploit each other because we need to, as if it were an unfortunate necessity; we exploit each other because we want to and because we can. Even if AI robots solved most of our problems, we would never simply accept that and let others live in peace. Some human beings will always look for opportunities to rise above others, as long as they somehow can. I’d go even further and say that, if a future like that ever became actually feasible and predictable, lots of people currently in positions of power would fight very hard to keep it from happening.


I always answer that with this question: When was the last time that productivity gains translated directly into a quality of life increase for the workers?

Never, of course; you just have to do more work in less time, and the win materializes as lower costs and higher profits for the capital owners. Workers may get some crumbs that fall from the table as an unintended side effect.


Not never, but never in recent history. I'd say the cutoff point was when we moved most of our factories to cheaper countries and made our economies reliant on services instead of goods.

I'm sure a worker at the peak of the industrial revolution had a much worse time than I do now. But I'm equally sure my parents and grandparents had it much easier in terms of purchasing power, work/life balance, job stability, being able to afford a home on a single income, being able to afford kids, being able to retire at a somewhat reasonable age, &c.

Basically everything after the digital revolution disproportionately benefited a very small percentage of people, while previous advances benefited the masses (agriculture, trains/cars, factory automation, &c.). We got a lot of shiny new bells and whistles to regularly pump up the dopamine, but we lost a lot of basics.


https://en.wikipedia.org/wiki/Late_capitalism

> According to skeptics of the "late capitalism" idea, so far there just has not been any real evidence of:

> (1) long-term economic stagnation or prolonged negative economic growth in the advanced capitalist countries;

> (2) pervasive social decay and persistent cultural degeneration that just keeps getting worse and worse, and

> (3) pervasive and persistent rejection of capitalism and business culture by the majority of the population

(all three of the skeptic points seem ripe to be re-analyzed)

> For many Western Marxist scholars since that time, the historical epoch of late capitalism starts with the outbreak (or the end[9]) of World War II (1939–1945), and includes the post–World War II economic expansion, the world recession of the 1970s and early 1980s, the era of neoliberalism and globalization, the 2008 financial crisis and the aftermath in a multipolar world society. Particularly in the 1970s and 1980s, many economic and political analyses of late capitalism were published.

Late Capitalism (1973)

https://www.goodreads.com/book/show/931838.Late_Capitalism


> When was the last time that productivity gains translated directly into a quality of life increase for the workers?

It's easy to be cynical, but productivity gains in agriculture are the main reason why we have enough to eat. Less obviously, they led to a huge improvement in quality of life for workers across all levels of society. The effect played out over hundreds of years.

US farms produce more than enough food for the entire population with less than 2% of the work force. [0] Surpluses of capital as well as increases in available labor from improvements in agricultural productivity were among the many factors that enabled the industrial revolution. [1] The root causes of the English industrial revolution were many, but it's hard to escape the importance of agricultural productivity in the mix.

[0] https://www.ers.usda.gov/data-products/chart-gallery/chart-d...

[1] https://en.wikipedia.org/wiki/Industrial_Revolution#Causes


In Asia they are doing pretty well. The US for whatever reason has tended to allocate the gains to the top 1%.

Here's Kurzweil on things getting better https://www.youtube.com/watch?v=uEztHu4NHrs&t=376s


For a remedy to this flavour of pessimism I'd suggest you read Bregman's Humankind[1]. tldr; the current level of exploiting each other doesn't come naturally for people.

[1] https://en.wikipedia.org/wiki/Humankind:_A_Hopeful_History


I don’t agree this is pessimism; actually, I live a pretty content and optimistic life. However, my comfort and optimism come from being skeptical in a healthy way and knowing that human beings look out for themselves, and that I need to look out for myself and my family, keeping others in check in a healthy, respectful way. I don’t buy into this idea that human beings are supposed to be “larger than life” in order to live happily and in balance with each other.


> the current level of exploiting each other doesn't come naturally for people.

Agreed, it doesn't. Nothing about that fact seems reason for optimism, however. It's a gigantic leap to go from "the current level of exploiting each other doesn't come naturally for people" to "that level is going to decrease, or stop rising, any time soon".


It doesn’t, as long as we keep each other in check. I do believe it’s natural for humans to take advantage of opportunities that arise in their environment; if a human being sees that they can do better at the expense of others, then that seems to me a very natural response for an individual with a single, disconnected brain (as opposed to some kind of hive mind).

But don’t mistake “natural” for “good”. Actually, that is much more natural (in a wild sense) than having a complex society full of moral and philosophical constraints. I myself believe very strongly in ethics and try to be ethical as much as I can, but that doesn’t mean ethical behaviour comes “naturally”. If you can’t accept that not everyone will be ethical, and that some will act “savagely”, then you’re being naive and opening yourself up to, well, opportunities for exploitation.


Doesn’t it, though? Where else could it come from, but our own nature? Do we not live in an environment of our own creation, more and more each generation?

Edit - you know what, after a bit of reading, I do get the point being made. Thanks for the book reference, I’m going to place a hold at my library if they have it.


I think you may be misapprehending what some people think may be happening here.

The issue is not people exploiting one another to rise above each other. The issue is a few people exploiting AIs they own to keep themselves above everyone else, en masse.

They don't lose the need to rise above you; rather, staying above you is the entire point of the AIs they're creating. If you want to compete in the economy of the future, you will have no choice but to also exploit their AIs to try to rise above others, thereby helping the owning class rise further above you.

Why not make your own? Because you don't happen to have an army of AI researchers to create it for you. Why not use open source models? Again, you can, but you're betting those models will be better than commercial models. And right now the capability gap there is widening.

People in power wouldn't need to fight AIs to keep themselves in power. Rather, you yourself, by using the AIs they own to make a living, would more deeply entrench their already extant power.


You’re not wrong, but I was addressing a specific point the author made in a specific sentence. The idea that people will get some sort of universal salary and be left alone to just do some woodworking in their backyard (in a non-profitable way I mean) is to me absurd. It doesn’t matter if AI solves all the problems of humankind; if someone sees you getting a salary without “earning” it (whatever the hell that means), then you will be bothered, no question about it.


I’ve tried to use flagship models to produce long-ish technical writing content and the result has not been satisfactory.

The resulting content lacks consistency and coherency. The generated prose is rather breathless: in carefully articulated content, every sentence should have a place. Flagship models (Opus 4~) don’t seem to understand the value of a sentence.

I’ve tried to prompt-engineer this behavior away (one should carefully attend to each sentence: how is it contributing to the technical narrative in this section, and overall?), but I didn’t have much success.

I suspect this might be solved by research on grounding generation against world models: much of verifying “is this sentence correct here?” has to do with my sharing of a world model of some domain with my audience. I use that world model to debug my own writing.


I was born in 1990, quite late into the tail end of ordinary people writing letters by hand, reading (a single book/article) for hours on end, etc., and it's already evident that people outside of occupations that demand it just don't consume or produce as much (quality) writing as they used to.

My fear isn't that LLMs will fail to meet our standards, my fear is that LLMs will drag us down to a new low of homogenized, dull and lifeless form of writing.

Perhaps I'm biased, but it feels as if the few times I am impressed by someone's writing, the material is 20 years old or older, even if said author is still producing work.


I think you're accurately describing a "regression to the mean" phenomenon, which LLMs have transmitted to areas involving programmatic control through text.

I recently used an LLM to help scaffold a piece of technical writing. I showed it to several of my peers (sharp and accomplished PhD students), and they immediately identified the paragraphs (even sentences in captions!) written by an LLM. I had spent a good amount of time prompt engineering Opus 4 to try and produce a coherent piece of content aligned with a narrative which I had already developed, but it was still _immediately obvious_ what was not consistent ... it stood out like a sore thumb to experts.

By the time we finished the content, there was not a sentence of LLM-generated writing in the final product.

Possibly an incorrect extrapolation, but I think my experience here is the current status quo. Code is a bit easier for these systems, because the LLM can align against test suites, type systems, etc. But for writing ... you just can't trust the current capabilities to pass the sniff test by experts.

After all, how does one communicate "coherence" as a test which the LLM might align its generations against? That's why you need a shared world model (with your audience) -- and if we could produce a computational representation of such a thing, we might have a chance at more coherent / consistent technical writing generations.
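As a toy illustration of why that's hard: the closest thing to a "coherence test" I can picture is checking each sentence against the vocabulary the audience is assumed to share. The sketch below is my own construction, with a flat set of terms standing in for a real world model, so it only catches unshared jargon, nothing like real coherence:

```python
def flag_unshared_terms(sentences, domain_lexicon, shared_terms):
    """Report, per sentence, the domain terms the audience's (assumed)
    world model lacks. A flat term set is a very crude stand-in for a
    world model, but it makes "coherence as a test" concrete.

    domain_lexicon: words that count as technical vocabulary here.
    shared_terms: the subset the audience is assumed to know.
    """
    flagged = {}
    for i, sentence in enumerate(sentences):
        words = {w.strip(".,;:!?").lower() for w in sentence.split()}
        unshared = (words & domain_lexicon) - shared_terms
        if unshared:
            flagged[i] = sorted(unshared)
    return flagged
```

A generated draft would pass only if every sentence leans exclusively on terms the audience already holds; anything richer (causal consistency, narrative order) is exactly the part nobody knows how to encode yet.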


Maybe do a new prompt at paragraph level?
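One way to sketch that suggestion: split the draft into paragraphs and build one revision prompt per paragraph, restating the overall narrative each time so no single call loses the thread. The prompt wording is mine and untested, and the actual model call is left out:

```python
def paragraph_prompts(draft, narrative):
    """Build one revision prompt per paragraph of a draft. Each prompt
    repeats the overall narrative, since the model only sees one
    paragraph at a time. Illustrative wording, not a proven recipe."""
    paragraphs = [p.strip() for p in draft.split("\n\n") if p.strip()]
    prompts = []
    for i, para in enumerate(paragraphs, start=1):
        prompts.append(
            f"Overall narrative: {narrative}\n"
            f"Revise paragraph {i} of {len(paragraphs)} below. For each "
            f"sentence, state how it advances the narrative; cut any "
            f"sentence that does not.\n\n{para}"
        )
    return prompts  # each prompt would then go to the model in its own call
```

Whether the per-paragraph results stitch back into a coherent whole is, of course, the open question from the parent comment.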


Exactly this sort of thing makes it feel like there could be a "cliff" or "eternal winter of human publishing", where books like his are never again written by humans.

Often when tech comes out that does something better than people, it makes sense for people to stop doing it. But in the case of "books explaining things", AI only learned how to explain things by examining the existing corpus - and there won't be any more human-generated content to continue to learn and evolve from, so the explanatory skills of AI could wind up frozen in 2025.

An alternative would of course be that humans team up with AI to write better books of this sort, and are able to develop new and better ways of explaining things at a more rapid pace as a result.

A relatively recent example that sticks in my mind is how data visualization has improved. Documents from the second half of the 1900s are shockingly bad at data presentation, and the shock is due to how much the standard of practice has improved in the last few decades. AI probably wouldn't have figured this out on its own, but it is now able to train on many, many examples of good visualization.


Books like his used to be a huge thing. There was a wall of computer books in every book store. When the internet came, they mostly disappeared. Why would I pay a lot for a book if I can find more up-to-date knowledge about any given software topic on the internet for free? Recently I noticed they're making a comeback, in the form of a small shelf you can find if you try hard. But it reminds me of the comeback of record players. Books like his were already vintage before AI: a thing you buy not for the utility, but because you have nostalgia and too much money.


Fair, I do like reading long form content with a narrative arc for some non-fiction topics though. Books on architecture and meta skills are still valuable.


> Stack Overflow usage is in absolute free-fall.

I'm actually saddened by this, because it was a place that was foundational to my development as an engineer. I visited it once in the last month or so, when I had an AWS Lambda / Playwright issue that needed specific settings to solve, and Claude and Gemini gave me a mash of the answers. I'm not complaining, but I "grew up" with Stack.

Our space no longer looks like a pyramid; I'd liken it to a diamond. New grads will simply not be able to find jobs[0], and I'm worried for anyone entering the E in STEAM (can't speak for the others).

It's happening now and we're only at the tip of the iceberg. I have 2 young children and am uncertain about how to help them navigate the future.

The sword of efficiency doesn't care about my children. I mean, that's my whole job, right? To help people become efficient? It's just that the speed and scale are insane.

More questions than answers...

[0] https://www.nytimes.com/2025/08/10/technology/coding-ai-jobs...


My partner and I have basically decided not to have children at this point for the exact reason you stated. The rate of change at the moment is dizzying and I’m not sure that the future we’re heading toward is one that I’d want my kids to be part of.


It is dire and your conclusion is valid, but having children in the face of uncertainty allows you to influence the future. Even if it means sacrificing your children at the altar, it gives them the opportunity to influence the future too.


I understand your thoughts. I've had similar motivation problems about blogging since the release of ChatGPT. Feels like you are writing for a machine rather than readers. Definitely seen a decline in readers since December 2023 on older articles that previously had steady traffic for years.

Also, I just purchased LazyVim For Ambitious Developers. I've used the online edition a number of times in recent months. Thanks for your work!


You shouldn't feel a need to justify your existence. You don't owe anybody anything for the right to exist.


The world does not owe you the right to exist, either. Not under US-type capitalism.


I prefer to believe otherwise. In the words of the Desiderata¹

You are a child of the universe no less than the trees and the stars; you have a right to be here.

¹- https://www.desiderata.com/desiderata.html


US-type capitalism is a high-level concept, though. The right of an individual to exist - existence in effect being its own justification - is such a low-level concept that something so lofty as capitalism can't really have any bearing on the issue.

It’s not a question of whether capitalism allows our existence - it’s very obviously the other way around.


The US capitalist system / democracy does not try to guarantee the existence of its members? That's a dark take. It could do a much better job, but I think there's at least a minimal, some would say sufficient, effort made to support the existence of members of society.


> The world does not owe you the right to exist

I mean, that's just nature. Darwin, etc.

Maybe you should try the DPRK?


Hold up

Nature does give everyone the right to exist without justification.

I'm not sure what Darwin has to say on the matter. He doesn't strike me as prescriptive.


> Yet you participate in society. Curious!


There is a vast spectrum between US-style neoliberal late capitalism and the DPRK; maybe both extremes just need to die to stop this kind of stupid thought-terminating cliché once and for all.


I think we’re going to have to set bigger goals for ourselves. We’re all still figuring out what that means.


Phone numbers and addresses became useless knowledge. Not useless, but you know what I mean. We’ll shuffle more out so what’s useful can be learned and stored. And so on and so on, but the pace and scale of this one is huge.


Strangely enough, I suspect that if AI truly becomes prevalent, in about 5-10 years there will be a counterculture of people seeking things they strongly believe are human-made, tangible, and genuine. I believe folks will yearn for in-person shows and performances, seek out respected authors and artists, and get out of the AI slop bubble, at least for a time.

I don't think it will ever counter the change, but I suspect there will be some interesting developments in culture worldwide caused by this.


This can be compared with fine art (as against photography or digital art), theatre (as against film), handmade goods (as against factory-produced ones), etc. In all these cases, the originals occupy an expensive niche, usually beyond the reach of common folk.


You are probably right. That would track well with prior behavior.

I suppose it will also depend on how affordable/accessible these models will be.


Could be, but not for learning, honestly, and not for most things if we're being honest. If I want to learn Python, then I want to learn Python; I don't need the sweat of another human involved in the experience. Plus, AI will probably do a much better job catering to my needs than some book.

Sure, for some niche artwork and prostitution there will always be demand for human labor.


Had this thought too. Even came up with a tongue-in-cheek term for such a group: luddaites :). I believe it will be sooner than 5-10 years.


As an author who is currently under contract to finish a book, I'm a bit annoyed that AI really can't write for me. I'm busy, I really like writing, but writing a good book does involve serious time researching and exploring. I've tried to have AI help me through, but I've more often than not had to dive into source code and run experiments to make sure I understand what's going on. It's even a book on AI, so there would be no shame at all in using AI to write it.

> So what am I good for anymore?

And here I was thinking this question is why writers write at all. Who else would do something requiring so much work for so little reward but those who fundamentally think they aren't worth much? It's what unites us.

But I don't think I worry about being replaced, not because I'm irreplaceable, but because if I could be completely replaced I think that might be quite a delightful experience. Imagine all those people who need something from you satisfied. That's what being replaced would entail: not a single person demanding a single thing from you. But unfortunately no, I'm still needed here, annoyingly.


As for me, I've seen LLM-based AI produce bland summaries¹ for a few years. Occasionally they generate something funny². But they cannot yet write anything I would spend my time or money to read.

> I'm still needed here, annoyingly.

Indeed you are. Indeed you are.

¹- https://news.ycombinator.com/item?id=34647947

²- https://news.ycombinator.com/item?id=43168595


> But I don't think I worry about being replaced, not because I'm irreplaceable, but because if I could be completely replaced I think that might be quite a delightful experience.

I'd like a bit more of whatever it is you're taking :) Seriously, don't you need to know you create some value for the world? If we all stop contributing, why the hell would Zuckerberg let us live? We're ruining his earth, in his eyes, probably... I bet his AI is already telling him it's not great to let 8 billion people consume so many resources...


> If we all stop contributing why the hell would Zuckerberg let us live ?

Is this Zuckerberg in the room with us right now?

You need to seek therapy, seriously.


> You need to seek therapy, seriously.

I promise to seek therapy if you promise to try to understand sarcasm on the internet.


You'd be surprised; I see people on the internet nowadays who hold this stance seriously.


Well, people are exaggerating for sure, but do I trust Zuckerberg to have an enormous, almost infinite amount of power? Or Musk? No, I don't.


> And here I was thinking this question is why writers write at all.

As a job? Income. Like every other job we do in this insufferable world.


>Who else would do something requiring so much work for so little reward but those who fundamentally think they aren't worth much, it's what unites us.

This seems like the most dystopian statement: writers are taking pride in the fact that they are under-rewarded, basically exploited by the system, and you are taking this exploitation as a source of unity?

I understand that you didn't mean any harm with the statement, but I feel like it's true, and it just shows what a bloody dystopian nightmare we live in, man.

Exploitation has become the norm.


> Exploitation has become the norm.

This has been the case since the dawn of ages.


Yes. But the world has become way more interconnected yet it still feels like the world doesn't care about such exploitation though.

I said the same thing to myself after writing this comment, and the conclusion I reached was: the world has gotten so good at propaganda that even though we could change it for the better, nobody wants to, because that propaganda (and the algorithms in general) makes us worry about smaller things than large-scale change. Clippy came to my mind too; maybe it's a symbol of change. I might write a blog post about it some day, but I hope you get the idea.


CoRecursive's recent episode "Coding in the Red-Queen Era" about identity as a programmer and AI tools is very good. He talks about how to avoid falling into the trap of making yourself dumber.

https://corecursive.com/red-queen-coding/


The issue is calling it "AI" in the first place.

It's not, it's a statistical model of existing text. There's no "genAI", there is "genMS" - generative machine statistics.

When you look at it that way, it's obvious the models are built on enormous amounts of work done by people publishing to the internet. That work amounts to many orders of magnitude more hours than it took to produce the training algorithms.

As a result, most of the money people pay to access these models should go to the original authors.

And that even ignores the fact that a model trained on AGPL code should be licensed under AGPL as well as its output - even if no single training input can be identified in the output, it's quite straightforwardly _derived_ from enormous amounts of trainings data input and a tiny (barely relevant) bit of prompt input. It's _derivative_ work.


> It's _derivative_ work.

fwiw, I mostly agree with you (ai training stinks of some kind of infringement), but legal precedent is not favouring copyright holders at least for now.

In Bartz v. Anthropic and Kadrey v. Meta "judges have now held that copying works to train LLMs is “transformative” under the fair use doctrine" [1]

i.e. no infringement - bearing in mind this applies only in the US. The EU and the rest of the world are setting their own precedents.

Copyright can only be contested in the jurisdiction that the alleged infringement occurred, and so far it seems that fair use is holding up. I'm curious to watch how it all plays out.

It might end up similarly to Uber vs The World. They used their deep pockets to destabilise taxis globally and now that the law is catching up it doesn't matter any more - Uber already won.

[1] https://www.ropesgray.com/en/insights/alerts/2025/07/a-tale-...


> fwiw, I mostly agree with you (ai training stinks of some kind of infringement), but legal precedent is not favouring copyright holders at least for now.

I know. I am describing how it should be.

Copyright was designed in a time when concealing plagiarism was time-consuming. Now it's a cheap mechanical operation.

What I am afraid of is that this is being decided by people who don't have enough technical understanding and who might be swayed by everyone calling it "AI" and thinking there's some kind of intelligence behind it. After all, they call genMS images/sounds/videos "AI" too, which is obviously nonsense.


I don’t think copyright works that way.

Also, not sure what you mean by “statistics”.

If you mean that a parameter for a parameterized probability distribution is chosen in order to make the distribution align with a dataset, ok, that’s true.

That’s not generally what I think of when I hear “statistics” though?
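For what it's worth, even the most minimal instance of that description (a parameter chosen to make a distribution align with a dataset) is easy to write down. Here is a deliberately trivial example of my own, a Bernoulli parameter fitted by maximum likelihood:

```python
def fit_bernoulli(samples):
    """Maximum-likelihood fit of a Bernoulli parameter p to 0/1 samples:
    the p that maximizes the likelihood of the data is the sample mean."""
    return sum(samples) / len(samples)
```

LLM training is this idea scaled up: billions of parameters instead of one, chosen to make the modeled distribution over text align with the training corpus. Whether that deserves the word "statistics" or "intelligence" is the argument above.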


> I don’t think copyright works that way.

Maybe but it should - see sibling comment.

Statistics as in taking a large input and processing it into much fewer values which describe the input in some relevant ways (and allow reproducing it). Admittedly it's pretty informal.


I've been reading a lot of (human-written) books lately, and one thing this has made abundantly clear to me is that AI writing just doesn't stack up. For one, AI writing is often completely wrong about the details. But it also just tends to be bland and superficial. If you want a 5-minute summary of something, sure, it can do a passable job. But if I want something substantial and carefully thought out, I'll choose a book written by a human expert every time.

Maybe this will change at some point in the future, but for now there's no way I would substitute AI slop for a well-written book on a subject. These models are trained on human-written material anyway; why not just go straight to the source?


As a tech writer, I feel like I'm becoming a context curator: I'll still write, with AI being one of the readers. https://news.ycombinator.com/item?id=44837875


> So what am I good for anymore?

That question will haunt many over the next two decades, especially once tech gets good enough to replace most manual labour. Suddenly there will be a billion-plus people thinking exactly that.


Wow, what an enormous left side "header" ("sider"?). It takes 750 pixels out of 1920 on my screen, almost 40% of the site.


Yes, in the age of AI, making any money is getting increasingly hard.


Sell a course.


Didn't read the article, but I doubt AI was the driver.


[dead]


Beautiful comment, thanks


AI needs to be personalized to understand human assumptions and knowledge better.


The name of the game for highly creative people is pivot. Have you ever wished you had a clone, even an imperfect one, that you could just tell what to do so you can do other stuff? Do you have any ideas that would require you to hire people even to attempt, but whose cost and risk you could never afford? Did you ever want to try something else, but doing what you're best at took up the bulk of your time? Explore those ideas now with the use of AI. Broaden your horizons. Throw a lot of daring things at the AI wall and see what sticks for you. With AI-powered tools, there has never been a better time to be a highly creative person.


Read that comment in Dan Stevens's American accent and it makes a lot more sense!


> It’s not clear whether AI will take our programming jobs

I think it's safe to say it is pretty clear.

As an example, you can power 10 developers with the highest tier of Claude Code Max for a year under the price of a new developer. At this point, having plenty of personal experience with the tool, I'd pick the former option.

There, one less job for a developer.
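For what it's worth, the arithmetic behind that comparison, assuming the top Max tier at $200/month and a fully loaded developer cost of $150k/year (both figures are my assumptions, not from the parent comment):

```python
seats = 10
max_tier_monthly = 200        # assumed USD price of the top Claude Max tier
dev_cost_yearly = 150_000     # assumed fully loaded cost of one developer

tooling_yearly = seats * max_tier_monthly * 12
print(tooling_yearly)                    # 24000
print(tooling_yearly < dev_cost_yearly)  # True
```

Even with generous error bars on both assumed figures, ten seats of tooling stay well under one salary.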


I agree. It's getting tougher, and I don't know why you're being downvoted. However, so many people can now build new companies that we might have a good few years left, perhaps more than you'd think. It could be another decade of good work, which is enough for me personally.


I don't mind the downvotes. I get it - it is a hard pill to swallow, especially when so many identify with being a programmer.


It's both the identifying thing and of course the steady (and mostly - high) income. Could be quite a hit for many people to become economically dislocated just like that.


> It's both the identifying thing and of course the steady (and mostly - high) income.

It's the income for me. This career was my only ticket out of poverty; it actually saved me from an otherwise horrible life. Now I'm supposed to cheer that I might very well be replaced in the foreseeable future? It's the only line of work I'm skilled at or qualified to do, and I honestly don't know what awaits me if I lose it.



