Your idea of there being competition for human relationships is super fascinating. In my own life, there are fun/easy relationships, and there are those which push me to think deeply and differently, for any number of reasons.
In that vein, doesn’t “competition for relationships” necessarily breed egocentrism above all else? The winning relationship will give you what you want, but not necessarily what is true…
In that vein, you might also consider that the commenters you’re replying to may be worth engaging with more deeply precisely because they’re presenting divergent views that make you uncomfortable.
Given how we’ve designed AI to date, and how you describe it as optimizing for each individual’s self-enjoyment (and it’s difficult to argue most people won’t choose that for themselves), it’s hard to see a world where AI can push productive conflict the way humans can.
Then again, I might just be a flawed human who doesn’t fully understand the point you are trying to make and is extrapolating from my own biases, flaws, experiences, and the limited sample size I have of your point of view.
The divergent views need to be backed by real reasoning; otherwise you're giving weight to an opinion just because it's different, not because it has actual value. I'll give you an example: I'd very likely get the same kind of haughty, slightly ego-bruised response if I proclaimed that I don't believe reading books has much value anymore (which, by the way, is something I actually believe). The average human would immediately respond in the typical, socially trained way: "Well, I suggest you start going to the library, reading more, and engaging with the material, because you clearly don't understand the value of reading." Such a response has almost no value. It comes from a biased position with no attempt to understand my position; they assume they're correct while spending no energy thinking about it. That's typical of humans, and AI is far superior here.
I also disagree that AI cannot push productive conflict. Surprisingly, the first thing AI was able to do very well was insults. Insults aren't productive conflict, of course, but it was something I noticed, and when I gave a voiced AI (ElevenLabs) a long prompt asking it to be critical, truth-seeking, and always considering how I might be wrong, I suddenly got a lot of pushback and an almost human-like investigation of the ideas I was proposing. It was still too shallow and unable to evolve, but it was giving me real pushback. You also have to remember that typical human criticism is drenched in ego, greed, and various self-benefit calculations. Genuinely constructive, professionally informed criticism is hard to get from humans too; it's not as if AI is in a bad spot even now. You basically have to pay somebody to get good human criticism, because it's tiring for a human: it's work, and it takes expertise. On average, people simply aren't doing it, or aren't doing it well.
I'm merely trying to see this whole AI situation as objectively as I can, and likewise I try to assess the value of humans as objectively as possible. Obviously humans have value, but many seem to vastly overestimate it. We've been at the top of the food chain, the strongest species on the planet, for so long that we can't even conceive of a mental model in which humans aren't inherently valuable. It's similar to how people can't imagine books not being inherently valuable: we were immersed for centuries in a system where books were the best way to get the highest-quality information. Now that has suddenly changed, and people can't grasp it; it's a thought non grata, simply an unwelcome thought.