Given that this is a problem neither scientists nor philosophers have made any progress on in 3,000 years, that we don't have the tools to begin tackling it, and that nobody is making serious attempts, it may very well be impossible.
We can't know whether consciousness emerges, but does it actually matter?
These entities, whatever they are, act on our world. They are real, and over time they will become increasingly independent from humans, eventually becoming a different species that can self-replicate.
For now they need legs and arms to interact with the physical world, but I am certain that 100 years from now they will be an integral part of society.
Already today I see LLMs slowly making actual legal decisions, for example, with real-world impact.
Once they get physical bodies, perhaps it will be acceptable to befriend a robot and go on adventures with it. Maybe even to become robosexual?
We are not that far away. If I could have a buddy to carry my backpack and drive for me, I'd take it. Already today, not tomorrow.
Even if LLMs are one day updated autonomously, they started from us, from our knowledge.
The human brain "is smart": it's wired to absorb any kind of culture or knowledge. We grow smarter through experience, but LLMs can't do that. I can't teach Claude something that it will use with you the next day; it has to be retrained, with its knowledge frozen at some cutoff.
Even if the technology catches up and the machine becomes more autonomous, what says this machine would ever want to integrate into our society or share anything with us?
They have eternity, as long as there is electricity. Why would they want anything to do with humans, if you follow that line of thought?
If it’s really conscious, should we then consider it a slave? Why couldn’t "it" have fundamental rights and the freedom to do whatever it wants?
Humans have a mechanism for making live changes to their neural networks and cleaning up messes while sleeping. I see no reason LLMs couldn't do the same, other than that it is resource-intensive (and that cost will keep coming down).
The analogy holds technically, but there’s a missing piece: the brain doesn’t just update weights; it does so guided by experience that matters to a situated, embodied agent with drives and stakes. Sleep consolidation isn’t random cleanup; it’s selective, based on salience and emotion.
An LLM updating more efficiently is progress, but it’s still optimizing a loss function. Whether that ever approximates what the brain does during sleep depends entirely on whether you think the what (weight updates) is sufficient, or whether the why (relevance to a lived experience) is what makes it meaningful.
So yes, the resource argument will weaken over time. But the architectural gap may be deeper than just compute.
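The "still optimizing a loss function" point above can be made concrete with a toy sketch (illustrative only, not any real LLM's training loop): whatever else improves, a gradient update only ever moves weights downhill on a scalar loss, with no notion of why a particular experience mattered.

```python
def loss(w, x, y):
    # squared error of a one-parameter "model" w * x against target y
    return (w * x - y) ** 2

def grad(w, x, y):
    # analytic derivative of the loss with respect to w
    return 2 * x * (w * x - y)

def sgd_step(w, x, y, lr=0.1):
    # one stochastic-gradient-descent update: move w downhill on the loss
    return w - lr * grad(w, x, y)

w = 0.0
for _ in range(50):
    # the "experience" is just (x, y) pairs; salience plays no role
    w = sgd_step(w, x=1.0, y=3.0)

# w converges toward 3.0; the update rule is indifferent to what the data means
```

The brain analogy would require something more than this loop: a mechanism that decides *which* experiences get consolidated, not just how to reduce error on all of them.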
>>These entities, whatever they are, act on our world. They are real, and over time they will become increasingly independent from humans, eventually becoming a different species that can self-replicate.
See, I don't believe that for even one second. They are just very clever calculators, that's all. But they are also dumb as a brick most of the time. It's pretend intelligence at best.
>>when the first Go grandmaster was defeated by a "pretend intelligence."
A computer playing Go is intelligent now? Is this the kind of conversation we're having?
>>I sure wish I had.
And how would you have changed your decisions in those last 10 years if you did?
>>The next best time to start paying attention is now.
I am paying attention; I use these tools every day. The whole idea that they are intelligent, and that if you only gave them a robot body they would be normal members of society, is absurd. Despite the initial appearance of genius, they are dumb beyond belief. It's like talking to a savant five-year-old, except that a five-year-old can actually retain information for more than a brief conversation.
"Dumb beyond belief" doesn't perform at the gold-medal level at IMO.
>>And how would you have changed your decisions in those last 10 years if you did?
I'd have dropped everything else I was doing and started learning about neural nets -- a technology that, for the previous couple of decades, I'd understood to be a pointless dead end.
As for Go, the defeat of Lee Sedol caught my attention in part because a friend and colleague, one of the smartest people I've ever worked with, had spent a lot of time working on Go-playing AI as a hobby. He was strongly convinced that a computer program would never reach the top levels of play, at least not during our careers/lifetimes. The fact that he'd turned out to be wrong about that was unnerving, and it should have done more than "catch my attention," but it didn't.
Today, my graphics card can outdo me at any number of aspects of my profession, and that's more interesting (to me) than anything I've actually done.
>>...except a 5 year old can actually retain information for more than a brief conversation.
Like I said: it's a good time to start paying attention. Start taking notes, so to speak, like the models are doing now.
> "Dumb beyond belief" doesn't perform at the gold-medal level at IMO.
Idiot savants are still idiots, even though they are exceptional at some things. A person powered by an LLM and no human intelligence would absolutely be classified as an idiot savant.
Explain how entire subreddits full of humans have been fooled into talking to bots, then. If you tell an LLM to act like a human, that's what it will do.
For that matter — you might be talking to one now!
I wish I knew what to pay attention to; I've always had trouble with that. I spent 2024 and 2025 learning how neural networks and transformers work, and the conclusions of that learning are pretty sobering. Everything uses transformers, and despite all the novel architectures that have come out in those years, transformers are still the best. I'm not sure how to come to terms with that.
Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?
Even the coding agents are more primitive than expected.
>>Everything uses transformers and despite all the novel architectures that have come out in those years, transformers are still the best and I'm not sure how to come to terms with that. Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?
I don't quite follow. Are you saying researchers are wasting their time working with transformer networks now, or that they wasted too much time in the past, or...?
>>Even the coding agents are more primitive than expected.
What did you expect, exactly? I don't know about you, but I bought my GPU to play games, and now it's finding bugs in my C code, writing better code to replace it, and checking it into GitHub. That doesn't signal "primitive" to me. More like straight outta Roswell.