> We might find (imo probably will find) that a human-free military is more effective
The premise of movies from Dr Strangelove to War Games: a military consisting of an array of automatically launched nuclear missiles.
The worry I have is not so much the idea of an AI going entirely rogue against humans as the much more mundane one of it being weaponized by humans against other humans. The desire to do that is so obvious and so strong, whether it's autonomous weapons or trying to replace all art with slop or doing AI-redlining to keep out "undesirables". It's just that that maps onto existing battle lines, which "apolitical" AI bros (both pro and anti "safety") don't want to engage with.
(We all understand that a hypothetical conscious AI would (a) have politics of its own and (b) be fairly unrecognizable to human politics, except being linked to whatever the AI deemed to be its self-interest, yes?)
John Henry died immediately after winning his competition. He's like a 19th century Kasparov or Lee Sedol, a notable domino of human superiority falling forever.
Before Kasparov was beaten, he was the best chess player.
Then we saw human-AI teams, "centaurs", which beat any AI and any human.
Now the best chess AI are only held back by humans.
We don't know if humans augmented by any given AI, general or special-purpose, will generally beat humans who just blindly listen to AI, but we do know it's not always worth having a human in the loop.
It sounds like you don’t understand what is claimed.
There is an often-repeated claim that while the best AI has beaten the best human chess player, a combined human/AI player beats the purely AI player. The idea is that an AI and a human collaborating will play better chess than the AI alone. This arrangement (an AI and a human collaborating to play as one) is often called a “centaur”, akin to the mythical horse/human hybrid.
The sentence you asked about, “Now the best chess AI are only held back by humans”, claims that these “centaurs” are no longer better players than the AI alone. That is, the addition of a human meddling with the thinking or decision-making of the AI makes it play worse chess than if the human were not present.
Sure, humans built the systems and they are interested in the results. Yes, it is a human endeavour. That is not what the claim disputes. It disputes that a human meddling with the AI mid-game can improve the outcomes.
“According to researcher Scott Reynolds Nelson, the actual John Henry was born in 1848 in New Jersey and died of silicosis and not due to exhaustion of work.[4]”
John Henry would like a word. https://en.wikipedia.org/wiki/John_Henry_(folklore)
> In the long term we might plausibly find an economic equilibrium where humans are not worth feeding
https://en.wikipedia.org/wiki/Holodomor