> or, best case, in 2000 years, the entire observable universe will have been consumed for energy
You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
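A back-of-the-envelope check of that distance claim; the galactic figures below are rough approximations of commonly cited values, not numbers from either comment:

```python
# Sanity check: light-travel distance in 2,000 years vs. the Milky Way's
# dimensions. Disk figures are approximate literature values.
travel_ly = 2000            # 2,000 years at lightspeed = 2,000 light years
thick_disk_ly = 3000        # approx. scale height of the thick disk
disk_diameter_ly = 100_000  # approx. diameter of the stellar disk

print(travel_ly < thick_disk_ly)     # True: still inside the thick disk
print(travel_ly / disk_diameter_ly)  # 0.02: a tiny fraction of the galaxy crossed
```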
> The doomers have also been funded to the tune of half a billion dollars and counting.

I've never heard such a claim. LessWrong.com has funding more like a few million: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc
> If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
The political capital to ban it worldwide and enforce the ban globally with airstrikes — what Yudkowsky talked about was "bombing" in the sense of a B2, not Ted Kaczynski — is incompatible with direct action of that kind.
And that's even if such direct action worked. They're familiar with the Luddites breaking looms, and look how well that worked at stopping the industrialisation of textile production. Or the communist revolutions, which promised a great future and actually took over a few governments, yet never delivered the promised utopia. Even more recently, I've not heard a single person suggest that the American healthcare system might actually change as a result of that CEO getting shot.
But also, you have a bad sense of scale if you think "half a billion dollars" would be enough for direct attacks. Police forces get to arrest people for relatively little because "you and whose army?" has an obvious answer. The 9/11 attacks may have killed a lot of people on the cheap, but most of the victims were physically in the same location, whereas the people building AI are distributed across offices in several different countries: the USA (obviously), Switzerland (including OpenAI and Google), the UK (Google, Apple, and I think Stability AI), Canada (Stability AI, judging by their jobs page), China (including Alibaba and at least 43 others), and who knows where all the remote workers are.
Doing what you hypothesise about would require a huge, global conspiracy — one not only exceeding what al-Qaeda was capable of, but significantly in excess of what's available to either the Russian or Ukrainian governments in their current war.
Also:
> After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
You presume they know. They don't, and they can't, because some of the people who will soon begin working on AI have not yet even finished their degrees.
If you take Altman's timeline of "thousands of days", plural, then some will not yet have even gotten as far as deciding which degree to study.
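For scale, here's "thousands of days" read literally; the 2,000-day lower bound is my assumption from "thousands", plural, not a figure Altman gave:

```python
# "Thousands of days", plural, is at least 2,000 days.
days = 2000
years = days / 365.25  # convert using the average (Julian) year length

print(years)  # ≈ 5.5 years: time enough to choose, start, and finish a degree
```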
I somehow accidentally made you think that I was trying to have a debate about doomers, but I wasn't, which is why I prefixed it with "fwiw" (for what it's worth: I'm a random on the internet, so my words aren't worth anything, certainly not worth debating at length). Sorry if I misrepresented my position. To be clear, I have no intense intellectual or emotional investment in doomer ideas, nor in criticism of them.
Anyway,
> You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
Here's what Arthur Breitman wrote[^0] so you can take it up with him, not me:
"
1) [Energy] on planet is more valuable because more immediately accessible.
2) Humans can build AI that can use energy off-planet so, by extension, we are potential consumers of those resources.
3) The total power of all the stars of the observable universe is about 2 × 10^49 W. We consume about 2 × 10^13 W (excluding all biomass solar consumption!). If consumption increases by just 4% a year, there's room for only about 2000 years of growth.
"
About funding:
>> The doomers have also been funded to the tune of half a billion dollars and counting.
> I've never heard such a claim. LessWrong.com has funding more like a few million
"
A young nonprofit [The Future of Life Institute] pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations.
"