
> Ito: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

> Obama: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Do we seriously think that it would be that easy? I think a "generalized" AI, if it were aware that a human could cut its power and saw that as a threat, would not be stopped by unplugging it. By the time you realized you needed to unplug it, it would already have convinced a human to help it spread, and it would have found other sources of power.



That's kind of handwavy, don't you think? How does an AI, presumably requiring specialized (and expensive) hardware, simply escape? Further, this requires humans to be easily hackable by the AI, which is not obviously going to be the case. Why would human cognition have a built-in flaw the AI could exploit to escape? Imagine a superintelligent person in a cage, a person as smart as an AI. No matter how clever he is, he's not going to be able to escape that cage given certain levels of precaution.

If you'd say the AI will be super-persuasive, persuasive enough to make humans behave irrationally, I'd say maybe, but then you could simply use already irrationally fearful humans as guards to prevent the AI from escaping.


I find the scenario in 'Avogadro Corp' pretty reasonable.


It sounds like Obama was making a joke.



