
Amazing response, thanks for taking the time. Concise, clear, and I don’t think I’ll be using that comparison again because of you. I see now how much more convincing mathematical models are than philosophical arguments in this context, and why that allows modern climate-change-believing scientists to dismiss this (potential, very weak, uncertain) cogsci consensus.

In this light, widespread acknowledgement of x-risk will only come once we have a statistical model that shows it will happen. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment”: a grim but useful line of work.

FAKE_EDIT: In fact, after looking it up, it sorta is! After a few minutes of skimming, I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…


