I prefer to retain some hope that our civilization has a future and that humans or at least human values and preferences have some place in that future civilization.
And most people who think AI "progress" is so dangerous that it must be stopped before it is too late hold wide confidence intervals, spanning at least a couple of decades rather than just a few years, as to when it definitely becomes too late.