By that standard I cannot see how any reasonable person could justify building anything at all; most of us, not being evil bastards, have imaginations which will simply fail to suggest such uses.
I think the people who created smartphones and social media had the best of intentions, but the resulting effects on mental health have been profound.
At the same time, it is hard to imagine someone abandoning an idea over vague negative effects that haven't yet materialized. And a lot of money incentivizes betting on many new ideas to see what takes off.
So it's like, if we uncover the next transformative technology that we know little about the future effects of, we just have to eat the cost of proliferating it everywhere before countermeasures can be figured out, if they can be created at all?
Sometimes I think the ease of virality in software could be a Great Filter. If not something far-fetched like human extinction, then the Great Filter of human isolation, or of lasting intergenerational conflict, or something else profound but not totally catastrophic. Not only is new tech too tempting to spontaneously put down, but it's nearly impossible to know when to put it down. Even if information overload and the like had been hypothesized about the way AI is starting to be today, I think we would still have been unable to leave social media uninvented, because nobody had tried it and witnessed it fail yet. The Great Filter comes in when certain failures can only be witnessed once.
Won’t your new operating system have security issues that other operating systems don’t? How do you propose accounting for the additional harm you’re bringing into the world?
He's not saying don't build anything. He's saying think about the potential for misuse (and implying, take reasonable steps to prevent it). This seems completely sensible to me.
I am aware I cannot think of every bad way a service could be misused. I am also aware that there is always a way to misuse a service. Therefore, in order to prevent a service I build from being misused, I must not build anything.
Obviously that conclusion is wrong, so one of the premises must be wrong. Specifically, liability should fall on the abuser, not on the service.