
I'm already wary of how much trust we put into programs without formal proofs; it's a bit troubling that formally proving it won't fire at kids with toy guns is essentially intractable.


Why not use non-lethal force in these guns that are used in public places? Sure, nobody wants their child hit with a rubber pellet or knocked out with a tranquilizer for holding up a water gun (just random thoughts, there could be many possibilities), but if the miss rate were small enough and the consequences of a miss were small (non-lethal, non-life-threatening), they could definitely be used to save lives.


That would be a simple problem to solve. Toy guns don't fire projectiles that travel in a straight line at roughly 1,700 mph (about 760 m/s, a typical rifle muzzle velocity). One could build a system that tracks the velocity of every object moving through a confined space and only engages the source of an object it determines to be a traveling bullet. Additionally, the system wouldn't have to use deadly force: it could focus on disabling the suspect with a stun gun or on destroying the actual weapon the shooter is firing.
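
For concreteness, here is a minimal sketch of that speed-threshold idea in Python. The (t, x, y, z) track format, the function names, and the threshold value are all hypothetical assumptions for illustration, not any real tracking system's API:

    import math

    # Hypothetical threshold: far above any thrown object (~45 m/s for a
    # fastball) but below even slow handgun rounds (~250 m/s).
    BULLET_SPEED_THRESHOLD_MPS = 200.0

    def speed_mps(p1, p2):
        """Straight-line speed between two (t, x, y, z) samples, in m/s."""
        t1, *a = p1
        t2, *b = p2
        return math.dist(a, b) / (t2 - t1)

    def is_likely_bullet(track):
        """Classify a track (a time-ordered list of (t, x, y, z) samples)
        as a probable bullet if every sample-to-sample speed exceeds the
        threshold. A real system would also check trajectory straightness."""
        if len(track) < 2:
            return False
        speeds = [speed_mps(track[i], track[i + 1])
                  for i in range(len(track) - 1)]
        return min(speeds) > BULLET_SPEED_THRESHOLD_MPS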


That's true. Something like a stabbing would be harder to detect, though; it could look almost as benign as a close handshake.


As if we have proofs that humans won't do such a thing? A general AI would be able to reason as well as a human.


Problem is, we are pretty good at picking out problematic people. A general AI might reason as well as a human, but if they want to be able to fix bugs, they need an option for reprogramming it, so part of my paranoia is that it would be like a human who is very prone to suggestion.



