
The best way to lead is by example. Thank you, Googlers.

The choice not to accept business is a hard one. I've recently turned down precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.

These stated principles are very much in line with those that I've chosen; a technology's primary purpose and intent must be for non-offensive and non-surveillance purposes.

We should have a lot of respect for a company's clear declaration of work which it will not do.



That's a pity. As someone in stereo machine vision, metrology is something we really, really need for a huge range of applications. That said, if it's feeding an autonomous armed targeting system, I agree. We really need to decide as a species that we want to continue being a species, and realize that solving problems with each other by force is coming to an end. I'm not speaking from a pacifist viewpoint, just a practical one: there's pretty much no way we're not going to continue evolving vision systems in all their hyperspectral glory, and they can do a lot of good. We just cannot arm them. Ever. And surveillance is a complex question; there are enough positives it can offer not to rule it out a priori.


Russia reportedly has armed robots capable of operating autonomously and they preemptively refused to honor any ban, and other nations are developing their own.

As terrifying as the prospect is, it's already happening.


Yes. I can only refuse to play and hope the Russians et al. come to their senses, the United Nations starts to work better, etc. If we go down that road, it will be the end of most of us, if not all of us. We really have no chance against a machine that's decided to shoot us. As a species we need to have zero tolerance for it: we won't be rendered extinct by snazzy terminators, we'll be killed by the 21st-century version of a Victorian steam loom. It's truly a horrible problem, because the solution can't be creating autonomous systems to kill other autonomous systems. The rest of the distinction is just data. The only way to win is not to play.


That’s the problem with mutually assured destruction weapons (which AI bots most surely qualify as). If you accept as a given that at least one faction is building them, then the other factions’ only choices are: 1. Build your own and pray for a Cold War stalemate and 2. Don’t build your own and guarantee you’ll be destroyed.


MAD was based on the concept of the 'Nuclear Club': the barrier to entry was the mass defect ratio, which you'd have to calculate on your own, and then you'd confront nation-state-level expenses of turning that knowledge into practical engineering. It had an inherent cap on the number of possible players. Not saying it was nice; it was just manageable.

The barriers to entry here are much lower. For smaller players, lethal drone prices would be in the several-hundred-dollar range with a six-digit investment in infrastructure, and I'd have full confidence in American defense contractors to drive that price much lower in order to protect their margins; at scale, they'll mess up their inventory control. The only choice is to recognize as a species that this cannot be done, or it will be done. This equation is not about the few agreeing not to kill the many; it is about the many realizing that this is a road to an extinction-level event. I don't mean to be dramatic, but I have a system I could make kill if I wanted to, and I paid for the whole thing on credit cards. It's too easy and too efficient.

The number of players can't be counted on one hand; it's a tech lone individuals could deploy. If they do, then others will, you net an exponential growth rate in deployment, and then the game is over. At the end of the day, all of the pieces are already here: a motivated individual can do a lot of damage, and several motivated individuals or groups can get into a squabble and the damage rises exponentially. It is simply not an avenue we can accept. The limit to entry is not financial, it is not technical; it can only be that we must not. I'm pleased Google has defined its ethics as a company; I just argue that in this case we need to define our ethics as a species. We cannot build these, because if we do they will get loose, they will be used, and it will be a tragedy. We must all agree that arming autonomous machines is beyond the pale, for anyone, for any reason. As soon as one is built, your destruction is guaranteed, regardless of whether you too build one.


I guess my point is (and it looks like we agree): good luck getting all billion+ people on the planet with a credit card to agree that building a kill bot is a bad idea. All it takes is for one of those people to disagree, build them, and then we all must have them.


>All it takes is for one of those people to disagree, build them, and then we all must have them.

A killbot is (more or less) a mobile booby trap. If we have a problem with madmen leaving booby traps around, we can't solve that by laying more booby traps.


This is a bit dramatic though because these robots can't reload themselves. Automated targeting for a single area is miles away from planning strategy, deploying battle groups, etc.


> This is a bit dramatic though because these robots can't reload themselves.

Neither can mines, and look at the decades of devastation those have caused. Now imagine a minefield where the mines get up and chase after you.


I just love hacker news. One of the last bastions of people who think on the internet. This comment is succinct, dead on the money, and absolutely terrifying. My hat is off to you, I had never managed to formulate the risk so concisely. :-) Keep on doing this, and you help keep us all honest.


Truly the stuff of nightmares. Makes me think about that episode of Black Mirror with the Big Dog (DARPA) inspired surveillance robot...


Not dramatic. The robots are not expected to reload: one robot, one kill, simple as that. Mass-produced plastic, lots of robots, no intention whatsoever of reusability. The risk is that this means you dump thousands to millions of these stupid little drones; many make mistakes, and many others take naps and then wake up and do more damage when everyone thought it was all over. These are disposable killing machines, not Reapers. Furthermore, at today's prices, a cloud infrastructure and a million dollars can net you 10K lethal drones. There's no finesse, just brute force.


None of that stuff sounds impossible, though. Hard, sure, but well within the bounds of possibility.


> This is a bit dramatic though because these robots can't reload themselves

Yet...


Seriously, I've already got three ideas cooking on how that could be done. That's essentially the same skill as landing.


Russia didn't say that they refused to honor any ban, they said they refused to honor any ban that was based on trust. Their point was that, unlike nuclear weapons, these types of devices can be manufactured in secret or can be manufactured to be quickly convertible into fully autonomous devices.

I'm far more concerned about the hackability of civilian autonomous systems than I am about Russia's killbots. If Russia wants to end the world, it already can, and paranoid militaries make more secure cyber systems than random internet-connected cars or planes.


Agreed regarding surveillance. In my book, working on Landsat, JWST, or LISA = awesome.

I will only direct expertise toward an imaging system like Keyhole or targeting/guidance systems if our society faces a clear and acute foreign threat.


What about street-level city/town surveillance? Of course all the cop shops want to feed all their wants/warrants into it. I have no problem with this if there is a legal framework that supports it. More importantly, there are toddlers that get loose from their mothers because they are fundamentally greased pigs, and there are older members of our society who may be prone to wandering off in a daze; the ability to locate them within a very few minutes would be a good thing indeed. My own take is that I will look, I will try to recognize, but I will NEVER target. The machine is too good; sacks of meat like us have no prayer. That said, I've talked to a couple of cop shops, and my position is that I will work with them when there is a framework that defines a legal basis for that level of tracking of people who clearly wish not to be tracked, even if they are bad actors. It's not a question of how far away you are; it's a question of what you will track, why you will track, and what structure our society has put in place to get the benefits, not the risks. Looping back to the main point of the post: looking is OK, tracking requires societal governance, and any autonomous lethal capability, never mind action, should be absolutely forbidden, not just by law, but because we don't all want to die.


Turning away will slow things down but AI for military applications will happen. Someone will fill the void.


That doesn't make it ok to be the one to fill it.


On a geopolitical level, it does. It's far better for the world that the United States developed atomic weapons before either Germany or the Soviets did.


It's a good point, and if anybody thinks an arms race isn't already happening then they are very naive. While we're debating the morals of this, the Chinese, Russians and many others are trying to make it happen. For all we know, they may have already succeeded, and we're discussing the morals of it while it's already in full production a few thousand miles away.

The relevant research papers, knowledge and skills are widely available across the world. There are some advanced courses at Chinese universities right now that can only be seen as 'AI for military'.


And I can almost guarantee that, unlike the US, the Russian and Chinese militarized AI programs are not optimized for minimizing civilian casualties.


Really? Looking at drone strike casualties, you'd have no clue how much the US even tries.

It's funny, because I haven't heard of large-scale Chinese or Russian drones flying over other countries targeting terrorists but ending up murdering children on more than one occasion.

Perhaps these technologies can be just evil and the US is the only country powerful enough to get away with using them.

I really have no stomach for such a vapid excuse, and I cannot fathom how so many people fall for it.


If you ever end up in a war with China, you'll be seeing plenty of drones. Obviously China doesn't use drones for killing terrorists - China doesn't kill terrorists because terrorists have no way to get into China in the first place.


So if China does X, the US can do it too; also, if China could do X but we do not know for sure, then the US can also do X.

By this logic, the US can do anything, with the exception of the things you are 100% sure China (or insert another country here) is not doing and will not do.

This means surveillance, killer robots, black magic, genetically enhanced humans, and illegal experiments and procedures are all valid tools for the US, because "what if we have a war with China, we must have the same tools as them."


Not so; nobody forces anybody to compete in an arms race. My point is that there is an arms race, and inventions such as AI for war are part of that arms race and are being developed. These are very easy inventions too, given the widely available and extremely powerful software and hardware. These are facts. No amount of upsetting moral arguments makes the arms race go away.

The question is: should America compete in the arms race? I don't know. But there are big consequences either way.


Sure, you are not forced; I said "CAN".

Say the US wants to spend a lot of money on some black-magic consultant who could assassinate at a distance; you can justify it by launching a rumor that China does it too, or probably does, or will.

So you throw away any moral discussion by blaming China: they do it, so we have no choice.


Hmm, makes sense. I see what you mean from the position of, say, a general in the US Army tasked with this stuff: if someone says China can do X, his bosses are going to ask why America can't do X too. And therefore, to force America into doing X, you just need reliable-sounding misinformation proving China is doing X, and then moral arguments are swept aside.

However, that aside, AI is very big in China right now, and they're using it for numerous applications, with thousands of students going through Chinese universities being taught how to handle this stuff. While the same doesn't apply to niche interests like genetically enhanced humans (who is working on that, really?), something like AI, with thousands of capable researchers and engineers, is a different story.


Sure, but I would like to see the real reason: we want drones to strike in third-world countries. It's not like China will send drones into the US.

People don't like war, so it is natural that some people won't want to use their talent for making weapons.


They have plenty of terrorists and frequent attacks, but also no free press, so you do not get to know much about it...


I would really love it if a single person who derides the US for their drone strike casualties would actually back their position up with some data.

At a minimum, I would like to know the civilian and non-civilian casualty rates of drone strikes, the definition of civilian being used, a good idea of what alternative military action the US would have taken if they didn't have drone capabilities, and the civilian and non-civilian casualty rates of those military strike options.

Without that, bringing up the drone strike casualties is nothing more than moral grandstanding based on how certain types of military action make you feel. Bonus points if you use the words "murdering children" in an attempt to bypass any logic and go straight for emotions.



Look at Syria and Chechnya if you are at all ignorant of how Putin does counterterrorism. Are you literally a Russian troll?


Doesn't seem like the US one does either.


You can't just claim something like that as if it was a fact.

Maybe it would be better for the world for Switzerland to have the most advanced AI. Why does it need to be the US, especially in the rapidly deteriorating political climate from the past 2 decades?


If e.g. France was willing to step up, and had a flourishing tech industry and a track record of responsible global leadership, that would be great. They seem to have their shit together slightly better than the US does, at this point. But, even at our worst, the US is a better custodian of this kind of power than either China or Russia, and those are the only two other countries that would be remotely interested in this kind of technology.


Please tell me the last time China attacked someone across the globe (or anywhere). When was the last time Russia attacked someone across the globe? (Yes, they have wars on their borders, but tell me what would happen if Russia built a military base in Mexico the way NATO does at Russia's borders.) Now tell me the last year the USA was not in a war.

I don't know how somebody rationalizes the deaths of millions of civilians caused by USA/NATO armies and interventions, which acted in non-defense, but I know for sure that they would not rationalize it anymore if they happened to be on the receiving end of "democratization".


Yeah, I'm sure the USA would go to war with Mexico, deny it, and annex a part of it. Russia has military bases near NATO countries as well. I'm not rationalizing US army interventions, but comparing them to Russia in this case is ridiculous.


Tell that to Cuba, where Russia tried to install military equipment. And the USA has about 800 military bases around the world outside of the USA; Russia has 8 or 9 outside of Russia. The USA spends hundreds of billions of dollars annually just to support those military bases. I know I would be pissed about that if I were a US citizen.


The US was the only one to use atomic weapons on an already defeated enemy, or at all.


The Japanese were hardly already defeated. Had the bombs not been dropped, the United States would have invaded Japan directly, island by island, until the country surrendered. The loss of life on both sides would have been tremendous.

Source: My grandfather had orders to go and do exactly that when the dropping of the bombs ended the war.


This is of course one of the justifications American leaders used, and as always the victor gets to set the perceived historical narrative. Politically it was extremely important for the US to believe the bomb materially shortened the war given the huge amount of resources the Manhattan Project had consumed that otherwise could have been invested elsewhere in the war effort, especially when the military had to justify the incredible expense to Congress (adjusted for inflation the total cost is around 30 billion in 2018 dollars). I've recently been reading the excellent "The Making of The Atomic Bomb" by Richard Rhodes which covers the events of this period in much detail.

The US had already been ridiculously effective using firebombing to level Japanese cities with their B-29s - so much so, they actually had to consider slowing down/changing targets to leave enough behind to use the Atomic Bomb on: there was almost nothing left worth hitting in strategic terms. By the time the bomb was dropped Japan was largely a beaten nation already considering surrender, Tokyo a smoldering rubble pile save for the Imperial Palace.

"The bomb simply had to be used -- so much money had been expended on it. Had it failed, how would we have explained the huge expenditure? Think of the public outcry there would have been... The relief to everyone concerned when the bomb was finished and dropped was enormous." - AJP Taylor.

Of course no one can say with certainty, but I certainly don't consider the answer to this question to be a simple one.


The US had no way of knowing for sure what the top-level strategic decisions were in Japan. All they knew was that, throughout the war, Japanese troops virtually never surrendered, repeatedly fought to the death, and engaged in outright suicidal tactics including Kamikaze attacks. This persistence not only continued but intensified on Okinawa. There was no reason to believe that the Japanese military would ever stop short of fighting to the bloody end.

Even after Nagasaki, it took personal intervention from the Emperor and the foiling of an attempted coup for Japan to surrender.

Of course, dropping the bomb and developing the bomb are two distinct, albeit related, ethical questions.


I believe that much of the world is completely unaware of the devastation wrought by the firebombing campaigns.

To quote Wikipedia: "On the night of 9–10 March 1945, Operation Meetinghouse was conducted and is regarded as the single most destructive bombing raid in human history. 16 square miles (41 km2) of central Tokyo were annihilated, over 1 million were made homeless with an estimated 100,000 civilian deaths."

https://en.wikipedia.org/wiki/Bombing_of_Tokyo


> there was almost nothing left worth hitting in strategic terms.

The bombing campaign leading up to the atom bombs specifically left about 5-6 cities relatively untouched. There were still major strategic targets left in August. They did this to test their effectiveness on cities, as a demonstration to the soviets, and to destroy morale.

Demonstrating their effectiveness to the soviets is why they didn't drop them in Tokyo bay.


Would you rather it was Stalin?


> It's far better for the world that the United States developed atomic weapons before either Germany or the Soviets did.

[citation needed]

The US having nuclear weapons didn't work out so well for the 70,000-120,000 innocent civilians that were killed in the attack on Hiroshima.[1] I don't have handy access for how many innocent civilians were killed in Nagasaki but I would assume it was similar.

Would the Nazis have done the same thing? We don't know, and we can't know. But what we do know is that despite the Soviets/Russia, France, the UK, China, India, Pakistan, and probably Israel and North Korea having the capability, only the US has used nuclear weapons for indiscriminate and wholesale massacre.

So with respect, I really don't think you can go around trumpeting how it was "far better for the world" for this to happen when there is zero evidence to support that viewpoint, and at least 70,000-120,000 reasons to refute it.

1 - https://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_a...


Come on, are you really making this argument? That we can't know whether it was better for the USA or Nazi Germany to have nuclear weapons? Like, say you have to go back in time and give nuclear weapons to Nazi Germany or the USA during WW2: would you throw up your hands and say we can't know who the weapons should go to?


If I understand correctly, you're saying that Germany and Russia shouldn't have had nuclear weapons because they would have used them, while arguing that it was right that they went to the Americans, in spite of the fact that they used them.

This does not seem like a valid position.


They didn't mention the USSR/Russia at all, just Nazi Germany. Their position is quite obviously that the US having nuclear weapons and the end of WW2 was better than Nazi Germany having them.

It has nothing to do with whether or not they would have used them. The reason that it was better for the US to have them is because Nazi Germany was conquering sovereign countries through military action, not to mention engaging in the industrialized genocide of millions of people.

Their position rests on the fact that most people agree that the Allies were the "good guys" and the Axis were the "bad guys" in WW2, which is not a position that really has to be defended.


The United States imposed their will on Japan with the force of nuclear weapons, and as a result, Japan is a prosperous, free, and independent country.

If Germany or Soviet Russia had the opportunity to use nuclear weapons to impose their will on their enemies, one need only look at what happened to the victims who already fell under their dominion.


I am saying that we can't know what Nazi Germany would have done with them, i.e., we can't know if they'd have been "good" or "bad" with them. Anything we think now is merely conjecture and speculation. We have to take a step back and really examine and question our own beliefs and biases. How much of what I am thinking is actually legit, and how much is affected by what I've seen at the movies, been told by teachers, read in the papers, seen on TV, and just accepted as fact?

But what we do know for an absolute fact is that the US did use nuclear weapons to kill thousands and thousands and thousands of innocent men, women, and children (and for the sake of balance, we do know for an absolute fact that the Nazis killed thousands and thousands and thousands of innocent men, women, and children, but not using nuclear weapons).

There is only one country with actual blood on their hands here with regard to nuclear weapons - the other nuclear powers have so far been able to show restraint.

As such, I find it pretty objectionable for people to suggest that it was "far better" this way, when the evidence really does not back it up.

I am not saying it is not the best possible outcome for the world. Could it have been worse if the Nazis had, for example, nuked London in 1945 and killed a million? Sure, of course that might have happened, but it didn't actually happen. Perhaps, had the Nazis had that chance, the UK would have surrendered, there would have been peace, countless lives could have been saved, and a completely new era of peace and prosperity begun? Or perhaps it would have also been untold slaughter and misery like the US inflicted on Japan?

We just can't know, and so I object to people saying it was "far better" for history to have played out the way it did, based mainly on, I suspect, the plot of Hollywood movies they've seen. History is written by the victors.

Anyway, this is way off topic and Godwin's Law has clearly been invoked. We should stop.


We absolutely know the Nazis would have at least used atom bombs on the Eastern front. Whether they would have used them on the UK could be up for debate, but arguably they would have used them there as well.

Regardless, the atom bombs were certainly not the worst things any country did in WWII. The US firebombing was far worse. Everyone did bad things in that war.

Stop using your modern sensibilities to judge them.


Firstly, Godwin's law obviously doesn't apply to discussions about WW2.

Secondly, your assertion that we can't know what Nazi Germany would have done with nuclear weapons is correct, but you seem to be interpreting that as meaning "all possible outcomes of Nazi Germany having nuclear weapons are equally as likely", which is absolutely not true and a common mistake to make in an argument.


I'm sure the German and Soviet scientists thought that it would be best if they developed nukes first, too.

How sure are you that the present-day United States is the "right" group to have AI-controlled murder-drones?


Would you rather use a Google self-driving car, or one made by a random hacker start-up[0]? Similarly, would you rather your military use AI from a low-tier company?

https://www.theverge.com/2017/7/7/15933554/george-hotz-hacki...


I don't get it. Are you suggesting that Google or other big players make things better than anyone else? Or, the other way around, that small startups cannot make good products? If that were so, then only companies starting with 1000+ people could do anything useful, small startups would be doomed to fail, and big players could never do wrong. That sounds a bit simplistic, and contrary to countless examples from life.


In the specific space of machine learning in life-critical applications, I generally prefer products from established companies with reputations over sub-1000-person startups.

The incentives to cut corners and go to market are much higher for small startups with short runways. I don't want corners cut when lives are on the line.


"I don't want corners cut when lives are on the line"

...still not sure if you are talking about the (heavily) cost-optimizing conglomerates or not, if we agree to constrain the topic to finances, leaving out innovative ideas, ethics, integrity, trustworthiness, etc., where conglomerates may be loose with standards.

Also, do you know how Apple and Google started? (I hoped the suggestion would get through without stating the obvious, but it did not.)


This is not a binary choice, nor a necessity.


That claim is a matter of perspective. It might not be a problem for someone else.


Is it better to develop weapons tech for your own in-group, or to risk being on the receiving end of weapons tech that another group developed and against which you can't defend yourself due to your earlier principled stand?


I don't think it's a given that if you don't develop a certain type of weapon it renders you unable to defend yourself against it.


No, but it's a given that if you don't develop weapons tech at all, it renders you unable to defend against weapons that you would otherwise be able to counter.


Probably someone else will do the dirty deed, but now 1) it won't be Google, and 2) it'll be someone willing to do more evil than Google.

That said, your argument does not avoid complicity in behaving badly or potentially doing so. It says only, "I'm a shit, but I'm willing to be a shit because there are other shitty people in the world who will behave badly even if I behave well, so I choose to behave badly because it serves me and the outcome is probably the same either way."

Of course, if your business partners adopt the same Machiavellian philosophy toward you that you espouse, one day they'll probably speak those very words when they turn against you, since someone else probably would have.


The only things we can change in life are our own choices.


The entire point of weapons is that they give you significant influence over other people's choices.


You can say that about violence in general


> Someone will fill the void.

Especially since Google is publishing their results for every void-filler out there to review. Unless they plan to start hiding results that might have military applications?


Inevitable isn't the same thing as right.

After all, everyone inevitably dies, so why not murder them?


If someone will inevitably develop nuclear weapons, then it is right to develop nuclear weapons even if it "just" creates mutually assured destruction, as opposed to abstaining and thus ensuring the dominance of whoever does decide to develop them.


> ”non-surveillance purposes”

Not what it says. It says:

“...surveillance violating internationally accepted norms.”

Thanks to leaks, we have a glimpse of the new normal.



