Given a sufficiently desperate situation, I can't imagine any military with autonomous tanks making the call to keep the autonomous shooting "disabled" when doing so would risk actually losing a war.
If anyone had real ethical concerns, they wouldn't have put a gun on a robot to begin with. Once you arm a robot, a software disable is just lipstick on a pig.
True, but it's still a slippery slope. Can't wait for the day when the decision of who to shoot is outsourced to call-centre-like environments in India, where people are paid peanuts to decide who lives and dies. /s
We can ban anything we want but countries still make them and force the hands of others to counter it. How many hundreds of thousands of tons of chemical weapons still exist?
Just making the illegal more illegal does nothing but mollify a few here and there, but it certainly lets politicians grandstand as if they accomplished something.
Weapons of war deserve no mercy. Since these robots will exist, the only logical solution is to design systems to take them down. Weapon systems can certainly be designed to take down anything mechanical and ignore people. If anything, depriving the petty warmongers out there of their weapons to use against people might be the only worthwhile outcome of this technology.
The only way to get rid of mines is to make them useless. Robots can make them useless, because they can spot and disarm or label mines automatically, or just step on them.
"This isn't a new thing! Look we already have this superficially similar thing!" is a common argument in favour of changes to the status quo, especially when people believe the change is negative. It's intellectually dishonest.
And we already have all sorts of problems with cruise missiles (fired by mistake, locking onto the wrong target, misbehaving in flight...). Extending that window of risk to ground combat is very, very risky. As the British ex-soldier quoted in TFA points out, even humans struggle to discern friend from foe in situations like Afghanistan or Iraq; a machine could potentially create clusterfucks of unimaginable proportions.
Not that I think someone won't do it - if it gives even a semblance of tactical superiority, someone will deploy it, then watch in horror as things go terribly wrong.
They fly pretty high and the antenna points towards the sky, not the ground, and yes, they do use specialised antennas. You'd find a Predator much more easily using radar.
Watch some of the videos on YouTube, it's quite interesting. You can see that the Predator is able to target and track automatically, so the pilot basically just has to hit the button to fire the weapons and a hit is almost guaranteed. No need for good reflexes or low latency. The pilots are in Nevada, so the best-case scenario is probably 100ms pings anyway.
That demo where the unmanned tank pointed its gun at the platform full of generals... talk about your nightmare demo. I've done a few, and even getting my network connection working at a client site can make me sweat a little when a room full of executives is waiting. I can't imagine how the engineers felt when their baby went off the reservation like that.
Even now, 33 years later, that robot is goddamn terrifying. Props to the designers for creating the most disconcerting creature I've seen in all cinema.
No government or regulation can stop entropy, and given the tools and the knowledge, a Robocop is inevitable. The best we can do is prepare the rest of the world to deal with these things, and for the time when, without a doubt, they get hacked or become self-aware. The question is, what are we going to do?
If you're being attacked, troops on the ground aren't necessarily better than autonomous robots. Robots don't rape and pillage. An advancing robotic army doesn't incentivize the retreating force to adopt scorched-earth tactics, since the robots don't need buildings to house them, groceries, or farms.
Maybe I read too much about the Nanking massacre recently, but having it fresh in my mind makes me question whether the often-repeated assumption that humans need to be involved should be taken at face value. Maybe it's a good idea to keep humans at a safe distance, even if they're actively part of the conflict.
But we should have 'skin in the game'. Removing the risk, by using robots, lowers the barriers to engage in conflicts and may lead us to behave more aggressively than we would have done otherwise.
These articles always focus on man vs. machine, but war is a great leveler. What happens, for example, when there is a proxy war and these devices get used? Let's say the US deploys them somewhere for one side; then surely Russia or China or whoever is supporting the other side will deploy their own versions to counter. Will we then end up with a machine vs. machine scenario?
This is basically the plot of Horizon Zero Dawn. [Spoilers] Machine swarm vs machine swarm in human proxy war, but then the humans lose the password and can’t access the swarms and they destroy everything.
On one hand, I'm pretty excited that something from the universe of Ghost in the Shell would become reality; on the other hand, it's a slippery slope to some dystopian reality where ethical considerations are completely removed from battle.
Repeat of WWI, to be honest, with all these technologies clashing together for the first time. There weren't a lot of "ethical considerations" to using chemical weapons when the alternative was losing thousands in another doomed charge out of the trenches. Same with WWII, where the US would rather open Pandora's box of the Nuclear Age than risk a ground invasion of mainland Japan.
I'll grant you that there weren't many ethical considerations to speak of in the last world wars, and perhaps my naive view has been unrealistically colored by sci-fi TV shows, but my point is simple: a broken autonomous killing system won't know when to stop. Even unethical war makers have an affinity for humankind and know when to stop. And despite the meaning implied by the MAD doctrine, it never meant that humans would nuke themselves into oblivion; it simply means a Nash equilibrium will be achieved, with both conflicting parties staying out of each other's way.
Unintended consequences certainly get more and more dangerous the more advanced the autonomy of these weapons gets. In keeping with your mention of sci-fi, there's a Star Trek Voyager episode where two sides of killer robots continue to wage war against each other after having killed their makers. The makers had agreed on a truce, but the robots' instructions had been to eliminate anyone that prevented them from beating their enemy, so by agreeing to a truce the makers had become an obstacle to achieving the programmed goal.
Having done software development my entire life, I'd never trust anyone to avoid every pitfall and guarantee a foolproof way of making them stop...
Yet five months later Japan maintained an external persona of implacable resistance. For example, the subsequent invasion of the Okinawa islands resulted in over 14,000 American deaths. It was reasonable to apprehend that an invasion of the far bigger Japanese home islands would be far more lethal, for the Japanese population as well.
In the Tokyo raid, 96 American airmen were killed, despite the only effective defence being poorly co-ordinated anti-aircraft fire. Given a weapon that promised the potential to end the war without further American losses, it seems unrealistic to expect it not to be used.
Japan was an extremely bad actor. They had death camps just like the Germans did at that time. One can imagine that the German victims wished someone dropped a few nukes so Germany would back off a bit quicker. In the end it wasn't exactly pretty, but saying it was just for terror and fear seems like it takes a complicated situation and makes it one-dimensional. We've all been alive long enough to know real life is generally not that simple.
Hiroshima and Nagasaki hosted mostly old retired people and children, and women in a smaller percentage. They hosted no military equipment of any significance, nor strategic locations, nor any significant factories.
A large number of people were [ex] fishermen and quite poor.
These people had very little power or ability to either support or resist the militaristic elite. They were well in the bottom 20% of literacy, wealth, fitness for war, political influence and specialized skills. Therefore they were mostly innocent.
They were wiped away as a show of force and aggressiveness to intimidate other countries, including the USSR.
> They had death camps just like the Germans did at that time
The Nazis did it, not "the Germans".
And it still does not justify targeting a large number of retirees and children that are contributing nothing to the war effort.
> takes a complicated situation and makes it one-dimensional
Please don't what? I was offering a different view to the same issues, along with a plea to view complicated situations in the same way that the people subject to them at the time did: As complicated situations. Reducing it to this one-dimensional "everyone was innocent" reasoning is demeaning to the people making those hard decisions. They have to sleep too, and that's hard enough when the situations in question are viewed as they are, let alone when they are oversimplified by people who were only alive after the fact.
> It hosted no military equipment of any significance
Given that there were a number of "work camps" within spitting distance of those targets, it seems a little disingenuous to claim that there wasn't anything there. And even if that were the case, someone pulled the trigger in the way they did - it behooves us to think about the why of that decision in a manner that doesn't reduce them to imperialist caricatures of what they were. Otherwise, we'll never really understand why their decisions were made the way they were. If we don't understand them, it will be harder to choose to avoid their reasoning ourselves.
Even if we eventually arrive at the conclusion that this was entirely the wrong decision, we owe it to every victim (both the people dying and the people doing the killing) to understand the situation properly so what they went through teaches us the right things.
Please don't try to sneak in a justification for genocide and paint it as a necessary evil.
> I was offering a different view
Your "different view" came with the unwritten implication that the actions of a terrible government justify genocide:
>>> Japan was an extremely bad actor. They had death camps just like the Germans did at that time. One can imagine that the German victims wished someone dropped a few nukes so Germany would back off a bit quicker.
This sentence does not mention a specific military strategy.
You are clearly implying that being an "extremely bad actor" is enough to justify retaliation against unarmed civilians.
> we owe it to every victim ... to understand the situation properly
I'm not sure what to tell you, except to try to emphasise to you that understanding what other people think (or arguing for the things they might think as a rhetorical device) doesn't "imply" that that is what is right. I don't presume to tell other people that they are flat wrong and they need to think something else and I would appreciate it if you could extend me the same courtesy.
> Hiroshima and Nagasaki hosted mostly old retired people and children, and women in a smaller percentage. It hosted no military equipment of any significance, nor strategic locations, nor any significant factory.
Wikipedia may well be wrong, but it does list some significant military items:
======
At the time of its bombing, Hiroshima was a city of industrial and military significance. A number of military units were located nearby, the most important of which was the headquarters of Field Marshal Shunroku Hata's Second General Army, which commanded the defense of all of southern Japan, and was located in Hiroshima Castle. ... Also present in Hiroshima were the headquarters of the 59th Army, the 5th Division and the 224th Division, a recently formed mobile unit. ... In total, an estimated 40,000 Japanese military personnel were stationed in the city.
Some 70,000–80,000 people, around 30% of the population of Hiroshima at the time, were killed by the blast and resultant firestorm, and another 70,000 were injured. It is estimated that as many as 20,000 Japanese military personnel were killed
The city of Nagasaki had been one of the largest seaports in southern Japan, and was of great wartime importance because of its wide-ranging industrial activity, including the production of ordnance, ships, military equipment, and other war materials. The four largest companies in the city were Mitsubishi Shipyards, Electrical Shipyards, Arms Plant, and Steel and Arms Works, which employed about 90% of the city's labor force, and accounted for 90% of the city's industry.
Of 7,500 Japanese employees who worked inside the Mitsubishi Munitions plant, including "mobilized" students and regular workers, 6,200 were killed. Some 17,000–22,000 others who worked in other war plants and factories in the city died as well. Casualty estimates for immediate deaths vary widely, ranging from 22,000 to 75,000. At least 35,000–40,000 people were killed and 60,000 others injured. In the days and months following the explosion, more people died from their injuries. Because of the presence of undocumented foreign workers, and a number of military personnel in transit, there are great discrepancies in the estimates of total deaths by the end of 1945; a range of 39,000 to 80,000 can be found in various studies.
======
Of course none of that is grounds for jubilation: many, probably most, of those killed or maimed were victims to some degree. But that was also true of strategic bombing of all kinds by all sides throughout that war and those since. The uniqueness of the atomic bomb lay in the scale and ease of unleashing horrors, not in introducing horror to war.
> “where ethical considerations are completely removed from battle”
I’m surprised you say that. Having tired, squishy humans in a hot, cramped box fearing for their life is not exactly a guarantee of “ethical considerations”.
Remotely controlled weaponry removes said humans from the immediate danger. It’s a lot easier to apply ethical considerations, when you can be certain a moment of hesitation won’t cause your death.
Fully autonomous weaponry moves the ethical decision making even further up the chain. Where cold heads can do all the thinking beforehand.
Now, don't get me wrong: there are a lot of ills with remotely controlled/autonomous weaponry. Chief among them is that it removes certain costs from the political calculus. Politicians fear the spectacle of their troops dying. This fear moderates their decision making. Removing this cost will make the world a worse place. But I would hardly call this an "ethical consideration".
That is assuming that both sides have them. That is not likely unless we see the major superpowers going to war with each other, and I don't see that happening.
And I'd say that ethical decisions should never be moved up the chain. War is ugly, and should be ugly, so that we think twice before waging one. You can still keep a level head with non-autonomous systems, e.g. drones with human operators, or remote-controlled marine bots.
GITS is largely post-cyberpunk, but dips into cyberpunk for ideas, mostly for its antagonists. The cyberpunk driver of the post-cyberpunk tank is an interesting take on things, at least.
Clarification for those who don't know: cyberpunk spun off a bunch of genres that call themselves $name-punk but ironically ignore the -punk aspect and just swap in $name for whatever technology replaces magic in what's otherwise a fantasy story (including, literally, magic as technology). Because "cyberpunk" was taken, the cyber version of $name-punk ended up being called post-cyberpunk instead of cyber-steampunk, for whatever reason.
Excited in the way that the big robots we read so much about in stories become real, but I don't wish for the world in which those death machines become a necessity to turn into reality.