The experts met in Geneva, but no agreement could be reached: the United States and Russia blocked all further work. Perhaps this is the only occasion on which the two hegemons have acted so harmoniously.
Expert meetings held in Geneva under the framework of the Convention on Inhumane Weapons have ended, leaving undecided the fate of so-called combat robots: autonomous weapons that use artificial intelligence to engage targets. No agreement could be reached. The United States, Russia, South Korea, Israel, and Australia were among the minority of nations that succeeded in blocking the push toward a complete ban on killer robots.
So although there is still no working autonomous weapon in the world, the technology remains, so to speak, legitimate: it can continue to be developed and researched. Notably, the United States and Russia, according to the Stockholm International Peace Research Institute (SIPRI), top the list of the largest arms exporters. South Korea, Israel, and Australia are not far behind, ranking among the top 20 players in the market.
And although China (the world's fifth-largest arms exporter and a permanent member of the UN Security Council) advocates a ban on combat robots, it failed to tip the scales in its favor during the meetings. Today, 26 countries openly support a ban on the use of artificial intelligence in war. Others avoid taking a clear position. France and Germany (the third- and fourth-largest arms exporters) propose signing a document that would enshrine the primacy of humans over artificial intelligence, but in practice they lean toward the side of those who want to develop autonomous combat vehicles.
“It’s certainly disappointing that a small group of military giants can hold back the will of the majority,” said Mary Wareham, coordinator of the Campaign to Stop Killer Robots, commenting on the outcome of the Geneva meetings.
Indeed, the situation looks like a conspiracy of armed monopoly tycoons, given that the United States and Russia usually cannot reach even minimal compromise on important issues. Take the Syrian question: Washington and Moscow mutually blocked each other's resolutions after the use of chemical weapons in Syria this spring. Asphyxiating gases and other toxic substances used for military purposes had, incidentally, been banned long before, under the 1925 Geneva Protocol.
The next meeting on the fate of killer robots will take place in Geneva in November.
Why do they want to ban autonomous weapons
Proponents of a ban on robot warfare insist that the battlefield is no place for artificial intelligence. In their view, such technologies pose an enormous threat. For one thing, it is not clear today how a machine would distinguish combatants (those directly involved in hostilities) from non-combatants (army service personnel who may use weapons only in self-defense) and from civilians in general. There is a real possibility that a robot would kill the wounded and those who surrender, which is prohibited by the current rules of warfare.
What would prevent a robot from killing off all parties to the conflict, even the owners of such weapons? Artificial intelligence elements are already successfully used in military equipment and missiles, and robots are employed for reconnaissance, but the last word still rests with humans. Autonomous weapons would not obey the orders of commanders; that is precisely why they are autonomous. This is why military generals in various countries are skeptical about introducing such machines into the ranks.
And one more open question is international terrorism. Autonomous weapons technology can fall into the wrong hands, and it can eventually be hacked. A year ago, Russian President Vladimir Putin said that whoever becomes the leader in the development of artificial intelligence will rule the world. In the case of autonomous weapons, whoever gains access to the technology will rule the world. And for that, in essence, all you need is a computer and a skilled hacker who can slip past the security systems. The Pentagon, for its part, has been hacked more than once. Consequently, no one can guarantee that autonomous weapons will remain inviolable.
It is also unclear who would bear legal responsibility if a war crime were committed by an autonomous weapons system. “The engineer, programmer, manufacturer, or commander who used the weapon? If responsibility cannot be defined as required by international humanitarian law, can the deployment of such systems be recognized as legal or ethically justified?” the International Committee of the Red Cross asks.
Notably, scientists have also advocated a ban on combat robots. In July of this year, more than two thousand researchers and entrepreneurs, including Elon Musk, founder of Tesla and SpaceX, and the co-founders of DeepMind, signed a pledge not to develop lethal autonomous weapons. Google did the same: the tech giant abandoned its work on the Pentagon's Project Maven. And back in 2017, a number of scientists had already called on the UN to ban the creation of killer robots.
Incidentally, the issue of artificial intelligence in war first appeared on the United Nations agenda at the end of 2013, but practically nothing has changed since then. Only this year did expert meetings begin under the framework of the Convention on Inhumane Weapons. In other words, it took more than four years just to bring the discussion to a more or less practical plane.
Why they don't want to ban autonomous weapons
However trite it may sound, the arms race is the main reason why countries do not want to ban killer robots. Putin is right: whoever gets autonomous weapons first will dominate the world. Officially, of course, this reason is not voiced.
The main argument of the ban's opponents is the impossibility of separating civilian artificial intelligence from the military kind. The reasoning goes: we do not ban kitchen knives just because terrorists can use them. And indeed, it is practically impossible to separate civilian AI development from military AI development. But what is at issue here is a ban on weapons that can independently select and attack targets. One such effort could be Project Maven, which the US Department of Defense is pursuing together with Booz Allen Hamilton (Google withdrew from the contract).
The Maven developers want to teach drones to analyze imagery, in particular satellite imagery, and potentially to identify targets for attack. The Pentagon began work on the project back in April 2017 and had hoped to have the first working algorithms by the end of that year. But after the protest by Google employees, development was delayed. As of June of this year, according to Gizmodo, the system could distinguish elementary objects such as cars and people, but it proved largely useless in difficult situations. If a ban on autonomous weapons is nevertheless adopted at the UN level, the project will have to be scrapped. The Pentagon, meanwhile, claims that its development could save lives, since it can be programmed to operate more accurately and reliably than people.
“You need to understand that we are talking about technology that has no working prototypes. Ideas about such systems are still very superficial,” the Russian Foreign Ministry noted on the eve of the Geneva meeting. “In our opinion, international law, and international humanitarian law in particular, can be applied to autonomous weapons. It does not need modernization or adaptation to systems that do not yet exist.”
And there is one more real, though unvoiced, reason: money. Today the market for military artificial intelligence technologies is estimated at more than six billion dollars. By 2025, the figure is projected to triple, to almost 19 billion, according to analysts at the American firm MarketsandMarkets. For the largest arms exporters, that is good motivation to block any restrictions on the development of killer robots.
Progress cannot be stopped
Proponents of a ban on autonomous weapons note that technology is developing very quickly, and that artificial intelligence becoming a weapon is only a matter of time. There is logic in their words. Artificial intelligence is an integral part of the fourth scientific and technological revolution, which is still under way, and it should be kept in mind that technical progress has always been bound up in one way or another with military operations. The third scientific and technological revolution lasted until the mid-1950s, meaning it peaked during the Second World War.
In 1949, the Geneva Convention for the Protection of Civilian Persons in Time of War was adopted. In the post-war period, the IV Hague Convention of 1907, which set out the rules for the conduct of war, was likewise supplemented. In other words, the horrors of World War II were the catalyst for that process. This time, human rights defenders do not want to wait for a Third World War in order to protect humanity from autonomous weapons. That is why the fate of killer robots must be decided now, they insist.
According to Human Rights Watch experts, the use of combat robots contradicts the Martens Clause, a provision in the preamble to the 1899 Hague Convention on the Laws and Customs of War. In other words, killer robots violate the laws of humanity and the dictates of public conscience (a position reaffirmed in the IV Hague Convention).
“We must work together to impose a preventive ban on such weapons systems before they spread around the world,” said Bonnie Docherty, senior arms researcher at Human Rights Watch.
So the attempt to ban killer robots failed this time, and the November meetings will predictably be fruitless as well. To be fair, almost all countries agree that the technology cannot be left to drift on its own, and that combat robots need an emergency brake of sorts. What remains unclear is whether humanity will have time to pull it when the need arises.