Good Code is a weekly podcast about ethics in our digital world. We look at ways in which our increasingly digital societies could go terribly wrong, and speak with those trying to prevent that. Each week, our host Chine Labbé engages with a different expert on the ethical dilemmas raised by our ever-more pervasive digital technologies. Good Code is a dynamic collaboration between the Digital Life Initiative at Cornell Tech and journalist Chine Labbé.
Follow @goodcodepodcast on Twitter, Facebook, and Instagram.
On this episode:
There is already a lot of autonomy in modern wars, but a human is still always involved in determining what the targets are. How would war change if humans were taken out of the loop altogether?
We ask Peter Asaro about the humanitarian and moral questions raised by autonomous weapons, and we talk about the potential for technical glitches.
He also tells us about the ongoing talks at the UN on these weapons, and whether he’s optimistic about the possibility of reaching an international ban.
You can listen to this episode on iTunes, Spotify, SoundCloud, Stitcher, Google Play, TuneIn, YouTube, and on all of your favorite podcast platforms.
We talked about:
- In this episode, you can hear two extracts from the Slaughterbots video by the Future of Life Institute. Watch the full video here.
- Peter Asaro co-founded the International Committee for Robot Arms Control (ICRAC) in 2009. ICRAC is one of the founders of the Campaign to Stop Killer Robots, a coalition of NGOs lobbying for an international preemptive ban on lethal autonomous weapon systems, or LAWS.
- Asaro talks about the many technical problems that may arise when using AI technologies to identify targets in war. Read about how researchers fooled such a system into thinking a turtle was a rifle.
- He explains that autonomous weapons could increase the risk of friendly fire, when an army mistakenly attacks its own allies. In Iraq, the US Patriot antimissile system shot down two allied planes. Wars could also be started purely by accident, Asaro warns. In 2002, British marines “invaded” Spain by mistake. And in 2007, a Swiss army company accidentally invaded Liechtenstein.
- Peter Asaro mentions the attempted drone assassination of Venezuelan President Nicolas Maduro. Last August in Caracas, two small commercial drones armed with explosives exploded in the air as he was giving a speech. They were remotely piloted.
- Our guest also talks about Project Maven, a Pentagon AI project that caused such controversy when Google employees discovered their company was involved that Google promised to drop the contract. Several thousand Google employees signed a letter to Google’s CEO protesting the project. Others quit. “Google should not be in the business of war”, the letter said. You can read the letter in the New York Times. Over 1,000 tech researchers and academics wrote a separate letter to Google’s leadership. Not only did Google promise it would not renew the contract, it also published new AI principles. These say Google will “not design or deploy AI” for weapons or technologies “that cause or are likely to cause overall harm.”
- Asaro compares the legal dilemmas which could be raised by autonomous weapons with those already seen in cyberwarfare. Read about it here.
- Is it possible to get a preemptive ban on autonomous weapons? Asaro believes it is. Why? Because a precedent exists. In 1995, a new protocol to the Convention on Conventional Weapons (CCW) banned blinding lasers before they reached market. Last week, a new Convention on Conventional Weapons meeting on lethal autonomous weapons was held at the UN in Geneva.
- Who supports and who opposes such a ban? See the list established by the Campaign to Stop Killer Robots.
Read more:
- The Campaign to Stop Killer Robots met this March in Berlin, Germany. But the talks did not end on an optimistic note, according to this Politico article.
- Yoshua Bengio, co-winner of the A.M. Turing Award (some call it the Nobel Prize for computing), supports a preemptive ban on killer robots.
- In November 2018, UN Secretary-General Antonio Guterres said he favored an international ban on autonomous weapons, calling the prospect of machines targeting and killing without human control “morally repugnant.”
- It seems that no journalist will be able to get their hands on the content of Google’s work with the Pentagon on Project Maven. It is exempt from the Freedom of Information Act, The Intercept revealed.
- Paul Scharre’s book “Army of None” looks in depth at autonomous weapons and the future of war.