Militaries around the world are developing artificial intelligence dubbed ‘Slaughterbots’


The unmanned vehicles are being programmed to seek out targets and destroy or kill them.

Countries such as China, Russia, and the United States are developing new artificial intelligence (AI) that can be used for military purposes. The technology has been nicknamed “slaughterbots”.

A UN conference addressing the subject of this use of AI failed to come to an agreement on a ban.

As a result, many are now issuing doomsday-style warnings about a superpower arms race to develop artificial intelligence capable of identifying and engaging targets on its own. Some warnings go as far as suggesting that, left unchecked, armed AI robots could have the capacity to eliminate humanity.

Major global powers have poured billions of dollars into advanced AI weapons able to hunt down and strike targets without any human input. In fact, according to a UN report, the first autonomous strike on human targets has already taken place, when a Turkish-made drone attacked targets in Libya, destroying itself in the process.

Artificial Intelligence Dangers

Experts warn that military artificial intelligence is advancing faster than the dangers can be considered.

Experts caution that governments and societies have not kept pace in weighing the potential dangers of AI in military applications because the technology is advancing too quickly. Machines capable of making their own choices, they explain, are prone to unpredictable errors that can spread rapidly.

Those rapidly spreading errors arise because the behavior of the underlying algorithms is sometimes beyond even their programmers' comprehension, and programmers are not yet able to stop such failures before they cascade. Should AI robots be armed with chemical, biological or nuclear weapons, the outcome could be unpredictable, unintentional and catastrophic.

“It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities,” said Macalester College Professor James Dawes when discussing the potential drawbacks of armed military artificial intelligence robots. “The world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.” Soon after, a similar warning was issued by Future of Life Institute co-founder and MIT professor Max Tegmark.
