For Artificial Intelligence, Killing is the Least of Worries

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. The prospect of AI benefiting society motivates research in many areas, from economics and law to technical topics. Since recent developments may make super-intelligent machines possible much sooner than initially thought, the time is now to determine what dangers artificial intelligence poses.

As AI robots become smarter and more dexterous, an AI arms race could inadvertently lead to an AI war that results in mass casualties. This risk is present even with narrow AI but grows as levels of AI intelligence and autonomy increase. Aside from the concern that autonomous weapons might gain a mind of their own, a more imminent concern is the danger such weapons pose in the hands of an individual or government that doesn't value human life. Some dismiss these concerns as myths; the most extreme dismissal holds that superhuman AI will never arrive because it is physically impossible. But any powerful technology can be misused.

 

Artificial intelligence can pose risks in several ways:

Autonomous weapon systems are robots equipped with lethal weapons. Autonomous and semi-autonomous weapon systems should be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. Various forms of AI bias are detrimental, too. And because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave.

A woman has claimed, without substantiation, that four artificially intelligent robots killed 29 scientists in a lab in Japan in 2017. More plausibly, autonomous weapons such as those developed by Kalashnikov could fall into the hands of people outside of government control, including international and domestic terrorists. The fear of machines turning evil is another red herring; the nearer danger is that disruptive autonomous weapons technologies will destabilize current nuclear strategies, and that high-end autonomous weapons will lead to more frequent wars.

Tesla and SpaceX leader and innovator Elon Musk suggests artificial intelligence could potentially be very dangerous. The black-box problem of AI makes it almost impossible to imagine the morally responsible development of autonomous weapons systems. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use. Finally, autonomous weapons would undermine humanity's final stopgap against war crimes and atrocities.

The risks of autonomous weapons in rapidly evolving environments are simply too great. AI programmed to do something dangerous, as with autonomous weapons programmed to kill, is one way AI can pose risks. Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Only with such safeguards can AI fulfill its huge potential and strengthen society instead of weakening it.