Date: 18/07/18
Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems
Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising not to develop “lethal autonomous weapons.”

It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”
The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.
Signatories include SpaceX and Tesla CEO Elon Musk; the three co-founders of Google’s DeepMind subsidiary, Shane Legg, Mustafa Suleyman, and Demis Hassabis; Skype founder Jaan Tallinn; and some of the world’s most respected and prominent AI researchers, including Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.
Max Tegmark, a signatory of the pledge and professor of physics at MIT, said in a statement that the pledge showed AI leaders “shifting from talk to action.” Tegmark said the pledge did what politicians have not: impose hard limits on the development of AI for military use. “Weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way,” said Tegmark.
So far, attempts to muster support for the international regulation of autonomous weapons have been ineffectual. Campaigners have suggested that LAWS should be subject to restrictions similar to those placed on chemical weapons and landmines. But critics note that it’s incredibly difficult to draw a line between what does and does not constitute an autonomous system. For example, a gun turret could target individuals but not fire on them, with a human “in the loop” simply rubber-stamping its decisions.
They also point out that enforcing such laws would be a huge challenge, as the technology to develop AI weaponry is already widespread. Additionally, the countries most involved in developing this technology (like the US and China) have no real incentive not to do so.
Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge that the pledge was unlikely to have an effect on international policy, and that such documents did not do a good enough job of teasing out the intricacies of this debate. “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons,” said Scharre.
He added that most governments were in agreement with the pledge’s main promise — that individuals should not develop AI systems that target individuals — and that the “cat is already out of the bag” on military AI used for defense. “At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.”
However, while international regulations might not be coming anytime soon, recent events have shown that collective activism like today’s pledge can make a difference. Google, for example, was rocked by employee protests after it was revealed that the company was helping develop non-lethal AI drone tools for the Pentagon. Weeks later, it published new research guidelines, promising not to develop AI weapon systems. A threatened boycott of South Korea’s KAIST university had similar results, with KAIST’s president promising not to develop military AI “counter to human dignity including autonomous weapons lacking meaningful human control.”
In both cases, it’s reasonable to point out that the organizations involved are not stopping themselves from developing military AI tools with other, non-lethal uses. But a promise not to put a computer solely in charge of killing is better than no promise at all.