World Economic Forum
5 Core Principles to Keep AI Ethical
Automation could eliminate millions of jobs globally. Recently, robotics and tech leaders have warned about the risks posed by AI and stressed the importance of a code of ethics for the technology. How can robots and humans live in harmony?
By Rob Smith
Science-fiction thrillers, like the 1980s classic film The Terminator, fire our imaginations, but they also stoke fears about autonomous, intelligent killer robots eradicating the human race.
And while this scenario might seem far-fetched, last year, over 100 robotics and artificial intelligence technology leaders, including Elon Musk and Google's DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines.
In an open letter to the UN Convention on Certain Conventional Weapons, the signatories said that once developed, killer robots - weapons designed to operate autonomously on the battlefield - “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
SpaceX and Tesla founder Elon Musk signed an open letter on AI ethics. (Image: REUTERS/Aaron P. Bernstein)
The letter states: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
AI Must be a Force for Good - and Diversity
This week, the United Kingdom government published a report, commissioned by the House of Lords AI Select Committee, which is based on evidence from over 200 industry experts. Central to the report are five core principles designed to guide and inform the ethical use of AI.
The first principle argues that AI should be developed for the common good and benefit of humanity.
The report’s authors argue the United Kingdom must actively shape the development and utilisation of AI, and call for “a shared ethical AI framework” that provides clarity on how this technology can best be used to benefit individuals and society.
They also say the prejudices of the past must not be unwittingly built into automated systems, and urge that such systems “be carefully designed from the beginning, with input from as diverse a group of people as possible.”
Intelligibility and Fairness
The second principle demands that AI operate within parameters of intelligibility and fairness, and calls for companies and organisations to improve the intelligibility of their AI systems.
“Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society,” the report warns.
Third, the report says artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
It says the ways in which data is gathered and accessed need to be reconsidered, so that companies have fair and reasonable access to data while citizens and consumers can also protect their privacy.
“Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the government ... to review proactively the use and potential monopolisation of data by big technology companies operating in the UK.”
Flourishing alongside AI
The fourth principle stipulates that all people should have the right to be educated, and should be enabled to flourish mentally, emotionally and economically alongside artificial intelligence.
For children, this means learning about using and working alongside AI from an early age. For adults, the report calls on government to invest in skills and training to mitigate the disruption caused by AI in the jobs market.
Confronting the Power to Destroy
Fifth, and aligning with concerns around killer robots, the report says the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
“There is a significant risk that well-intended AI research will be misused in ways which harm people,” the report says. “AI researchers and developers must consider the ethical implications of their work.”
By establishing these principles, the UK can lead by example in the international community, the authors say.
“We recommend that the government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence.”
Original content can be found at the website of World Economic Forum.
This article is reproduced under the permission of World Economic Forum (WEF) and terms of Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License (“CCPL”). It presents the opinion or perspective of the original author / organization, which does not represent the standpoint of CommonWealth magazine.