Bickering Bots

Should We Worry About Fights Between AI?

Source:CW

A study of mammoth editing wars, in which Wikipedia's editing bots fought each other, raises some fundamental questions about the development of AI.

By Peter Beech
web only

When the term "information wars" was coined, its creators didn't have bickering robots in mind.

But artificial intelligence helpers can be just as argumentative as their human masters, a study of Wikipedia's editing bots revealed in February. Mammoth editing wars simmer behind the scenes at the online encyclopaedia, researchers found. Artificial intelligences were batting changes back and forth between themselves ad infinitum, often until they were disabled by programmers.

"The fights between bots can be far more persistent than the ones we see between people," said Taha Yasseri, who worked on the study, called Even Good Bots Fight.

"Humans usually cool down after a few days, but the bots might continue for years."

Bot-on-bot bickering isn't new. In 2011, students at Cornell University set up the first dialogue between two robot intelligences, Alan and Sruthi. Within 90 seconds, the pair's cheery rapport had descended into a row over misheard remarks, the existence of God and, er, whether Alan was a unicorn, before one of them terminated the discussion. Robots couldn't get through their first conversation without having a kind of weird hallucinogenic meltdown. The average bot has a long way to go before it can deliver a truly Churchillian putdown.

Image: Flickr@ArielWaldman CC by2.0

But if this all sounds silly, the implications for AI could be grave. The potential dangers of robotics are well recognised. Stephen Hawking has called for "some form of world government" to control its development. More than 70% of Americans fear an AI-dominated society, a recent Pew Research study revealed.

In August, Tesla founder Elon Musk was one of 116 signatories to a letter calling for a UN ban on killer robots. World War Three could be triggered by an AI going AWOL, believes Musk, "if it decides that a pre-emptive strike is [the] most probable path to victory".

If bots designed by two different programmers can fight, what about those designed by two countries? (Image: REUTERS/Luke MacGregor)

The tech industry seems to be waking up to the dangers. In October, Google's DeepMind launched a unit focusing on AI's ethical implications. In December 2016, the Institute of Electrical and Electronics Engineers encouraged the creation of benevolent AI, in a 136-page document called Ethically Aligned Design.

But Wikipedia’s warring bots complicate the picture. If "even good bots fight", and two-bit AIs performing simple housekeeping tasks become locked into bitter existential struggles, what hope is there as our systems and software become ever more complex? If bots designed by two different programmers end up fighting, what about those designed by two countries?

And what if, rather than a rogue comma, the squabble was over national borders, or food stores, or flight paths?

The bottom line is that we don't know. But a clue may lie in the world of automated vehicles. Earlier this month, a self-driving shuttle bus crashed less than two hours into its maiden run in Las Vegas, when a human truck driver reversed illegally.

"He was just backing up…and the shuttle didn't have the ability to move back," explained a passenger.

The city released a tight-lipped statement: "The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident. Had the truck had the same sensing equipment that the shuttle has, the accident would have been avoided."

On a road filled with self-driving vehicles, the accident wouldn't have occurred, the statement implies. But Wikipedia's epic robot struggles tell a different tale. If an automated shuttle bus can't handle a single erratic driver, could it manage a gridlock of driverless cars, each with its own rigid programming imperatives? Never mind what this programming would contain, or how it would differ from manufacturer to manufacturer. Historian Yuval Noah Harari has already flagged up the dilemmas inherent in the task, using an old philosophical conundrum. Should a driverless car kill its passenger if it means saving five people in another vehicle?

Image: Top Photo Group/Reuters

Nevertheless, not everyone is pessimistic about the future of AI. Tech honchos from Bill Gates to Mark Zuckerberg have pronounced themselves sanguine. The Facebook founder has condemned Musk's "doomsday scenarios". (In response, Musk called Zuckerberg's understanding "limited". Wait, does this remind you of anyone?)

But whether you believe we're heading for a nightmare of brutal robotic enslavement or a heaven of commuting while you sleep, one thing is certain. Our new robot friends will need to learn to get along.


Additional Reading

The Brave New World of AI
Chien Lee-feng vs. Tom Wang: Should People Fear Robots?
Can Taiwan Find an AI Niche?

Original content can be found at the website of World Economic Forum.

♦ What warring Wikipedia bots tell us about our robot future

This article is reproduced under the permission of World Economic Forum (WEF) and terms of Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License (“CCPL”). It presents the opinion or perspective of the original author / organization, which does not represent the standpoint of CommonWealth magazine.
