THE FUTURE IS TECH

Machine Awakening

by Sophie Kalkreuth
10 Apr 2017

Technology is advancing at warp speed, but it has yet to develop a system of best practices to guide it. When it comes to artificial intelligence (AI), this could have serious consequences.

Last year, a Google computer program stunned one of the world’s top players in a game of Go. The match, between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol, was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans. After all, the abstract two-player strategy game is often considered the most complex board game ever created.

“I am very surprised because I have never thought I would lose,” Mr. Lee said following three and a half hours of play. “I didn’t know that AlphaGo would play such a perfect Go.”

Demis Hassabis, founder and CEO of DeepMind and the creator of AlphaGo, called the program’s victory a ‘historic moment’ and explained that AlphaGo did not try to consider all the possible moves in a match, as traditional AI programs do; rather, it narrowed its options based on what it learned from millions of matches played against itself and 100,000 Go games available online. The central advantage of AlphaGo, according to Mr. Hassabis, is that “it will never get tired, and it will not get intimidated either.”
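
For readers curious what that narrowing looks like, here is a minimal, purely illustrative Python sketch. It contrasts brute-force evaluation of every legal move with a policy-guided search that examines only the handful of moves a learned policy ranks highest. The scoring functions are invented stand-ins: AlphaGo’s real priors and evaluations come from deep neural networks combined with Monte Carlo tree search, not from these toy formulas.

```python
import random

def policy_score(move):
    """Stand-in for a trained policy network's prior over moves."""
    random.seed(move)           # deterministic toy score per move
    return random.random()

def value_estimate(move):
    """Stand-in for a value network / rollout evaluation of a move."""
    return (move * 2654435761) % 997  # arbitrary but deterministic

def brute_force(legal_moves):
    """Traditional approach: evaluate every legal move."""
    return max(legal_moves, key=value_estimate)

def policy_guided(legal_moves, k=5):
    """AlphaGo-style narrowing: evaluate only the k moves the policy
    ranks highest, shrinking the search space dramatically."""
    candidates = sorted(legal_moves, key=policy_score, reverse=True)[:k]
    return max(candidates, key=value_estimate)

if __name__ == "__main__":
    moves = list(range(361))    # one move per point on a 19x19 board
    print("brute force:  ", brute_force(moves))    # evaluates 361 moves
    print("policy guided:", policy_guided(moves))  # evaluates only 5
```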

While the field of AI continues to make breathtaking advances (machines now handle humanlike tasks such as understanding speech and vision and, most notably, can learn and continuously improve themselves), the ultimate goal of ‘strong AI’, a machine with an intellectual capability equal to that of a human, remains elusive.

Silicon Valley tech billionaires may talk about harnessing technology to overcome disease and feed the world, but a tool is only as good as the hands that wield it. And all bets are off if the tool is able to outsmart its master.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” says Nick Bostrom, a philosopher who runs the Future of Humanity Institute at Oxford University. He sees the implications of AI as potentially Darwinian. If we create machine intelligence superior to our own, and then give it freedom to grow and learn through access to the Internet, what would prevent it from evolving strategies to secure its dominance, just as in the biological world?

This scenario is most distressing in light of so-called Lethal Autonomous Weapons Systems (LAWS). The debate within military organizations is no longer about whether to build autonomous weapons but how much independence to give them. America is about a decade away from having the technology to build a fully independent robot that could decide on its own whom and when to kill, though it currently has no intention of building one. Nevertheless, AI experts are alarmed. More than 1,000 AI researchers and high-profile figures, including Nick Bostrom, Stephen Hawking and Elon Musk, signed an open letter urging a ban on the development and use of fully autonomous weapons.

It is unclear who, if anyone, could be held responsible if an autonomous weapon caused an atrocity. And this is just one of many fundamental moral questions that must be addressed. How do we decide which capabilities AI should develop? And who should decide this? Can we design self-learning machines to be essentially compatible with humanity?

Some companies are taking the initiative to address these issues. In January this year, the founders of LinkedIn and eBay announced that they are donating a combined US$20 million to fund academic research aimed at ensuring the safety of artificial intelligence. Last September, the Partnership on AI, a collaborative effort involving Google, Facebook, Amazon, IBM and Microsoft, was launched to “establish AI best practices,” though it has yet to do anything publicly.

Science fiction has done much to awaken our imaginations to the potential disasters of AI. In the short term, our most pressing concern is likely to be that robots will take away our jobs or collide with us on the highway. But the problem of machine ethics is best addressed now; our future may depend on it.