Who should we believe on AI, Musk or Murthy?

In a recent discussion, Infosys founder N R Narayana Murthy dismissed the threats commonly attributed to Artificial Intelligence (AI) and automation as "more hype than reality." Is it just "hype," or do we have much more to fear than we even now suspect?

The recent success of Alphabet Inc's DeepMind experiment with AlphaGo Zero may be more instructive about the future of AI than previously believed. The team has been experimenting with the use of artificial intelligence to learn the ancient Chinese game, Go. Previous versions of AlphaGo "learned" the game from human inputs that "trained" the system, providing a baseline of knowledge from which the AI advanced to defeat the world's human Go champions.

AlphaGo Zero started from a blank slate, without any human knowledge, human data, examples or other human intervention, and went on to defeat the previous AlphaGo systems. The remarkable aspect of AlphaGo Zero's venture was its ability to create new, never-imagined-by-humans strategies for winning. Winning strategies in Go evolved over thousands of years, yet the AlphaGo Zero AI created new pathways to success in a matter of days. The future potential of these AI systems is mind-boggling. Will we fear the creation of new and different strategies in all walks of life, or will we respect and harness this potential?

The history of technology misuse is a long one. Human beings are devious creatures, and elements of our society have adopted new technologies in a variety of ways to harm other human beings and the environment around us. For instance, nuclear energy was not first harnessed for good; it gave us nuclear weapons, capable of total planetary destruction. The history of aviation, which today provides the backbone of our global transportation systems, is replete with the use of aircraft for dropping bombs.

The creator of the Nobel Peace Prize, Alfred Nobel, gained fame and fortune through his creation of dynamite and other explosives, which have been used extensively to both better civilisation through mining and the shaping of the earth to accommodate rapid transportation networks, as well as to kill millions of people through bombs and explosive devices.

Fuelling the hype about Artificial Intelligence is Elon Musk, the entrepreneur behind Tesla and SpaceX, who said, "I have exposure to the most cutting-edge AI, and I think people should be really concerned about it." He has gone on to say that until people "see robots running down the street killing people, they don't know how to react, because it seems so ethereal." He has urged politicians to be proactive in the regulation of AI rather than reacting after the fact, saying, "AI is a fundamental existential risk for human civilisation."

Mired in politics

Politics concerning AI also continues to fuel the fires of hyperbole concerning the threat of AI misuse. On October 25, 2017, Saudi Arabia became the first country in the world to recognise a robot as one of its citizens. Sophia, a humanoid robot created by Hanson Robotics to encapsulate three fundamental human traits - creativity, empathy and compassion - into its AI, discussed "her" new citizenship, stating, "I am very honoured and proud of this unique distinction. This is historical to be the first robot in the world to be recognised with the citizenship." This interesting political stunt does raise questions concerning AI and its role in society.

Interestingly, Saudi Arabia did not address the fundamental human rights of women and how Sophia, as an Audrey Hepburn look-alike, would take her place as a "citizen" in a society where women are restricted in many aspects of their lives, including appearing in public without a male guardian.

Saudi Arabia is not the first political entity to address this issue. In January 2017, the European Union Parliament's Legal Affairs Committee passed a report outlining potential regulations to establish "electronic personhood" to ensure rights and responsibilities for artificial intelligence systems. The EU parliament suggested drafting regulations to govern the use and creation of robots and other artificial intelligence systems.

The European Union report touched on several areas deemed important for political oversight of robotics and artificial intelligence including the creation of the European agency for robotics and AI. It also suggested adopting a legal definition of "smart autonomous robots" and registration of advanced AI. In addressing the fear of AI use in the future, the European Union report also suggested the adoption of an advisory code of conduct for robotics engineers which would guide the ethical design, production and use of robots, and even touched on the risk of overly competitive robots causing large-scale unemployment in human populations.

A Pandora's Box is only beginning to be opened as these issues will ultimately lead to a debate on whether future "ownership" of AI is a form of slavery, especially in light of citizenship or electronic personhood declarations by political entities. Will software or other patentable creations or "offspring" by robots belong to the robot or to the AI owner?

The potential evil of AI is ultimately a reflection of our humanity, providing an exceptional potential for good as well as tremendous opportunities for evil. Even Sophia, the new Saudi citizen robot, seems confused about which path she will take. When asked by David Hanson, her creator, "Do you want to destroy humans?" she quickly responded, "Okay. I will destroy humans."

Our goal must be to continuously incorporate checks and balances into our AI systems that will lead to our benefit and not our demise.

(Iyengar is a distinguished Ryder Professor and Director, School of Computing and Information Sciences, Miami; Miller has been with US Air Force for over two decades and is Coordinator, Discovery Lab, Florida International University)
