In the age of technological singularity

Artificial Intelligence

The term ‘singularity’ has many meanings. In theoretical physics, a singularity is a point in space where the gravitational field becomes infinite. An example is the centre of a black hole, where the known laws of physics cease to hold. Technological singularity, on the other hand, a term first attributed to John von Neumann in the 1950s, refers to the hypothesis that machine intelligence will one day exceed human intellect. In other words, an Artificial Intelligence (AI) that constantly improves itself could someday surpass human intelligence.

The idea of technological singularity has sparked heated debate. Some believe it could lead to a utopia in which robots automate daily human tasks, freeing us to lead lives of leisure. Others believe that singularity could spell the end of the human race. There is also a growing belief that singularity is not only probable but achievable within the next 15 to 20 years. The last couple of years have seen great leaps in AI and robotics: AlphaGo, a computer program developed by Google DeepMind, and Uber’s self-driving cars show just how quickly AI is influencing the world.

Risky yet successful
Many distinguished scientists and technologists strongly believe that AI and singularity could have detrimental effects. Elon Musk, the Canadian-American businessman and inventor, once compared progress in AI to ‘summoning the demon’. Musk has also been outspoken about the need to recognise and control digital super-intelligence. Along with Sam Altman of Y Combinator (a start-up incubator), he founded a non-profit AI research community called OpenAI, whose primary aim is to make AI technologies open source. In line with OpenAI’s goal, companies like Facebook have opened up some of their AI technologies. Stephen Hawking is another scientist who believes that AI could be the greatest threat to humans. In an op-ed for a UK newspaper, Hawking wrote, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

From a biological standpoint, many believe that we are already in the age of singularity. Biotechnology and nanotechnology already fuse biology with computer science: biotechnologists use AI tools to predict new production routes for bioproducts, while in nanotechnology, artificial neural networks are used to improve the classification of nanomaterials. AI has also played a pivotal role in healthcare over the last few years, with machine learning improving the accuracy of diagnosis and outcome prediction. A report by research firm Frost & Sullivan estimates that spending by healthcare providers and consumers on AI tools will increase from $633.8 million in 2014 to $6.6 billion in 2021, a more than 10-fold increase in just seven years!
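As a quick sanity check, a few lines of Python confirm the roughly 10-fold growth claim and translate it into an annual growth rate (the dollar figures are taken directly from the report as quoted above):

```python
# Growth figures from the Frost & Sullivan estimate quoted in the text.
spend_2014 = 633.8e6   # USD, healthcare AI spending in 2014
spend_2021 = 6.6e9     # USD, projected spending in 2021
years = 2021 - 2014    # 7 years

multiple = spend_2021 / spend_2014   # overall growth multiple
cagr = multiple ** (1 / years) - 1   # compound annual growth rate

print(f"{multiple:.1f}x over {years} years, ~{cagr:.0%} per year")
# → 10.4x over 7 years, ~40% per year
```

A 10-fold rise over seven years thus corresponds to spending growing by roughly 40 per cent every year.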

Another interesting aspect is that, with the popularity of social networking and matchmaking sites and applications, our interactions and relationships are increasingly shaped by algorithms. Human immortality and complete control of the human genome are often mentioned as possible ramifications of singularity. According to a research paper by scholars from Macquarie University, Australia and the University of California, Davis, USA, in about 110 years we will have enough digital storage to save the massive information content contained in all the DNA in all the cells on Earth!

Earlier references
Finally, singularity finds ample mention in literature. Author and computer scientist Ray Kurzweil’s The Singularity is Near is a prominent example. In the book, Kurzweil predicts some of the technological advancements that will lead to singularity, the onset of which he places around 2045. Science fiction writer Vernor Vinge also championed the idea, as reflected in his 1993 essay Technological Singularity. Many films, such as Blade Runner and Ex Machina, along with television shows like Person of Interest, explore the possibility of an AI uprising; some represent singularity as an inevitable outcome rather than a hypothetical scenario. Technologies like 3D printing, delivery drones and smart blood, which seemed impossible just years ago, are now a reality.

At a recent AI conference held in California, USA, most attendees, when questioned, felt that singularity was not feasible for at least the next 30 years. At the moment, even the smartest robot pales in comparison to human intelligence. That is not to say that the capabilities of AI systems will not improve over the years. As Moore’s Law approaches its theoretical limit, the notion of singularity is rapidly gaining traction, and it is surely not one that can be ignored.
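Moore’s Law, mentioned above, is itself a simple exponential: transistor counts on a chip doubling roughly every two years. The minimal sketch below illustrates that projection; the billion-transistor baseline is an illustrative assumption, not a measured figure.

```python
# Moore's Law sketch: one doubling per (roughly two-year) period.
# The baseline transistor count below is an illustrative assumption.

def moores_law(base_count: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, doubling once per period."""
    return base_count * 2 ** (years / doubling_period)

# An assumed 1-billion-transistor chip, projected 10 years ahead
# (5 doublings, so a 32-fold increase):
projected = moores_law(1_000_000_000, 10)
print(f"{projected:,.0f}")  # → 32,000,000,000
```

It is this relentless doubling, and the question of what replaces it as physical limits close in, that keeps the singularity debate alive.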