Rules and rulers of Robotics: Revisiting Asimov

The Digital Alarmist

Roger Marshall, a computer scientist, a newly minted Luddite and a cynic

If you think for even one minute that you are not expendable at your workplace, think again.

These days, even bees are expendable. The pollinating kind. Worker bees and drones alike. Recent scientific studies have shown that the bee population across the globe has been steadily declining due to climate change.

Not to worry, robo-bees are here. They are all drones, but they work. Courtesy of the robotics researchers at Harvard University. These bees can sting and pollinate but do not produce honey. No loss there since sugar is bad for you. AI’s way to control obesity and address climate change issues at the same time, I suppose. Even population growth, if the stingers are laced with deadly pathogens. Who would have thought that the wizards in the AI labs were such socially responsible creatures?

Not just bees, but spiders, cockroaches, fruit flies and more. The fruits of biologically inspired computing and interdisciplinary research, just waiting to be harvested. All in the name of gaining a competitive edge in military, economic and trade matters. Forget the information age. The age of pestilence is upon us.

Your next co-worker could very well be a robot – a thinking, feeling, seeing, hearing and smiling sensor-based animated object such as Simon, a humanoid robot developed at the Georgia Institute of Technology. With an eidetic memory and excellent surveillance skills. No more clunky cameras to install on ceilings or other hard-to-reach places. Your office décor has just been upgraded. Aren’t you glad?

Is Simon cute? Certainly. Trustworthy? Well, that depends. Is Simon aware of Asimov’s rules for robots? Has he been programmed to follow these rules?

In case you do not know who Isaac Asimov is and what these mysterious rules are, I don’t blame you. You are perhaps too busy on Facebook staying in touch with friends and family or checking out the latest bargains on Amazon for which you will pay with Libra currency.

Anyway, Isaac Asimov was an eminent science fiction author of the 20th century who foresaw the possible dangers to society of allowing uncontrolled automation in the workplace and elsewhere. He wanted robot designers to ensure that the automatons they created followed three simple rules. A robot should a) never harm a human, b) always obey its master unless it is ordered to harm a human, and c) protect itself without violating the first two rules.

Since Asimov’s time, robots have considerably evolved (unlike humans?) and can show up in any form, not just human-looking. Think of chatbots and any number of assorted bots roaming around on the internet today. Consequently, the rules will need to be updated. I would suggest changing just the first rule to read thus: never harm a human, physically or otherwise, either directly or indirectly. Given the latest advances in AI, especially algorithms to implement ‘deep thinking’, the robot should have no problem following the first rule. Or will it? Much depends on the prejudices and predilections of the human programmer who implemented the algorithm in the first place.
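For the programmers among you, the three rules, with my amended first rule, amount to nothing more than an ordered set of checks. Here is a toy sketch — every name in it is invented for illustration, and no real robot encodes anything so tidy:

```python
# A toy sketch of Asimov's three rules as ordered checks.
# The Action class and its fields are invented for this illustration.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # amended Rule 1: physically or otherwise,
    ordered_by_human: bool = False  # directly or indirectly
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    # Rule 1: never harm a human -- physically or otherwise,
    # directly or indirectly. Checked first, so it overrides everything.
    if action.harms_human:
        return False
    # Rule 2: obey the master, unless the order conflicts with Rule 1
    # (such orders were already rejected above).
    if action.ordered_by_human:
        return True
    # Rule 3: protect itself, subject to Rules 1 and 2.
    return not action.endangers_robot

print(permitted(Action("fetch coffee", ordered_by_human=True)))  # True
print(permitted(Action("push a co-worker", harms_human=True,
                       ordered_by_human=True)))                  # False
```

The ordering of the checks is the whole point: Rule 1 is tested before Rule 2, and Rule 2 before Rule 3, which is exactly the precedence Asimov prescribed. Whether the `harms_human` flag gets set honestly is, of course, up to the programmer's prejudices and predilections.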

While attending the World Economic Forum in Davos in January 2019, an executive of Bengaluru's best-known IT outsourcing company stated in an interview that companies interested in automating their industrial and business processes were looking to reduce their workforce by as much as 99%.

I read somewhere that the widow spider devours its mates. Fake news, perhaps.
