The following article was written by Dr. Teck Boon, Research Fellow, Rajaratnam School of International Studies, Nanyang Technological University, Singapore. The article was published as part of Digital Asia Hub’s AI bumper Newsletter on December 2, 2016.
Earlier this year, the defeat of Go world champion Lee Sedol by Google’s AlphaGo underscored the coming-of-age of artificial intelligence (or AI for short). Since 1956, when the term AI was first coined, billions of dollars of investment have flowed into the development and commercialization of AI – a term referring to computer systems that demonstrate human-like intellect. Today, major tech companies like Google and IBM are claiming that AI will benefit humankind in unprecedented ways. For example, autonomous vehicles (AVs) will improve both road safety and traffic flow, while the Internet-of-Things (IoT) promises to improve everything from our quality of life to environmental protection to elderly care. AVs and most IoT gadgets alike have some form of AI technology embedded.
Even so, as AI-enabled technologies proliferate, there is growing concern about the pitfalls.
As a system capable of learning from its own mistakes, a perfected AI would not be that different from us. It might conceive of new ways to get things done better, faster and smarter. We do that all the time, except that we also get exhausted, distracted and sidetracked every once in a while. AI-enabled systems, meanwhile, will not complain, bargain for improved working conditions or, when that fails, go on strike. In other words, AI, once it has reached the pinnacle of its development, will not only be on par with humans in terms of cognition, strength and determination but significantly better in many respects. Now that is a frightening thought, especially when we consider how cruel we can be to lesser beings like animals and insects.
Then there is Hollywood. Sci-fi franchises like Terminator and Battlestar Galactica, for instance, have portrayed our gruesome extermination by AI supervillains. To be fair to the scriptwriters, how else is a super-intelligent machine supposed to react when it finally realizes that alarmed humans pose the single greatest threat to its survival? We invented the deadly concept of the pre-emptive strike, so it is natural to assume that AI – created, after all, in our own image – would understand the concept too. The trouncing of Lee by AlphaGo is perhaps a harbinger of mankind’s annihilation at the hands of Skynet, the malevolent AI system in the Terminator movie franchise.
The reality is likely to be far less apocalyptic, according to AI experts.
In spite of the progress made since 1956, AI technology is still considered quite rudimentary. In fact, most experts do not see AI developing human-like cognition until at least the end of this century. While AI is far superior to humans in rule-based and routinized functions, it remains hopelessly deficient in unstructured tasks. Indeed, AI-enabled robots still find many simple household tasks, like assembling furniture and folding laundry, insurmountable. Most notably, AlphaGo did not even realize that it had defeated a world champion because, at the most fundamental level, it does not understand the concept of winning. Even a three-year-old kid knows what that means. So, at least for the foreseeable future, we need not worry that AI is an existential threat.
Still, like every major technological invention in human history, we can expect AI-enabled technologies to bring forth new challenges even as they promise unprecedented benefits. There is no space here to discuss all of them but two are certainly pressing enough to highlight.
One is technological unemployment – a term for jobs destroyed by new technologies. Indeed, the major concern right now is that technological unemployment is inevitable once the full suite of AI-enabled technologies comes online. For instance, AVs – which rely on AI-enabled software to operate on-board sensors and controls – are expected to put many transportation workers out of work because these cutting-edge vehicles are much safer and more dependable than human-driven ones. Likewise, AI-enabled robotics will likely displace many workers in the service industry, since their work is characteristically routinized and structured. Singapore, for instance, is trialling a number of smart bins – high-tech trash bins that alert city managers when full. Obviously, a major goal of these smart bins is to cut the number of sanitation workers needed by optimizing waste collection in the city-state.
Although one must concede that new jobs will be created by the introduction of AI-enabled technologies, it is also vital to recognize that displaced workers might not have the requisite skills for them. More significantly, traditional coping mechanisms like continuing education and lifelong training are less likely to be effective because of the difficulty of foreseeing the kinds of jobs that will be created.
In the first, second and third industrial revolutions – seismic shifts that gave us steam power, electricity and electronics respectively – machines substituted for manual labour and many jobs were lost. Nonetheless, living standards improved over time as more value-added work was created. What is likely to be very different this time round, with the fourth industrial revolution, is that subsequent job growth is expected to be minimal, since many of the new jobs created will probably be filled by AI-enabled robots too.
In Southeast Asia, Singapore will ironically benefit from the introduction of AI-enabled technologies, since the country suffers from persistent labour shortages. The story will be quite different for countries with a large labour force, for they will have to deal with many low-skilled workers who can no longer find work easily. Estimates by the World Economic Forum point to as many as five million workers around the world losing their jobs in the next five years owing to the introduction of, among other things, AI-enabled technologies.
Apart from technological unemployment, cybersecurity is also a matter of grave concern because AI-enabled technologies will be susceptible to hacking and manipulation by cyber criminals. Regardless of how sophisticated AI-enabled technologies become, they will always be software-based and, because of that, will share the same vulnerabilities that computer programs have today. Even more troubling are plans to link AI systems together (so as to facilitate data transfer and sharing), making it possible for a breach in one system to spread across entire networks. Furthermore, as militaries around the world begin deploying autonomous weapon systems like robotic sentries and killer drones, the real danger is that these systems could be spoofed and manipulated by the enemy to turn their guns against us. This scenario does not seem so farfetched once we recall “Tay” – the AI chatbot that had to be terminated by its creator, Microsoft, when it turned into a hateful, sex-crazed program after mining online conversations.
An interesting and novel suggestion to temper the disruption AI will have on the labour market is to introduce a form of universal basic income – a subsistence salary for the unemployed financed by the gains from AI-enabled technologies. Appealing as the plan may sound, it remains unclear whether developing economies in Southeast Asia have the governance structures and fiscal capacity to bring this techno-utopian vision to life. Likewise, many countries in the region have neither the human nor the technical expertise to deal with increasingly sophisticated cyber threats. If a superpower such as the US is struggling to fend off frequent cyber intrusions into its government and business networks, then it is reasonable to conclude that the challenge is greatly magnified for Southeast Asia.
Now that we have become so used to the comfort that comes with modern technology, unplugging from it is no longer a realistic option. Moreover, turning our back on AI would also mean turning away from the myriad benefits it promises to deliver. So the more pragmatic move would be to weigh the risks of the technology against its potential upside. Going forward, there are three ways to temper the downside of AI.
Firstly, the world as a whole will have to push for the regulation of AI research so that the technology is never weaponized. Akin to current conventions that ban the use of gene-editing to establish a pregnancy, the international community must start to recognize that there are unforeseen dangers in handing the trigger over to AI. Secondly, countries need to enhance their adaptive capacity to better prepare themselves for an uncertain future. By being nimbler and more adaptable, they will be in a stronger position to respond quickly and adjust to conditions we are only beginning to fathom. Lastly, countries around the world should also strengthen national resilience so that they do not cave in or come apart at the seams in the face of mass disruption. Even when the challenges are insurmountable, these countries will likely remain standing if they start to cultivate a deep sense of resoluteness and shared identity.
Ultimately, these measures alone cannot guarantee that we will overcome AI’s downside. It would be an outright lie to claim that they could. But at the very least, these small steps will give some assurance that AI will not be our last invention.
This feature was written exclusively for Digital Asia Hub. For permission to republish or for interviews with the author please contact Dev Lewis.