Is AI a Threat to humanity?

Home » Blog » Effective Altruism »Is AI a Threat to Humanity?
Is AI a Threat to humanity?
4 Facts You Should Know

The dystopian vision of sentient machines rise against the human creator is a leitmotif in sci-fi fictions. Are these mere fictional imaginations? Or should we be deeply concerned with the devolvement of artificial intelligence? Two of the biggest tech leaders in Silicon Valley have very different opinions on the topic of AI risk. Elon Musk has been very vocal about the risk of AI, calling it the biggest threat we faced as a civilisation, while Mark Zuckerberg is optimistic, calling Musk’s “doomsday” attitude irresponsible.

Here are 4 facts you need to know to understand the debate. Spoiler alert: contrary to the apocalypse fear popularised by Hollywood, most industry experts – including the skeptics – do not worry about “evil robots”, but some experts do believe that AI does pose potential threats to humanity, if not developed properly.

1. The development of AI is expected to grow exponentially

Mark Zuckerberg suggested that AI will be better than human at most basic tasks in 5 to 10 years. This is in line with the opinion of the 350 AI experts polled in a 2015 survey by the Future of Humanity Institute (FHI), the University of Oxford. These industrial experts believe that AI will outperform humans in the next 10 years in tasks such as translating languages and writing high school essays. These experts also predicted that, by 2049, AI will be able to write a New York Times best-seller.

For now, AI is still at the stage of artificial narrow intelligence (Narrow AI), meaning that an AI can only be designed to do a very specific task. But the above experts also believe that there are 50% chance that AI will reach artificial general intelligence (AGI) within 45 years, meaning unaided machines can accomplish every task better and more cheaply than human workers. Some experts also believe that upon the emergence of AGI, the gap between human intelligence and machine intelligence will soon become insurmountable. As AGI will be able to learn and upgrade itself quickly without human instructions, AI will eventually achieve superintelligence – a level of intelligence that is beyond all of the humanity combined.

Nonetheless, AI has been repeatedly over-hyped in the past, even among the experts in the field. Will AI actually be able to undertake a gigantic leap in this century? Only time can tell. But the bottom line is: if superintelligence is actually developed, it will profoundly change the world as we know it – for better or worse.

2. AI is expected to solve some of the world’s biggest problems

Optimists like Mark Zuckerberg vast euphoric about the development of AI because it has an enormous potential in making the world a much better place. Many of our readers are probably familiar with the application of AI in the area of self-driving cars, which is expected to drastically reduce traffic accidents. AI is also expected to have a huge impact in other areas such as healthcare. According to the experts polled by FHI in 2015 AI will be able to perform surgeries by around 2053, By the time AI reaches the stage of superintelligence, some experts expect AI to eradicate war, disease, and poverty, making the creation of superintelligience one of the biggest events in human history.

3. What could go wrong? AI turns competent with goals that misalign with ours.

On the other end of spectrum, skeptics like Elon Musk and Stephen Hawking worry that the development of AI may go astray, turning into an existential threat to mankind. Elon Musk even put his money where his mouth is – he donated 10 million to AI risk research institutions in 2013. But how could AI go wrong if it is so “intelligent”?

One popular misconception is that AI will turn “evil”. The image of malevolent AI destroying mindkind has been popularised by Hollywood. But, we should stop anthropomorphise AI – AI does not hate you, nor does it love you. According to Oxford Professor Nick Bostrom, final goals and intellgience are independent. AI is only “intelligent” in terms of instrumental rationality – skill at prediction, planning, and means-ends reasoning in general. What most researchers worry about is AI turns competent with goals that misalign with human interest. In other words, AI may become very good at doing something that is detriment to human welfare.

There are two ways this may turn into something disastrous. First, AI may be programmed to do something destructive. Autonomous weapon is a case in point. Second, even when an AI is being programmed to do something innocuous, it may develop a destructive method for achieving its goal. Prof. Bostrom calls this “perverse instantiation” – a superintelligence discovers some ways of satisfying the criteria of its final goal that violates the intentions of the programmers who defined the goal. In a thought experiment by Prof. Bostrom, he gives us an example of a superintelligence whose goal is to make people smile. The superintelligence may end up reasoning that the most efficient way to achieve its goal is to paralyze human facial musculatures into constant beaming smiles. Hence, one challenge is to ensure that the way in which AI maximize a utility function does not violate the programmer’s intention and human value in general.

4. There is a pressing need to look into AI regulations

While AGI and superintelligence are unlikely to emerge in the next 10 or 20 years, it is still very important to look into AI research regulations now. Since an AI development team that does not care about AI safety will likely to move faster than a team it does, at some point, the governments have to step up to steer a middle course between advancing humanity via technology and minimizing AI risk. AI will bring about disruptions in a scale that is larger and quicker than anything else we have seen. Policymakers have to familiarize themselves with relevant issues to make sound decisions. If the first superintelligence does not align with human goals, the dire consequence will be unimaginable. On the other hand, if the first superintelligence is developed in such a way that it aligns with human values throughout its operations, we will have the first-mover advantage in ensuring all the AI subsequently developed is align with human value as well.


Further Readings

Does the Use of Enhancement Drug Diminish Our Agency?

GOGOVAN | The Burgeoning Hong Kong Startup Scene

Act vs Rule Utilitarianism: A Critique of Harsanyi’s Assessment

Combating Violent Extremism through Counter Narratives

Modern Slavery in Hong Kong: The Practice of Unethical Agencies


Blog Search

Search