The Biggest AI Risk of the Next Decade is not a Robot Uprising

Grand generalisations about the future impacts of artificial general intelligence overshadow the more pressing issues we face today.

Joseph Early

Finally, robotic beings rule the world — pictures of the Terminator and HAL are just played out at this point. (Flight of The Conchords, Robots)

The past decade has seen many interesting and impressive developments in tech (but it hasn’t been without its fair share of flops either). AI research and development have seen a huge increase in the past ten years, due in part to the accessibility of large datasets and greater compute power that has facilitated the deep learning revolution. With this recent uptake in the use of AI, what can we expect in the next decade?

Previous tech predictions for a “2020 vision” have been pretty hit and miss, ranging from completely wrong to not quite there yet. The future of AI is one of the most uncertain areas, with many AI experts reluctant to give any concrete predictions. In a 2016 survey, 67% of AI researchers said an artificial superintelligence was possible but more than 25 years away; 25% said it wasn’t possible at all.

Whilst the common fears of AI are based around a superintelligent robot uprising that wipes out humanity, we have far more pressing concerns in the near future. This is known as The Great AI Paradox: worrying about far-off existential AI risk is misleading and a distraction from the real problems that AI could cause in the next decade. The aim of this article is to highlight the potential (mis)uses of the AI we currently have and how it differs from humanity-ending superintelligence. However, the irony is not lost on me that this article may also be part of the long list of historically unreliable AI predictions…

The past ten years have been a very exciting time for AI research. 2012 saw the unveiling of AlexNet — a deep convolutional network that gave considerably better performance on the ImageNet competition (an oft-used benchmark for assessing automated image classification). The original paper has over 53,000 citations on Google Scholar, and could be considered the beginning of the deep learning revolution that has kickstarted the resurgence in AI research and application.

Super-human performance in Go was something that experts predicted was at least another 10 years away at the time.

This deep learning springboard opened the door for many further AI applications. DeepMind developed revolutionary models for playing games, achieving the milestone of super-human performance in Go in 2016, something that experts at the time predicted was at least another ten years away. Natural language processing also made significant progress over the past decade, with the development of BERT in 2018 redefining the state of the art in the area and allowing computers to better understand our language and speech.

Many areas of technology are now making use of AI, from self-driving cars to language translation to mobile phones, and the rate at which it has been applied across different industries demonstrates the significance of AI research in the past decade. Indeed, three of the big names in AI were awarded the 2018 Turing Award (the Nobel Prize of Computing) for their work on deep learning.

The concept of deep learning is actually a fairly old idea; it’s been around since the 80s. However, its real utilisation was only possible once computing power caught up and we had large labelled datasets that could feed the data-hungry algorithms. Whilst deep learning has allowed us to do many impressive things, we are still relying on a 30-year-old idea; could we be beginning to find the limits of deep learning and need something new?

So what can we expect to see over the next decade? There are many predictions for how AI will affect industry in 2020 and beyond — the majority of this is research from the past few years filtering down and being applied in new ways. We have already seen it enhance certain industries, and this is likely to continue as data becomes ever more abundant.

Our current AI is significantly lacking in certain areas, and is still a long way from general intelligence.

All of these new innovations are still in the realm of narrow AI; programs that are only competent at the one specific task they were designed for (your chess-playing AI would be no good at booking a table at a restaurant, and vice versa). Even though some of these could be considered expert AI, performing at human or superhuman levels, they are still fundamentally and significantly short of artificial general intelligence (AGI). Even the impressive feats in Go, something that was predicted to be at least ten years away, remain in the realm of narrow/expert AI.

Our current narrow AIs are significantly lacking in certain areas. They are terribly sample inefficient: a deep learning system must be shown many, many examples before it can classify things accurately (in comparison to us humans who can learn from only a handful of examples). They have no common sense and fail to understand things we take for granted, although there are ongoing efforts to embed common sense understanding in our current systems. Deep learning systems are also very brittle; small changes to their operating environment can lead to catastrophic failures and they can even be exploited through adversarial attacks.
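To make the brittleness concrete, below is a minimal sketch of the Fast Gradient Sign Method, one of the simplest adversarial attacks on an image classifier. The `model`, `image` and `label` names are placeholders, and real attacks and defences are considerably more involved; this is only an illustration of how little it can take to fool a trained network.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge `image` in the direction that most increases the classifier's loss.

    `model`, `image` (shape [N, C, H, W]) and `label` are hypothetical placeholders.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss against the true label
    loss.backward()
    # Take a small step that increases the loss, bounded by epsilon per pixel.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in a valid range
```

An epsilon of 0.03 corresponds to pixel changes of around 3%, often imperceptible to a human, yet frequently enough to flip the prediction of an undefended network.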

Since it seems AGI is still quite a long way off, should we be worrying about it? AI may be subject to Amara’s Law:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

The predicted impact of AI in the next decade may be overhyped (especially if we reach the limit of deep learning), and the consequences of developing AGI might be greater than we can even imagine. If this is the case, we should certainly be thinking about how to safely develop AGI, but without letting it overshadow the ever-increasing issues that could arise with our current AI systems. In the words of Andrew Ng:

“There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”

There are differing views around the need for AI safety research. Musk’s OpenAI is based around the development of safe AGI, although he is fairly fatalistic about the future of AI. Contrast this with the views of Brian Cox, who, in agreement with most AI experts, suggests AGI remains a long time away. The problem with grand generalisations about the future impacts of AGI and its existential risk is that they drag attention away from the more pressing issues we are facing today.

That’s not to say AI safety is a pointless research area. By pursuing safety research alongside mainstream AI research, we can ensure that we are ready for AGI when (and if) it finally rolls around. It may even help to improve the performance of our AI systems. A great resource for those interested in AI safety is Robert Miles’ YouTube channel.

The flipside of AGI as an existential risk is that it also provides “existential hope”: it could be the best thing to ever happen to humanity. Unlike other existential risks such as a nuclear apocalypse, the destruction of the human race by an AGI comes with the lure of utopia. We’re still a long way off from world-ending AI scenarios, but that doesn’t mean AI can’t be damaging in the meantime.

Plenty of things can go wrong with the AI we currently have, even without worrying about AGI. Whilst there is still a lot of uncertainty about how AI will impact our society and economy in the next decade, it is likely that it will continue to pervade large areas of industry. Below are three concerns about the use of AI in the next decade.

AI’s impact on jobs is a hotly debated topic. Some greatly overhype the ability of AI to automate large swathes of the workforce, but others think it will create more jobs than it replaces. The main impact of AI will be a shift in the kind of work we do — AI can take over the mundane elements of jobs, allowing human workers to use their creativity and skills that are beyond the scope of our current AI in a complementary fashion. However, this is not without its own set of stresses and will still cause disruption; people will need to willingly embrace more technology in their work and be re-trained to work alongside their new AI counterparts.

There are also concerns around equality issues exacerbated by the use of AI. Corporations could save money by using automated systems in place of human workers, which gives them great benefits in the short term by reducing their costs (and possibly increasing their output). Not only does this widen economic inequality, it is also a bad long term strategy as consumers are then left with less money and cannot use the services provided by said corporations. Measures will need to be taken to ensure that the benefits provided by AI are not kept to a small minority in positions of power.

While AI can be used in many useful and beneficial ways, it also has the potential for unethical and morally questionable uses. Autonomous weapons (and the wider use of AI in the military) are a prime example of how AI can be used in ways that some people might object to. Google received a lot of backlash over Project Maven — its contract with the US Department of Defense to develop AI software to analyse drone footage. Military AI doesn’t have to be Terminator-esque killer robots; there are many ways it can be applied. As AI continues to develop, we will continue to see wider use and increasingly competent AI systems in the military.

I need your clothes, your boots, and your motorcycle. AI-powered DeepFakes are a powerful tool for video manipulation and spreading misinformation. (Ctrl Shift Face)

Beyond physical warfare, AI also has unethical uses through social media and the spread of misinformation. One of the most worrying developments this decade was the rise of DeepFakes, which can spread misinformation through high-tech forgery of videos and voices. Whilst the technology is not quite there yet and it is still possible to spot fakes under close scrutiny, they will only get better with time. The potential ease and efficacy of spreading misinformation through AI systems could have far-reaching and highly damaging consequences, and the technology behind it builds on deep learning without any need to jump to AGI.

As AI systems see wider use and on-going research, one would hope that we have a good grasp of how these systems learn and make decisions. Unfortunately, the deep learning methods behind most of today’s AI are fundamentally black box models: we are quite unable to peek under the hood and see what they are actually doing. The complexity of the models (often with millions of parameters) is far beyond human comprehension, and the systems themselves are unable to explain how they came to a decision. This is quite concerning, especially when these systems are deployed in situations that can directly affect us, for example in self-driving cars or medical applications.
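There are research efforts to open the box, though they remain partial. As one illustration, here is a minimal sketch of a gradient-based saliency map, a simple (and admittedly crude) way of asking which input pixels a network’s prediction is most sensitive to. The `model`, `image` and `target_class` names are placeholders, and this is one technique among many, not a solution to explainability.

```python
import torch

def saliency_map(model, image, target_class):
    """Rough per-pixel sensitivity of the prediction for `target_class`.

    Assumes `image` is a single example with shape [1, C, H, W]; names are placeholders.
    """
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]  # score for the class we care about
    score.backward()
    # Larger gradient magnitude = prediction changes more if that pixel changes.
    return image.grad.abs().max(dim=1)[0]  # collapse colour channels to one map
```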

“Metrics are just a proxy for what you really care about, and unthinkingly optimizing a metric can lead to unexpected, negative results.” — Rachel Thomas

Not only are we unsure of how our current AI systems make decisions, they also sometimes learn the wrong things. The use of metrics to train and evaluate AI systems can often be misleading, and this often harks back to the lack of common sense in AI systems. For example, an AI learnt a wacky and highly effective method for maximising score in a video game without actually winning the race. It optimised exactly what it was told to do, yet the designers actually wanted it to win the race. Whilst this is a trivial and inconsequential example, it doesn’t take much imagination to see how this could have a significant negative impact in real-world scenarios.
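As a toy illustration (with entirely made-up strategies and numbers), consider an agent that simply picks whichever behaviour maximises the score it is measured on; nothing in that objective mentions actually winning the race, so the proxy metric and the designers’ intent come apart.

```python
# Toy example of a proxy metric diverging from the real objective.
# The strategies and scores below are invented purely for illustration.
strategies = {
    # strategy: (score the agent is trained to maximise, does it actually win?)
    "drive straight to the finish line": (900, True),
    "loop forever collecting power-ups": (1500, False),
}

# The agent greedily picks the highest-scoring strategy it has found.
best = max(strategies, key=lambda s: strategies[s][0])
print(best)                 # "loop forever collecting power-ups"
print(strategies[best][1])  # False: highest score, but the real goal is missed
```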

Well, the good news is that we’re already working on the potential impacts of current and future AI. Organisations such as the Machine Intelligence Research Institute and Oxford’s Future of Humanity Institute are looking into how AI will change our future (for better or for worse). The on-going development of AI technologies will need to overcome some of the current issues (lack of explainability, issues of bias, etc.) and include research into improving the safety of AI systems.

As well as technical developments, the next decade will need to see regulation of AI come through from well-informed policy makers that listen to the advice of experts. There is also a need to better inform the general public about AI and begin a conversation about the future directions in which AI could take the human race. By re-focusing our efforts on shorter-term AI impacts, we can ensure the rise of AI over the next decade is as beneficial as possible.
