There have been warnings for many years about the potential for disaster with Artificial Intelligence implementations. Many luminaries, from Elon Musk to Stephen Hawking, have warned about the implications if we unwittingly create a robotic overlord who deems that we are irrelevant at best and destructive at worst, and decides the world will be better off without us.
So why am I concerned now, and should you be?
The trigger for me writing this piece may seem strange. DeepMind's AI agents recently won a series of StarCraft II Real Time Strategy (RTS) games against top professional players, beating the humans in 10 games out of 11. RTS games can be extremely complex, and while we've seen human champions fall to machines before (Kasparov at chess and Ke Jie at Go, for example), this is the first time the complexities of StarCraft II have been mastered by an AI agent. Why is this significant?
Unlike in a game like chess, the playing area in StarCraft II is large, and because of the "fog of war" (a veil over unexplored areas), the contents of the map are invisible to the player at the start of the game. In other words, you can't see what your opponent is doing until you've explored the map or their units show up in your part of it. A huge number of variables shape which strategy to adopt, and there is no single "best strategy" for victory: early rushes, where a low-tech, high-volume army charges in and wipes out the opponent's base, can succeed, but long, drawn-out matches are just as common.
Poor resource management decisions early in the game (e.g. planning to invest in one technology and neglecting others) can have long-term implications. And the variability in capability and unit strengths for each of the three races also impacts how a strategy is built.
In addition, as the name of the game category suggests, all action takes place in real time. With variables constantly changing, multiple decisions need to be made at any given moment, and the player must continually adjust based on new information.
Again, "so what?", I hear the non-gamers out there ask. Why should you care? Tim Urban, on his hugely interesting site "Wait But Why", frames the situation in a two-part post on AI like this: AI will mean either our eventual ascent to immortality (something I think is pretty horrifying) or our falling off the evolutionary balance beam into extinction.
The key point in his argument for me is that should AI ever reach the level of AGI (Artificial General Intelligence) it will so rapidly outstrip us and achieve ASI (Artificial Super Intelligence) that we won’t even be able to comprehend how vastly superior the new intellect (and our new overlord) is. For a visual representation of what that might look like, take a look at his Intelligence Staircase.
In the context of StarCraft II, the AI's ability to conduct 200 years of training in a single week (at current silicon speeds) shows the evolutionary advantage a computer-based intelligence has over a human. To do the same amount of training would take a human... 200 years 🙂 We can't operate faster than our biology allows. We're not able to test, analyse and adjust strategies as quickly or as accurately as an AI agent can. We're also hampered by cognitive biases, a lack of perfect recall and an inability to execute more than one task at a time.
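To put that compression in numbers, here is a rough back-of-envelope sketch (my own illustration, assuming 52 weeks per year; the figures are indicative, not taken from DeepMind's published results):

```python
# Rough sketch: how much faster an agent that absorbs 200 years of play
# in one week is accumulating experience, relative to a human practicing
# in real time. Illustrative arithmetic only.

WEEKS_PER_YEAR = 52

def training_speedup(experience_years: float, wall_clock_weeks: float) -> float:
    """Ratio of simulated experience to elapsed real time."""
    return (experience_years * WEEKS_PER_YEAR) / wall_clock_weeks

speedup = training_speedup(experience_years=200, wall_clock_weeks=1)
print(f"~{speedup:,.0f}x faster than real-time human practice")
```

A factor on the order of ten thousand, and that is before any hardware improvement: the same biological human is still limited to one game at a time.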
We’re seeing real progress in the implementation of autonomous vehicles. AI-driven assistants are becoming more and more part of our lives. Deep-learning algorithms are mining the world’s data, driving everything from investment decisions to medical diagnoses. The technologies to develop, for example, lethal autonomous weapons (or killer robots, for our sci-fi aficionados) are all available today. We have a huge proliferation of ANIs (Artificial Narrow Intelligences) which are good at one specific task or set of tasks.
If we are not intentional about the path we're following, it is not a huge leap to posit a situation where we inadvertently create an overarching AGI, then an ASI, which is neither malevolent nor benevolent. It will simply be Other, and, as with the AI that beat the StarCraft II champions, its decision-making will be largely incomprehensible to human observers. If you really want to worry about this, Nick Bostrom's book Superintelligence paints a bleak picture indeed.
While there are clearly many advantages to leveraging AI, we need to be far more aware of the implications. And while the recent developments may seem trivial, they are one more step along the path to humanity no longer being the "smartest" entity on the planet, with all that may entail.