Did AI Just Make The Leap To Being Intuitive?

One of the most unfortunate things about Artificial Intelligence (AI) is that a moat of mystery has formed around it, convincing most mere mortals that AI is a black box they couldn't possibly begin to understand. In a strange way, we're drawn to the mystique of a machine that thinks in ways we can't understand or predict: the alien intelligence among us! It's a tantalizing fear. And that fear is fueled by regular reports about AI's latest conquests.

Most recently, DeepMind (a Google company) announced that its new AlphaGo Zero (an AI built specifically to play the game Go) had exceeded the abilities of DeepMind's earlier version, AlphaGo, which had already beaten the world's best Go player, Lee Sedol. That means AlphaGo Zero can not only beat the best human player, but also the best non-human player. Or, to be blunt: Zero can't be bothered with humans; they're just not enough of a challenge.

The ancient game of Go, with roughly 10^800 possible games (by comparison, there are about 10^80 atoms in the visible universe), is considered by many to be the world's hardest board game, requiring deep intuition as well as strategy.

What makes AlphaGo Zero a breakthrough is that it was not trained to play Go by humans. Prior to Zero, you had to at least seed an AI with a large library of human games and a good-sized repertoire of basic moves and countermoves. Zero, however, learned Go all on its own over the course of just 40 days by playing against itself.
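None of this requires magic, though. Here is a minimal sketch of learning by self-play, written in plain Python as my own toy illustration; it is not DeepMind's code, it is nowhere near AlphaGo Zero's scale, and every name and number in it is made up for the example. An agent plays the simple game of Nim against a copy of itself, remembers which moves led to wins, and gradually favors those moves.

```python
# Toy illustration of learning by self-play (not DeepMind's code): a tabular
# agent teaches itself Nim (take 1-3 stones from a pile; whoever takes the
# last stone wins) purely by playing against itself and remembering which
# moves tended to lead to wins.

import random
from collections import defaultdict

PILE_SIZE = 15      # starting number of stones (chosen for illustration)
MOVES = (1, 2, 3)   # legal moves: take 1, 2, or 3 stones
EPSILON = 0.1       # how often to try a random move instead of the best-known one
ALPHA = 0.1         # how strongly each game outcome nudges the estimates

# value[(stones_left, move)] = estimated chance that this move leads to a win
value = defaultdict(lambda: 0.5)

def choose_move(stones):
    """Pick the move with the best estimated value, exploring occasionally."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: value[(stones, m)])

def self_play_game():
    """Play one game of the agent against itself; return each side's moves and the winner."""
    history = {0: [], 1: []}          # (stones_left, move) pairs for player 0 and player 1
    stones, player = PILE_SIZE, 0
    while True:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player    # the player who took the last stone wins
        player = 1 - player

def train(games=50_000):
    for _ in range(games):
        history, winner = self_play_game()
        for player, moves in history.items():
            outcome = 1.0 if player == winner else 0.0
            for state_move in moves:
                # nudge the estimate for that move toward the observed outcome
                value[state_move] += ALPHA * (outcome - value[state_move])

if __name__ == "__main__":
    train()
    # After enough self-play, the agent typically rediscovers the classic rule
    # (leave your opponent a multiple of four stones) without ever being told it.
    for stones in range(1, PILE_SIZE + 1):
        legal = [m for m in MOVES if m <= stones]
        best = max(legal, key=lambda m: value[(stones, m)])
        print(f"with {stones:2d} stones left, take {best}")
```

Swap the lookup table for a deep neural network, add a tree search to look ahead, and scale the whole thing up by many orders of magnitude, and you have the general shape of what Zero was doing over those 40 days.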

Pretty amazing, right? But there’s that black box again. How does AlphaGo Zero learn on its own? Is it being intuitive? Does it dream of gleefully squashing human Go opponents?

Follow Your Gut (unless you don’t have one…)

First off, intuition is just a label we use for a correct decision that's based on incomplete knowledge. We're okay with people being intuitive; in fact, we elevate and admire them for it. Yet we're unsettled by the prospect of a machine making a decision that involves intuition, ambiguity, or less-than-complete data. But what if our gut is nothing more than a bunch of variables that we're not consciously aware of?

AI is actually very well suited to making those sorts of highly intuitive decisions. Since it isn't conscious, it brings no preconceptions to what it observes, so it's free to weigh everything that influences a particular decision. It's sort of like a machine version of Sherlock Holmes, who notices every minute detail and then works out which of those myriad details actually matter before reaching a conclusion.
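To see that "noticing which details matter" in miniature, here is another toy sketch, again my own illustration in plain Python with a made-up rule and made-up inputs rather than a piece of any real system. The learner observes five details about each situation, only one of which actually determines the outcome, and by comparing its guesses to the results it gradually assigns weight to the detail that matters and ignores the rest.

```python
# Toy illustration of "working out which details matter" (my own example):
# five observed inputs, but only input 0 actually determines the outcome.
# The learner discovers that from outcomes alone.

import random

random.seed(0)

NUM_INPUTS = 5        # five observed "details"; only the first one is relevant
LEARNING_RATE = 0.1

def make_example():
    """One observation: random details, with the outcome set solely by detail 0."""
    inputs = [random.choice([0.0, 1.0]) for _ in range(NUM_INPUTS)]
    outcome = inputs[0]               # the hidden rule the learner must uncover
    return inputs, outcome

weights = [0.0] * NUM_INPUTS          # learned importance of each detail

for _ in range(5000):
    inputs, outcome = make_example()
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = outcome - prediction
    # nudge each weight in whatever direction reduces the error (the classic delta rule)
    weights = [w + LEARNING_RATE * error * x for w, x in zip(weights, inputs)]

print("learned importance of each detail:", [round(w, 2) for w in weights])
# Typically prints something close to [1.0, 0.0, 0.0, 0.0, 0.0]: the learner
# has figured out which detail was the important one.
```

Real systems juggle millions of weights rather than five, but the principle is the same: importance isn't programmed in, it's discovered.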

AI is really doing nothing more than observing every available input and then determining which pattern of inputs results in progress towards a desired goal. Still sound opaque? Take a look at the second article in this series, where I use a very simple analogy that demystifies the AI black box. (Link to the next article in this series.)