Why talk of another AI slowdown appears misguided.

Deep Freeze?

The '20s are here and, on cue, the past few months have delivered a raft of decade-in-review retrospectives by commentators from the worlds of entertainment to science, sport to politics, and everything in between. These stories are a staple of media outlets whenever we cross another calendar milestone. So it was no surprise when I came across Sam Shead's BBC article last weekend, which looks to wrap up the past decade's progress in AI with gloomy speculation that we may be on the verge of another AI Winter and that progress in the field is starting to plateau.

Credit to Shead, the sensational headline and early pessimism give way to a fairly balanced article that distinguishes between rates of progress in the two AI sub-fields of Narrow AI (ANI) and General AI (AGI). He describes how AGI (the ability of machines to reason, represent knowledge, plan, learn, communicate and integrate these skills towards a common goal) has, after much early hyperbole, failed to deliver on the promises made on its behalf.

It is perhaps telling that Shead's 'Winter' refers to a trough in some of the hyperbole surrounding AI, rather than in actual, real-life, rubber-on-the-road progress, because the rate of gains in narrow, domain-focussed AI (sometimes called Weak AI, the most unkind of all the labels) has been nothing short of mind-boggling.

It's these practical applications that differentiate the current era of AI development from previous epochs. We've reached the point where real benefits are being realised from real-world applications, by real companies that are making real profits in the process.

Narrow Gains

Several factors have enabled this progress. Better theoretical techniques developed over the past decade have made AI algorithms more effective. Capital from big tech and entrepreneurs is helping deliver practical, commercial applications (and, granted, an inevitable smattering of misfires, rebranders and downright imposters). Mobile computing is creating an abundance of data to sate supervised learning's voracious hunger for real-world examples. And the democratisation of tools and infrastructure means that AI R&D opportunities are no longer the preserve of a handful of specialists and academics, opening up a host of AI possibilities to engineers and tinkerers around the world.

I first began working with Neural Networks, a staple of Narrow AI, twenty years ago, when every component, from file handling to routines for calculating gradient descent, had to be built from scratch. Since then, data manipulation, processing and storage technology has progressed rapidly thanks to the efforts of Google, Amazon, IBM and others. Each has built powerful, universally accessible tools for the development and deployment of AI solutions. Combine all of this with the onward march of Moore's law, and these advances have enabled businesses and organisations to develop AI capabilities that create real practical and commercial value.
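To give a flavour of what building 'from scratch' involved, here is a minimal sketch of the kind of hand-rolled gradient descent routine that once had to be written by hand (the data, learning rate and dimensions are illustrative, not drawn from any real project):

```python
# A minimal, hand-rolled training loop: batch gradient descent fitting a
# single linear neuron by minimising mean squared error. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 examples, 3 features
y = X @ np.array([1.5, -2.0, 0.5]) + 1.0      # targets from a known rule

w, b, lr = np.zeros(3), 0.0, 0.1              # weights, bias, learning rate

for epoch in range(200):
    pred = X @ w + b                          # forward pass
    err = pred - y                            # prediction error
    w -= lr * (X.T @ err) / len(y)            # gradient step on the weights
    b -= lr * err.mean()                      # gradient step on the bias

print(w, b)  # converges towards [1.5, -2.0, 0.5] and 1.0
```

Every line of that loop, along with the file handling and data plumbing around it, was once the developer's responsibility.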

One need only look to one's own everyday experience of interacting with consumer technology to see the benefits: from the voice recognition systems powering Alexa, Siri and Cortana, to recommendations for the next YouTube clip you might enjoy given your apparent fondness for cats, to finding the quickest route home in view of the never-ending roadworks on the M1.

Amid talk of an AI Winter, prospects of organisations walking away or scaling back investment in these narrow domains are precisely nil.

As with most aspects of technological progress, many advances in AI, Deep Learning in particular, are the product of technologies in combination. The pairing of Software Development Kits (SDKs) for building machine learning algorithms with the development of specialised Tensor Processing Units (TPUs) is a prime example, resulting in a compounding of benefits and applications. In this respect, progress in ANI has come from doing more of the same, only better, rather than from some theoretical leap or paradigm shift. Impressive though the technology is, its advances are the product of a kind of combinatorial brute force. Most of the fundamentals are the same today as they were at the start of the millennium, just easier and faster to implement.
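To make the contrast with the hand-rolled loop above concrete, here is a hedged sketch of how the same fundamentals look when expressed through a modern SDK (Keras is used purely for illustration; the layer sizes and training settings are assumptions, not recommendations):

```python
# The same fundamentals via a modern SDK: the network is declared in a few
# lines, gradients are computed automatically, and the identical code can be
# dispatched to CPU, GPU or TPU hardware. Sizes and settings are illustrative.
import numpy as np
import tensorflow as tf

X = np.random.normal(size=(100, 3)).astype("float32")
y = (X @ np.array([1.5, -2.0, 0.5], dtype="float32") + 1.0).reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")  # gradient descent, handled for us
model.fit(X, y, epochs=50, verbose=0)
```

Nothing in this version is conceptually new; the machinery of differentiation, optimisation and hardware dispatch has simply been absorbed by the SDK.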

For this reason, I believe that the progress of Deep Learning in the '10s may now be going much the same way as previous advances such as optical character recognition and chess-playing computers. As the technology becomes normalised, the loss of mystique causes us to reclassify it so that we no longer call it AI. “Yes, it's impressive, but it's not really intelligent, is it!” may be the sort of sentiment that underlies articles like Shead's.

On the Horizon?

So, what of the prospects for progress in AGI?

When it comes to developing systems that can do the things biological brains can do, many daunting puzzles remain, the most challenging of which include understanding the mechanisms that give rise to emotional experience, consciousness, planning function and creativity.

Let's assume that for an AGI to be classed as such, it must demonstrate a degree of competence at meeting the definitions described at the beginning of the article: the ability to reason, represent knowledge, plan, learn, communicate and integrate these skills towards a common goal. They're a little lightweight, but they'll do for now.

Whilst there are clearly many aspects of animal brain function necessary to the challenge of AGI creation that we are yet to understand, it is unlikely that we must understand every aspect, in much the same way that I don't need to be an expert in molecular biology to successfully grow tomatoes in my garden. A level of abstraction or imperfection in the components is acceptable, assuming our only objective is to get an AGI 'off the ground'. Such imperfections could bring with them other issues in relation to control, but I will save that discussion for another time.

For example, in relation to the mechanisms listed above, I believe that those giving rise to planning function and creativity are essential for a system to exhibit the behaviours of an AGI. Both require inventiveness and the ability to imagine and originate something new (or at least a capacity to combine existing thoughts into new ideas).

One need only look at the work of Oscar Sharp in the short film Sunspring, the screenplay of which was written by a Recurrent Neural Network, to understand just how far adrift we are in this domain and how dumb (relatively speaking) ANI can be. (If and when you watch it, consider that it also benefitted from interpretation by a BAFTA award-winning director!)

Other mechanisms found in brains may be less fundamental to the creation of AGI. Emotion, for example, is an integral part of the human thought process, but whilst it forms part of human decision making, it undoubtedly impedes truly rational decision making. I consider an AGI that's never been to my house before, but that's able to prepare a gourmet meal from scratch for my dinner party this evening, no less capable because it doesn't experience the fear, as I would, that the guests may not like its cooking. That said, understanding more about emotions may help to solve some of the problems of AI control, and ways of convincingly simulating those emotions would undoubtedly help with human adoption.

If Not Now, When?

So, for progress to be made on AGI, a better understanding is needed of many of the higher-level cognitive functions of biological brains and of how, in turn, they leverage the lower-level neuronal functions that we're getting better at emulating every day. This is the paradigm shift that's needed to broaden today's constrained horizons for AI technology.

These are indeed big hurdles, but we may be closer to clearing them than we first think.

When talking about AGI, I've noticed a tendency for people to somehow isolate the imagined intelligence as if it must be a discrete entity, like Arnie in The Terminator, or some God-like embodiment. They apply a kind of anthropomorphism. The term singularity, used to describe the point at which technological growth crosses into uncontrollable and irreversible expansion, itself conjures images of a solitary super-being.

But if the history of technological advancement has taught us anything, it is that an AGI is more likely to be something better connected. During the late 1990s we marvelled at technology's ability to cram information onto a single CD-ROM that a few years earlier would have spanned the volumes of an Encyclopaedia Britannica in the local library. Today we think nothing of the fact that a body of knowledge many orders of magnitude larger is accessible online at any time of the night or day, from home, on a train and countless places in between. Resources on this scale were unimaginable at the time of the CD-ROM or the first Terminator movies, but the combinatorial explosion brought by a quarter-century's compounded improvements to hardware, mobility and infrastructure makes them simply mundane (incredibly, more often than not they're also free!).

As Yuval Noah Harari points out in Homo Deus, we need only look at the cognitive revolution that took place in apes around 70,000 years ago. Then, just a few small changes to DNA and a bit of 'hardware' rewiring gave rise to the general intelligence we perceive in today's humans. Long before the collective resources of Google, Amazon and Facebook and the seemingly immutable growth of computing power had a hand in it, an evolutionary switch was flicked.

So, applying meaningful research and development effort towards solving some of the puzzles surrounding the mechanisms of higher cognitive function could well be enough to begin knitting together some of the Narrow AI capabilities already built with the vast expanse of knowledge on the Internet. Suddenly we may begin to recognise the result as something resembling an AGI. Those stuttering, constrained first steps may resemble the ones a toddler might take. But, as any parent will attest, those abilities will develop very quickly, leaving us struggling in vain to keep up.

From that point on, one can only imagine the possibilities. In Human Compatible, Stuart Russell's book on control in AI systems, he points to the possibility that all the knowledge components needed to cure cancer in all its forms might be available online, right now. It may be that we just need a little help from someone or something, our newly created AGI perhaps, to assemble them in the correct sequence, and so another great milestone bearing testament to human ingenuity is reached.

Granted, significant hurdles remain, meaning we're unlikely to be on the verge of any breakthroughs for at least the next few years. As Russell goes on to explain, the problems of applying common sense to language processing, of how cumulative knowledge can be attained by an AGI, and of how hierarchies of plans and sub-plans for delivering some outcome are conceived, must all be tackled. But our experience of exponential technological growth should teach us that the challenges that today appear insurmountable rapidly become vanishing dots in the rear-view mirror of our combined technological efforts.

So, as we reflect on remarkable progress in the AI field this past decade and wonder at what’s to come, any talk of winter seems misguided. For AI, spring has just sprung, and the year’s set to be a scorcher!
