Why is it so difficult to define artificial intelligence?

Dr Martin Goodson
April 28, 2021

The European Union published the Artificial Intelligence Act last week: the most wide-ranging set of rules for AI yet created. All technologists should be aware of the Act's contents, which prohibit many AI use-cases.

But I will not be discussing any of that here.

I'm more interested in the Act's promise to provide 'a single future-proof definition of AI'. The legal world has been bewilderingly inept at defining artificial intelligence in the past. In this post I investigate whether the EU has delivered.

Let's begin with earlier attempts to set out the nature of AI. The original definition is found in a proposal written by Claude Shannon, Marvin Minsky, Nathaniel Rochester and John McCarthy in 1955:

For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.

It was a proposal for a summer project - they couldn't have known their definition would endure for decades. Yet here is Richard Bellman - who created the Bellman equation, one of the foundational concepts in reinforcement learning - defining AI in 1978: '[AI is the computerisation of] activities that we associate with human thinking.'

We get something similar from Ray Kurzweil [1], who defined AI in 1990 as '[the computerisation of] functions that require intelligence when performed by people.'

All three definitions are saying the same thing: AI is computerised human intelligence. This is unscientific. The scientific field of 'human intelligence' doesn't exist, because there is nothing uniquely human about intelligence. The field of cognitive neuroscience studies intelligence across the animal kingdom.

Elephants scream and weep when one dies. They cover the body in branches and spend days quietly standing over it. They visit the graves of their friends many years after they died. In short, they mourn. If I created a robot that did everything an elephant can do, I would be doing artificial intelligence research.

Let's accept that the early AI researchers were products of their times. Nobody would think like this now though, right?

Then why did the UK Parliament define AI like this in 2018? 'Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.'

The UK government provided an even more wrong-headed definition of AI in 2019, this time from the Office for AI: 'AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.'

What happens if people commonly think that playing checkers requires intelligence but then stop commonly thinking it? Would checkers-playing computers stop being AI and start being something else instead? The definitions of scientific concepts - atomic nuclei, say, or microscopes - don't normally depend on what people commonly think.

I now come to the pinnacle of AI definitions, a work of crystalline thought and limpid beauty. This is from the EU High-Level Expert Group on AI in 2019:

Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

I like this so much that I have started defining other things in the same style. Here is my new definition of an aeroplane:

Aeroplanes are physical objects made of metal, such as aluminium or zinc, or rarely made of paper, designed by humans, that move within the physical world in the dimension perpendicular to the surface of a planet by means of aerodynamic devices, using air and its atomic and molecular constituents along with principles of physics to convert fuel based on hydrocarbons or other kinds of materials, or other sources of energy, into upwards force.

This is a description, not a definition. A definition seeks to delimit a phenomenon and capture its essential character, not describe all its possible manifestations. (The OED defines an aeroplane as 'an aircraft that is heavier than air and has fixed wings'.)

So, how about the definition in the EU's new AI Act? Here it is, the world's newest definition of AI:

'Artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Annex I lists three categories of modern AI techniques: machine learning, symbolic approaches and statistics. The Act has already caused dismay amongst statisticians, who had no idea they were actually doing AI all along.
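
To see how easily a plain statistical method gets swept in, here is a minimal sketch - my own illustration, with made-up numbers, not an example from the Act - of ordinary least-squares regression, a technique that predates AI as a field by a century. It 'generates outputs such as predictions', so under Annex I's statistics category it would seem to qualify as an AI system:

```python
# Ordinary least-squares regression in pure Python: a textbook
# statistical method that Annex I would apparently classify as AI.
# Fits y = slope * x + intercept by minimising squared error.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative inputs (made up)
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # illustrative outputs (made up)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates for slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# The Act's definition turns on outputs: this is a 'prediction'.
print(f"y = {slope:.2f}x + {intercept:.2f}; at x=6, y = {slope * 6 + intercept:.2f}")
```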

Simple classification methods are covered by Annex I, but intelligence is not just about classifying stuff. What about other elements of intelligence, such as planning, episodic memory and intentional communication? Wouldn't algorithms for these things, if only they had been invented, also be AI?

Let's say we invent an AI psychologist which can conduct effective therapeutic conversations with patients. There is no guarantee that such a system would use supervised learning or any of the other techniques listed in Annex I, so we'd have to update the annex to include it. A definition that needs to be kept up to date as technology evolves is the opposite of future-proof.

I reject any definition which consists of a catalogue of technical methods, is dependent on public opinion or is inappropriately human-centric. I prefer one which focuses on the phenomenon which is fundamental to intelligence: the ability to learn.

Legal definitions of AI sometimes have the flavour of a committee trying to define a circus without ever having visited one. As a designer of AI products, I've 'been to the circus'. I propose a definition that gets to the heart of the matter:

“Artificial intelligence is the property of computational systems that can learn.”
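
To make 'can learn' concrete, here is a toy sketch - again my own illustration, not anything from the Act - of a perceptron that starts with no knowledge and acquires the logical OR function purely by correcting its own mistakes:

```python
# A minimal learner: a perceptron acquiring the OR function from examples.
# It starts knowing nothing (all weights zero) and improves only by
# updating itself when it is wrong - the ability to learn, in miniature.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron rule: nudge weights towards the correct answer.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([(x, predict(x)) for x, _ in examples])
# -> [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

Under my definition, this trivially simple program exhibits (a trivially small amount of) artificial intelligence, while a hand-coded lookup table producing identical outputs - one that never updates itself - does not.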

Notes

[1] Kurzweil is the writer who popularised the idea of the technological singularity. One version of this idea says that as soon as an AI learns to make an improved version of itself, an explosion of intelligence will immediately follow. A superintelligence will arise almost instantly, with calamitous results for humanity. Is it churlish to ask why, after animals began making more intelligent versions of themselves 580 million years ago, the explosion that produced human intelligence took about 579.8 million years to complete?
