What is – and isn’t – AI?

The following is an excerpt from the book I’ve been working on for the better part of a year.  I may or may not finish it (as is both my prerogative and my style), but recent discussions elsewhere have prompted me to post this here, for the time being.

What it is:

Artificial Intelligence is a complicated topic, and as such the pursuit of Artificial Intelligence is equally complicated.  A complicated pursuit of complexity, one could say.  So what then is Artificial Intelligence?

It does us little good to provide a single definition, as there are many.

  1. The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
  2. An area of computer science that deals with giving machines the ability to seem like they have human intelligence.
  3. The capability of a machine to imitate intelligent human behavior.

Or, from the Association for the Advancement of Artificial Intelligence: “the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”

The sad fact that few in the field of AI research can agree on a single definition of their work is a testament to just how complex the pursuit is in real terms.  It's no wonder then that even fewer people agree on what will constitute an artificial intelligence, should one eventually emerge.  This is a problem that plagues other branches of science too, and one in particular bears special relevance to this discussion.  Just as we're thus far unable to pin AI down under a single definition, biologists have the same problem with, well, us.  There is no universally accepted definition or description of biological life, which stands as an ironic mirror to the quest for artificial life.

However, you should notice something missing from those definitions.  There's an element of AI that's common in the public sphere of conversation but isn't represented in any of those statements, though you may not pick up on it right away.  You will not find a definition of Artificial Intelligence research (from the researchers) that mentions, implies, or otherwise hints at the creation of a race of thinking, self-aware robots bent on our destruction.  The study of AI is not a race toward the creation of a slave class of machines; it is, and always has been, the study of intelligence, period.  Now, that isn't to say that the men and women involved in this research aren't working toward such an ideal, but it's not something you'll find anyone in the field discussing with defined plans, with the pointed exception of those who'd criticise the effort.

Artificial Intelligence research is in fact a highly compartmentalised and divided field, perhaps owing to the sheer amount of information available and the progress that each sub-group is making.

Even in popular discussion few can agree on just what Artificial Intelligence is.  One person or group will claim that it's simply machines or computers that are able to appear as intelligent as humans, while others claim AI is the creation of purely mechanical beings with sentience, emotion, and personhood.  Still others will claim that it's actually the integration of man and machine, or a realisation of transhumanist goals.  All three of these loose definitions are correct, to a point, but it could be said that they each either fall short of the big picture or put the proverbial cart before the horse.

There are many sub-topics or sub-groups in the area of AI research, few of which are agreed upon by the masses.  The goals of AI research, which one would think are self-evident, are broken into at least eight different sub-groups (sometimes more, depending on who you ask), ranging from deduction, reasoning and problem-solving, to planning and learning, to perception, movement and social interaction.  Each is an end unto itself, though the topic of AI is inextricably tied to the notion that we are, collectively, trying to build robots that are sentient, or self-aware (or a host of other terms that essentially mean the same thing: robot-humans).  It would be foolish to pretend that our goal is anything but that, and in that vein, the entire enterprise can actually be viewed according to two main categories: a top-down approach, and a bottom-up approach.

Top-down and bottom-up are terms you've likely heard applied to many different ideas.  In logic, the corresponding concepts are known as deductive and inductive reasoning, the two basic ways of approaching research methodology.  Deductive reasoning builds observation out of theory, whereas inductive reasoning builds theory out of observation.  It's fairly easy to see that they are essentially polar opposite ways of viewing a complex system of any kind, but when applied to the pursuit of AI, both are based on a single assumption.  That assumption is simply that the product of a system is greater than the sum of that system's parts.  Emergence, in a word.

Emergent properties are qualities, such as patterns or regularities, that arise in complex systems through the interaction of smaller, individual components that do not themselves possess those qualities.  In many cases, the nature of those qualities cannot be predicted from observations about the nature of the constituent elements.  Here, emergence states simply that consciousness (or the simulation of consciousness) could arise, or emerge, as a property of a system of cognitive or pseudo-cognitive parts working in concert.
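For the programmers in the audience, emergence is easy to demonstrate in a few lines of code.  The sketch below – a minimal Python version of Conway's Game of Life, offered purely as illustration – gives each cell two trivial local rules; the gliders that result are properties of the system, not of any one cell.

```python
from collections import Counter

# Conway's Game of Life: each cell follows trivial local rules, yet patterns
# emerge that no individual cell "knows" about.

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live one.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose interaction produces a shape that crawls
# across the grid - a property none of the individual cells possesses.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(8):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted diagonally by (2, 2)
```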

In both the top-down and bottom-up approaches to engineering Artificial Intelligence (or what we'll call TB – Top-to-Bottom – and BT – Bottom-to-Top – respectively), the basic assumption is that we can develop the foundations of intelligence in discrete components, ultimately connecting them to create a network, and that eventually the network, as its complexity grows, will become an intelligent entity.  It is, as mentioned, a little more complicated than that, but this is what the basic assumption amounts to.  The difference lies in the way researchers look at the network.

In short, the TB approach focuses on simulating the behaviours of intelligence independently.  This is the traditional way of looking at AI research.  A TB approach says that we can build an intelligent psychology in discrete parts, such as software packages or applications that allow a computer to recognise, analyse, and take action based on sensory patterns or mathematical algorithms (the way a chess-playing computer might).  On its own, such software emulates the individual abilities of a thinking entity (a human) to creatively decide on a course of action based on available information.  And this presents the appearance of intelligence, though in reality it's nothing more than the product of a difference engine sorting through various pre-set parameters.  “Nothing more” may suggest to you that this is no special accomplishment, though please don't think of it that way.  It is indeed an astounding feat that a computer can use such software to navigate the landscape of our abstract reality, ultimately emulating human behaviour – and indeed this may be a more apt description of human intelligence than we are comfortable with – but this is where emergence…emerges.
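To make "pre-set parameters" concrete, here is a deliberately trivial sketch of that kind of component – a hypothetical move-chooser whose weights and candidate options were all decided in advance by a programmer:

```python
# A toy "TB-style" component: the appearance of decision-making produced by
# pre-set parameters alone. The weights and candidate moves are hypothetical.

WEIGHTS = {"material": 9.0, "centre_control": 0.5, "king_safety": 2.0}

candidate_moves = [
    {"name": "capture_pawn",   "material": 1, "centre_control": 0, "king_safety": 0},
    {"name": "develop_knight", "material": 0, "centre_control": 1, "king_safety": 0},
    {"name": "castle",         "material": 0, "centre_control": 0, "king_safety": 1},
]

def score(move):
    # The "decision" is a weighted sum over features a programmer anticipated.
    return sum(WEIGHTS[key] * move[key] for key in WEIGHTS)

best = max(candidate_moves, key=score)
print(best["name"])  # capture_pawn - the highest weighted score, not insight
```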

As mentioned, both approaches assume that an intelligent system is more than the sum of its parts.  Whereas the computer imagined above may perform in such a way as to emulate certain intelligent behaviours, it isn't, in reality, an intelligent computer by definition.  This provides our first glimpse of the difference between clever programming and Artificial Intelligence.  However, if one were to combine software such as a facial recognition application with other discrete applications designed to emulate other behaviours – reasoning, language, spatial dexterity, social relativity, etcetera – connecting each into a vast network, not only might intelligence ultimately emerge, but self-awareness, consciousness, and non-imposed agency could result.  In this approach we rely on a mostly unknowable element: at what point does a complex system of behaviour become an intelligent entity?  Where do the parts stop being discrete elements of a system and become integrated parts of the whole?

This notion is embodied in the ghost-in-the-machine concept.  When enormously complex networks interact, there is a point where they begin to resemble the most complex network in the universe: the human brain.  That resemblance is hardly superficial.  Emergence, in this manner, is precisely what gives rise to our conscious agency; it is what's responsible for endowing us with sentience, though you'd be hard-pressed to find two people who agree on just how that works.  Thus it makes sense that the same process could seed sentience, with the same effect, in an artificial brain.  This idea has been explored in literature and cinema to great effect: a machine simply doing what it's programmed to do suddenly undergoes a wondrous and little-understood transformation from automaton to thinking, feeling artificial life form.  Ultimately, in those stories, the newly realised life is usually denied its existence by fearful elements in human society, but for a short time true Artificial Intelligence is achieved.  This, of course, puts the achievement squarely on the shoulders of chance, however, and not on our ability to build and program.  Perhaps that's where it belongs.

“You realize that there is no free will in what we create with AI. Everything functions within rules and parameters”

― Clyde DeSouza, Maya

In contrast to the TB approach, the inductive or BT approach, while still relying on emergence, attempts a much more holistic method of coaxing agency out of the machine.

As mentioned, the TB approach, which has historically been the more popular or traditional way of viewing the pursuit of AI, entails the simulation of psychology.  The BT approach, however, does almost the exact opposite.  In this approach researchers are trying to simulate neurology, or in other words, to provide a physical framework in which intelligence might create itself…an actual robot brain.

In the TB approach, the reigning idea is that we can simply provide the components of a thinking network and somehow, once assembled, they will become more than their sum.  But in the BT approach to AI, the focus is on building brain-spaces, rather than teaching components to emulate behaviours and hoping they spark sentience in the whole.  Of course, this tends to be a touch more complicated than the TB approach.

In large part it relies on our rapidly advancing knowledge and ability in the area of brain mapping.  That is the process of scanning brains – of humans and other mammals, but also of less complex lifeforms, such as nematodes (more on this later) – and determining where each neuron and synapse is located in that brain with the highest accuracy possible.  Then with that information – that map – the brain can effectively be reproduced, neuron by neuron, inside a digital framework.  It’s important to understand what that means though, before we proceed.

When you look at the creation of software applications – let's say a clock app for your smartphone – there are two ways it can be programmed.  You can either provide the code to tell the device how to calculate the time and display it, with the smallest number of steps involved in the process, which is the way most such apps operate.  Or you can take the time to program a virtual watch, with all of its internal workings (gears, cogs, springs, etcetera), each represented by a different coded function within the whole.  If done properly, the clock would work as any old-fashioned mechanical watch does.  Its various discrete background functions would each work in concert to produce the desired effect (displaying the time).  Of course, to program a virtual watch that way would be exceedingly complicated, and in that context, a somewhat frivolous waste of time and effort.  Unless it's not a watch you're building.  If it's a brain you want to simulate, then the effort might be worth it.
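In code, the difference between the two approaches might look something like the following sketch (Python, standard library only; the "gear train" is deliberately toy-simple):

```python
import time

# Approach 1: the direct route - ask the system for the time and format it.
def clock_direct():
    return time.strftime("%H:%M:%S")

# Approach 2: simulate the mechanism - a toy gear train in which a seconds
# wheel drives a minutes wheel (60:1), which drives an hours wheel (60:1).
def clock_simulated(elapsed_seconds):
    seconds_wheel = elapsed_seconds % 60           # one tick per second
    minutes_wheel = (elapsed_seconds // 60) % 60   # advances once per full turn
    hours_wheel = (elapsed_seconds // 3600) % 24   # advances once per 60 turns
    return f"{hours_wheel:02d}:{minutes_wheel:02d}:{seconds_wheel:02d}"

# Both produce a time display; only the second models the inner workings.
local = time.localtime()
seconds_since_midnight = local.tm_hour * 3600 + local.tm_min * 60 + local.tm_sec
print(clock_direct())
print(clock_simulated(seconds_since_midnight))  # same reading, via the "gears"
```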

In that context, the creation of coded functions that represent not only the physical properties of neurons, but also emulate their position and individual relationship to hundreds, thousands, or even billions of other virtual neurons could – and should – ultimately duplicate the way in which biological brains function.  Which means, almost incidentally, that consciousness, sentience, or intelligence could also arise as an emergent feature of that virtual neural network, in the same way it has for biological life on Earth.  That's the theory, but of course, doing this in practice is no walk in the park.  The BT approach requires the creation of new programming languages, faster processors, more efficient computer memory, and the development of newer and better materials (e.g. superconductors), but we are already capable of doing exactly this with the smaller, simpler brains of worms and other basic life forms.
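For illustration only, here is what a "coded function that represents a neuron" can look like at its most naive – a toy threshold-and-fire model, with none of the real biophysics that projects in this space actually simulate:

```python
# A drastically simplified "virtual neuron": a coded function (here, a class)
# holding its own connections and charge. Real simulators model far more
# biophysics; this only shows the shape of the idea.

class Neuron:
    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.potential = 0.0
        self.synapses = []  # (target_neuron, weight) pairs

    def connect(self, target, weight):
        self.synapses.append((target, weight))

    def receive(self, charge):
        self.potential += charge

def network_step(neurons):
    """Synchronous update: decide who fires, then deliver charge downstream."""
    firing = [n for n in neurons if n.potential >= n.threshold]
    for n in neurons:
        n.potential = 0.0 if n in firing else n.potential * 0.9  # reset or leak
    for n in firing:
        for target, weight in n.synapses:
            target.receive(weight)
    return [n.name for n in firing]

# Wire a trivial three-neuron chain: sensor -> interneuron -> motor.
sensor, inter, motor = Neuron("sensor"), Neuron("inter"), Neuron("motor")
sensor.connect(inter, 1.2)
inter.connect(motor, 1.2)

sensor.receive(1.5)  # an external stimulus
for t in range(3):
    print(t, network_step([sensor, inter, motor]))
# 0 ['sensor'], 1 ['inter'], 2 ['motor'] - activity propagates along the wiring
```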

Again, the BT approach relies on the idea of emergence, in that neuron placement and connections, and the resulting synaptic function of those connections, are the fundamental mechanics of the brain.  The languages you speak, the memories you hold, the skills you possess – all are physically represented by groups of neurons and synapses in your brain.  It's the interaction of those various neurons and groups of neurons that ultimately emerges as intelligence.  The same is true of all brains; the human brain contains somewhere in the area of 100 billion neurons, supported by a network of hundreds of trillions of connections between them, but the brain of a nematode (in this case C. elegans) contains just 302 neurons, supported by a few thousand connections.  And while the relative intelligence of a worm is nothing to get excited about, the point of this approach is to understand how those neuromechanics relate to sentience, consciousness, or intelligence, so as to reproduce the effect inside a computer – and ultimately to understand how the same works in our brains.  Therefore, at this point in the game, simpler is better.

Both the deductive and inductive approaches to AI have borne fruit, though in each case, as with most such research, the fruit itself bore more questions than answers.  As different as these approaches are, even though they’re essentially working toward the same goal, the product of the research could potentially be quite different in terms of both the wholeness of the intelligence they might create, and their relative usefulness in the commercial market.  And each method offers its own pros and cons.

In reality the concept of Artificial Intelligence, independent of its more clinical definitions, is an idea with more than one face.  AI is all of the above; it's the creation of a technological being that is, at the least, self-aware, and at the most embodies and surpasses human intelligence.  It's also the development of less tangible – though no less profound – software packages and algorithms that govern networks like the World Wide Web, or the massive intranets that manage financial markets, corporate and government databases, and global communication systems.  It is speech and facial recognition, it is personality emulation, it is gaming, and it's virtual reality.  It's both the flashy cutting-edge science and the near-invisible background technology that truly makes the world turn.  The difficulty we have in identifying just what Artificial Intelligence is, it seems, is the same difficulty one would have in identifying what human intelligence is, prior to the first human ever being born.  So we should feel no shame in our failure to declare that AI is this, or AI is that, especially when you consider how far we've come in our quest to create it.

“Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.”

― Herbert A. Simon, The Sciences of the Artificial

 

What it isn’t:

Imagine you've been presented with two devices: one is an interactive child's toy doll that learns about the child the more it's played with, in order to create a stronger emotional bond; the other is a life-sized humanoid robot with a silicone outer skin, designed to look and feel as close to human as possible, and with the ability to tap dance, tell jokes and participate in rudimentary conversation.

Which of those two devices is an example of Artificial Intelligence?

At first glance it might seem an easy choice; after all, the language used to describe the second device practically leads you by the hand to that conclusion.  But if you really think about it, the answer isn't so clear.  In purely technical terms, both are examples of AI, but there is a clear difference that may warrant eliminating one, or both, from that category.

This, of course, brings us back to the ultimate question: what is intelligence?  We can go over the traditional definitions, though we already know they aren't terribly helpful.  We can dive down the philosophical rabbit hole of epistemology, drinking potions and having tea with hatters of questionable sanity, but perhaps we needn't go that far to answer these questions, at least in a casual sense.

What about the humanoid robot makes it seem like it has intelligence?  Well, first off, we can completely disregard its well-crafted imitation meatsuit, as that has nothing to do with the question.  Though it's interesting that the presence of that information in the initial question did have some sway on your decision-making process (even if you choose to deny it).  No, it's what the device does that's important, not what it looks like.  And in that case, its repertoire of intelligent actions is a little less impressive than we might have thought.  Those first two tasks, tap dancing and telling jokes, while something not all humans can manage with any grace, aren't exactly difficult to achieve in terms of robotics.

OK, maybe that's a little oversimplified.  Designing and building the robotic frame, the servo-mechanisms, the micro-hydraulic or pneumatic control systems, and everything else that goes along with that is anything but easy.  And programming the multitude of tasks for each action would be, and is, a monumental achievement, not to mention assembly and troubleshooting.  It's a very complex and impressive creation, but is it intelligent?  Is the programming that went into it – to control its movements, to allow it to talk in a discernible language, etcetera – complex enough to constitute intelligence?

Now let's look at the doll.  Again, we can disregard the fact that it's a doll, as this bears no relevance to the question at hand.  From the description, it seems like it might be a small self-contained computer with the ability to sense its surroundings, listen to the child's voice, recognise words and speech patterns, and respond appropriately.  The description also says that it can learn.  Oh how the tables have turned – but have they really?  In contrast to the light-on-its-toes robo-dancer, the doll's computer is relatively simple as hardware goes, and by virtue of it being a toy, it's likely a lot easier to build.  But is the programming comparable to the robot's?

As mentioned, the robot can maintain a rudimentary conversation, thus it must have some level of intelligence.  But the doll is able to not only hold conversation, but it can learn from those conversations and, in the future, anticipate the child’s responses.  Both scenarios – which are hypothetical here, but do actually exist in the real world – embody the same idea: imitating human behaviour equals artificial intelligence.

It’s my opinion that neither is artificial intelligence.

In the case of the robot, it’s relatively easy to flush out the reason; the robot has no autonomy, no agency.  Yes, it operates remotely, but remoteness isn’t a qualifier for AI.  Drones are remotely operated, children’s toy cars are remotely operated, hell, even your Smart TV is remotely operated.  And as for the behaviours themselves, the robot is simply executing preprogrammed tasks that result in the desired behaviour.  It’s impressive, but if anything this is only an extension of the builder’s intelligence.

The same is true of the toy doll, though that's a little harder to explain.  As mentioned, the doll is capable of learning; it's capable of anticipating the behaviour of the user and of altering its own responses accordingly.  That seems on the surface to be a clear quality of intelligence.  But I submit that it is not.  That toy doll learns in the same way that Google learns what you're looking for when you start typing in the search field.  It uses an algorithm to detect patterns within massive datasets (not just data related to your search history, but to everyone's) and cross-references them with other patterns or metrics (of which there are many, to say the least).  It then analyses the resulting matrix of options using mathematical formulas that allow it to decide on the most likely search phrase you intend to type.  In this way, it predicts what your behaviour is going to be and pre-empts you by making suggestions.
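Reduced to a toy, that mechanism can look as mundane as this sketch – a frequency-counting "autocomplete" over an invented search history (real systems, of course, cross-reference vastly richer signals):

```python
from collections import Counter

# A miniature, frequency-based "autocomplete": it detects patterns in past
# queries and predicts the most likely completion of a new prefix.
# (The history below is invented; real systems use far more signals.)

history = [
    "weather today", "weather tomorrow", "weather today",
    "world cup scores", "weather radar", "world news",
]

counts = Counter(history)

def suggest(prefix, k=3):
    """Return the k most frequent past queries starting with `prefix`."""
    matches = {q: n for q, n in counts.items() if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:k]

print(suggest("wea"))  # ['weather today', 'weather tomorrow', 'weather radar']
```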

In the most basic terms, computer programming is simply a human giving the computer instructions in a language it can understand, instructions that amount to “if this, then that”.  And in the case of the examples above, each time that decision is made, it triggers another decision, and so on.  But in addition, all of the previous decisions are remembered and become a part of the dataset used to detect and analyse those patterns.  This process has a cumulative effect – the more decisions the device makes, the better its predictive ability becomes – and this is why these devices are said to be learning.
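That cumulative loop fits in a few lines.  In this hypothetical sketch, every request is remembered, and the growing record is all the "learning" there is:

```python
# A hypothetical "learning" toy reduced to its essentials: every decision is
# remembered, and the growing dataset sharpens the next prediction.
memory = {}

def respond(request):
    memory[request] = memory.get(request, 0) + 1   # remember this interaction
    usual = max(memory, key=memory.get)            # predict the usual request
    if request == usual:                           # "if this, then that"
        return f"Anticipated '{request}' (seen {memory[request]}x)"
    return f"Noted '{request}'; expected '{usual}'"

for request in ["play song", "play song", "tell joke", "play song"]:
    print(respond(request))
# The more decisions it makes, the better its predictions become - "learning",
# without a single instruction it wasn't given in advance.
```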

The reason I deny these devices the status of artificially intelligent is quite simple: neither is doing anything it wasn't told to do by a human through preprogramming.  The fact that they can perform their tasks independent of human interaction or manipulation is no different than da Vinci's wind-up mechanical lion going through the motions once it was activated by the King of France.

I anticipate that the casual reader, the so-called layman, won’t object to my assertion here, though I suspect others might.  I admit that my opinion here is contingent on which of the many definitions of AI one chooses to adopt.  Clearly I’m partial to a definition that explicitly includes some form of independent agency on the part of the device, whatever that may be.  But I acknowledge that this is not the only valid position to take on the subject.

Realistically, there is no form of computer technology that cannot be construed as artificially intelligent, and this underscores the problem that lies at the heart of the issue: we don't know what intelligence is.  Sure, there are descriptions and definitions, all of which fall short of an adequate answer.  The standard definition – the ability to acquire and apply knowledge and skills – offers us no help at all in deciding what may or may not be intelligent, artificial or not.  If we take this definition as true, then we're already surrounded by innumerable examples of artificial intelligence, which is to say that the quest is over.  We're there, we've reached the finish line.  But there's something still lacking: autonomy.

All of the public misconceptions about AI, the ones that cause visions of Arnold Schwarzenegger as an indestructible android bent on death and destruction, are rooted in the idea that intelligence, on the human level, is something more than what’s described above.  And in turn, we assign that same ineffable requirement to Artificial Intelligence, because we’ve been told that AI is supposed to be emulating us.  But this isn’t necessarily true.

In 2014, scientists from the OpenWorm Project, a neuroscience think-tank of sorts, digitally recreated the mapped brain of a nematode, specifically C. elegans. (Geppetto Simulation Engine, 2013)  They then used that neural facsimile to control a small robot.  They provided no programming input beyond what was required to simulate the physical structure of the worm's brain, and to everyone's amazement, the robot began to move.  Not only did it move, but it responded to outside stimuli with appropriate action.
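The shape of that experiment can be sketched in miniature.  To be clear, the wiring below is invented for illustration – the real C. elegans connectome has 302 neurons and thousands of synapses – but the principle is the same: no behaviour is programmed, only the wiring.

```python
# A toy sketch of the OpenWorm-style experiment: behaviour driven purely by a
# wiring diagram. This tiny "connectome" is invented for illustration.

connectome = {  # connectome[source] = list of (target, weight) synapses
    "nose_touch": [("inter_A", 1.0)],
    "food_smell": [("inter_B", 1.0)],
    "inter_A":    [("motor_reverse", 1.5)],
    "inter_B":    [("motor_forward", 1.5)],
}

MOTORS = ("motor_forward", "motor_reverse")

def propagate(stimuli, steps=3, threshold=1.0):
    """Push activation through the wiring; whatever reaches a motor neuron
    accumulates as drive to the robot's wheels."""
    activation = dict(stimuli)
    motor_drive = dict.fromkeys(MOTORS, 0.0)
    for _ in range(steps):
        nxt = {}
        for source, level in activation.items():
            if level >= threshold:  # the neuron "fires"
                for target, weight in connectome.get(source, []):
                    nxt[target] = nxt.get(target, 0.0) + level * weight
        for motor in MOTORS:
            motor_drive[motor] += nxt.pop(motor, 0.0)
        activation = nxt
    return motor_drive

# Nothing below was "programmed" as behaviour; the wiring alone decides.
print(propagate({"nose_touch": 1.0}))  # an obstacle: reverse motor engages
print(propagate({"food_smell": 1.0}))  # food ahead: forward motor engages
```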

If there is one example of true artificial intelligence, that has to be it.  An artificial brain, albeit modelled on a real brain, embodied by a completely artificial entity, behaved in precisely the same way as its biological counterpart.  Intelligence and autonomy.  The fact that the brain involved was perhaps one of the lowest forms of life on the planet isn't terribly important, since the concept and overall method of replicating the brain would seem to be scalable without a ceiling.  As brain-mapping technology develops to support the advancement of this pursuit, so too shall the product of this science.

But that isn’t what you think of when you hear the term Artificial Intelligence, is it?

It's more accurate, and indeed more reasonable, to think of AI as a graduated scale of intelligence, with devices and technology occupying all points between basic computation and almost- or peri-sentience.  If we are ultimately trying to emulate human-level consciousness and even human psychology in an artificial entity, then we must first recognise that in nature – of which we are most definitely still a part – intelligence exists with as much variation as there are species in the zoological catalogue.  Therefore, to emulate such intelligence, we also have to admit that there are going to be mirror images of that variation in the canon of artificial entities along the path.  The different ways we might reach the ultimate goal of high-level artificial intelligence are largely irrelevant to the definition, interesting as they are, but they are still an integral part of the discussion.
