Letter From The Editor - Issue 69 - June 2019



  Science Fact-ion by Randall Hayes
March 2019

Conscious AI, part 1

Theodore Twombly: What does a baby computer call its father?

Samantha: I don't know. What?

Theodore Twombly: Data.

- Spike Jonze, Her

There's a long tradition in science fiction of self-aware robots like C-3PO and Commander Data (long a) who have personalities in addition to information-processing capabilities, who can carry on deep philosophical conversations (or bitch & moan, an equally important marker of humanity). There is an almost equally long tradition of Pinocchio computers that were not intended to be self-aware, but which transcend their programming and "wake up" to sentience, or consciousness, or whatever we're calling it this year. Recently several very smart people have tried to raise an alarm about artificial intelligences, echoing things Alan Turing said during the 1940s, when computers were still mostly theoretical (like AI is now). Others, including Palm Pilot inventor Jeff Hawkins, have tried to downplay the risks of AI. What's going on?

What we have here is a failure to communicate. Several scientific disciplines are converging on some very old questions of philosophy, and beginning to answer them in productive ways. However, the people involved are not necessarily talking to one another. This is where generalists like SF writers can be especially valuable, in knocking ideas together to see what sparks.

We'll start with neuroscience, because that's what I know best. There are now many converging lines of evidence that say consciousness is not a single all-or-nothing switch, like a phase change from physics. Rather, the word describes a complex mixture of brain-states that varies over time. Think more in terms of rendering a hearty crockpot stew than boiling a pan of pure water. Even the awake / asleep / dreaming triangle is much too simple. As a f'rinstance, it's unusual to be aware during a dream, but it's certainly possible in a state called lucid dreaming. There are many of these odd in-between states, grouped under the medical term parasomnias (sleepwalking and its opposite, being temporarily paralyzed on waking, are especially scary). It is equally possible to lose all sense of self while awake, during various forms of meditation. Things can get even more exotic with drugs or sleep deprivation. Jeff Warren's excellent book The Head Trip is a good place to start exploring this Wheel of Consciousness. Just to throw it out there, I can't think of any stories about computers that display any of these states, beyond Do Androids Dream of Electric Sheep? Well, maybe one, about a prosthetic that caused its owner to dream he was a road . . . in any case, lots of possibilities to explore.

Let's add a spatial dimension. Tononi's lab group has magnetically "pinged" the brains of patients under anesthesia and in various states of coma after massive brain damage. They have then recorded how far their ping spreads through the patient's ongoing brain activity, as a measure of how integrated the different circuits are. Generally, the more integrated the individual neural networks are with one another, the more awake and conscious the person is. However, brain damage can also be spatially localized to particular networks and body parts, which can cause symptoms of neglect, removing them from consciousness--as when a parietal lobe lesion leads one patient to forget to shave one half of his face, or, more rarely, another to believe that one of her legs no longer belongs to her and to want it removed! Future technologies that can achieve the same kind of spatially specific effects, but in a controlled way, offer all sorts of narrative possibilities.

Finally, we can consider what all this complication is for. After all, bacteria don't need a nervous system. Fungi and plants can control their growth and development just fine without a single neuron. And yet they can do many of the things required of a basic self. They recognize and defend their own boundaries, for instance, whether those are a single cell membrane or a multicellular body. They recognize and react to others, as being separate from themselves. This fact has led a growing number of scientists and philosophers in a sub-field called embodied cognition to recognize that not only is consciousness graded, but it may be almost infinitely graded, that anything alive may have some small degree of consciousness. In I Am a Strange Loop, Douglas Hofstadter proposed the Huneker as a unit of consciousness. SF authors love that kind of shorthand, and I hope to see it popping up here and there in stories, like mushrooms after a rain.

Hofstadter also proposed my favorite model of our own highly developed sense of ourselves, what Freud called the ego. Most of us believe that we are in charge, that our ego is the driver of our actions. Buddhists have spent a couple of millennia now trying to tell us that this is not true, that our actions bubble up from below, and that our conscious minds are really just storytellers, commenting on the action as it happens, and generally trying to take credit for the good stuff. In their philosophy, and some modern cognitive neuroscience, the ego is more like a politician mugging for the cameras than a general efficiently and effectively giving orders to her soldiers.

Which doesn't seem particularly useful, does it? Just wait. This gets interesting.

Hofstadter proposed that the ego is predictive, an outgrowth of our primate social circuitry. You're familiar with what happens during a conversation, when people who know one another well can finish one another's sentences. Similarly, according to Hofstadter, the story I am constantly telling myself is mostly an attempt by my cerebral cortex to predict what I will do next. Since I don't have direct access to those deep, old, evolutionarily conserved brain circuits under the cortex that actually drive my behavior, I have to guess about my own actions and motivations. The fact that I'm usually right about what I will do next says more about the consistent patterns of my behavior than it does about my ego's ability to control that behavior.
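Hofstadter's point can be made concrete with a toy model (my illustration, not his): a "narrator" that simply predicts the most frequent past action will look impressively accurate for any creature of habit, while exerting zero control over what actually happens.

```python
# Toy model of the predictive ego (my illustration, an assumption,
# not Hofstadter's actual proposal): the narrator only predicts the
# most common past action. High accuracy here reflects consistent
# behavior, not control over that behavior.
from collections import Counter

def narrator_predict(history):
    """Predict the next action as the most frequent past action."""
    return Counter(history).most_common(1)[0][0] if history else None

actions = ["coffee"] * 9 + ["tea"]  # a creature of habit

hits = sum(
    narrator_predict(actions[:i]) == actions[i]
    for i in range(1, len(actions))
)
print(hits, "correct out of", len(actions) - 1)  # 8 correct out of 9
```

The narrator is right almost every time, and wrong exactly when the habit breaks--which is the essay's point: being usually right about yourself is cheap if you are predictable.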

A particularly striking example happened in Roger Sperry's experiments with split-brain patients during the 1960s. If the right half of the brain decided to do something, like pushing a button based on a visual stimulus that the left hemisphere couldn't see, the left simply made up a totally fictional story about how (and why) the button got pushed. Neurologists call this behavior of compulsive and unconscious storytelling confabulation, and it seems bizarre at first, until you realize that we're all doing it, all the time, to justify our actions to ourselves and to other people.

So, if consciousness is something of an illusion (although a useful one under many circumstances), where does that leave AI? Oh, so sorry, we are out of space for this month. Next time I'll swing around and come back for another bombing run at the problem, dropping names in robotics and computer science. If you can't wait, check out Hawkins's book On Intelligence for a preview.

Randall Hayes, Ph.D. is a neuroscientist, currently running an education company, Agnosia Media, LLC. He helps organize the Greensboro Science Cafe, and in between columns posts ephemera to the PlotBot page on Facebook.

Footnotes:
This book is something of an exception.

Tononi says that artificial systems can be either conscious or unconscious, depending on their architecture--feed-forward systems are necessarily unconscious.
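The architectural distinction in that note can be sketched crudely. This is not Tononi's phi calculation (an assumption on my part, and a deliberately simplistic proxy): it just checks whether a network's wiring diagram contains any feedback loops, since a strictly feed-forward graph is the kind of system his theory rules out as unconscious.

```python
# Crude structural proxy (NOT integrated-information/phi): a strictly
# feed-forward network has no cycles in its connection graph, while a
# recurrent one does. Depth-first search with node coloring finds any
# back edge, i.e., a feedback loop.

def has_feedback(graph):
    """Return True if the directed graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color[nxt] == GRAY:          # back edge: feedback loop
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# A layered, feed-forward net: input -> hidden -> output
feedforward = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": []}

# The same net with one feedback connection added
recurrent = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": ["h1"]}

print(has_feedback(feedforward))  # False
print(has_feedback(recurrent))    # True
```

One added feedback wire is all that separates the two graphs, which is part of why the feed-forward/recurrent line feels so strange as a criterion for consciousness.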

Lots of smart people admit to being bewildered by Tononi's theory.

"Talk about yer boundary issues," he said, waggling a cigar.


Copyright © 2023 Hatrack River Enterprises