Plotbot
  Science Fact-ion by Randall Hayes
May 2019

Conscious AI, part 2: Electric Boogaloo

A snowclone is a cliché and phrasal template that can be used and recognized in multiple variants. The term was coined as a neologism in 2004, derived from journalistic clichés that referred to the number of Eskimo words for snow.

--Wikipedia

Now, where were we? Oh, yeah--awareness . . . being . . . consciousness!

Don't worry, I won't go all the way to Z, like the MOs from Adventure Time with Finn & Jake, although it's probably possible. Humans have been obsessed with this concept for so long that we've invented lots of words to describe different aspects of it.

For most of the twentieth century, though, it was a taboo subject in the laboratory sciences, abandoned as impractical, left for philosophers to argue endlessly, like Vroomfondel and Majikthise in The Hitchhiker's Guide to the Galaxy. Only computer scientists like Marvin Minsky were ambitious enough to stake out a claim on "the conscious mind" by trying to build one, although they usually labeled it artificial intelligence. Progress has been rapid in one sense: we've learned a huge amount about what doesn't work. For instance, it's simply not possible to build a complete internal model of the world to guide a robot's actions in every expected circumstance, the kind of thing SF writers imagined in the 1940s and 1950s. The necessary database is just too large and too complex. Even electronic brains, much faster than biological neurons, can't compute fast enough to overcome the combinatorial explosion of possible situations.
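To put a rough number on that explosion, here is a back-of-the-envelope sketch in Python. The object count and the number of states per object are my own invented illustration, not figures from any particular robotics project:

# A toy illustration (hypothetical numbers, invented for this column) of the
# combinatorial explosion: even a modest world model has more distinct
# situations than any machine could enumerate.

objects = 40          # hypothetical: things a robot might need to track
states_each = 10      # hypothetical: positions/conditions per object

total_situations = states_each ** objects
print(f"{total_situations:.2e} possible situations")      # 1.00e+40

# Even checking a billion situations per second, listing them all would take
# absurdly long. There are roughly 3.15e7 seconds in a year:
seconds_per_year = 3.15e7
rate = 1e9            # situations checked per second
print(f"{total_situations / rate / seconds_per_year:.1e} years to enumerate")

That works out to on the order of 10^23 years, which is the sense in which raw speed can't save the complete-world-model approach.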

Since then, computer scientists have focused on helping their creations to learn about only those features of the environment that matter to them consistently. Giving the computers bodies (i.e., robots) sped this process up considerably by revealing many interesting problems that we ignorant humans hadn't thought much about, because we exist as (not in, but as, to recall the very first column) exquisitely evolved bodies that solve those problems for us automatically. Things like knowing where we are in relation to objects in our environment, and--more difficult and more important--knowing where our various body parts are in relation to one another (in both space and time). Oh, man, time! The constant updating of everything in relation to everything else is a nightmare. This is one very good reason that biological brains don't bother. They have special-purpose modules that concern themselves with limited domains. I don't mean only vision vs. hearing. I also mean sub-modules such as objects vs. motion within the visual system, and, even further, faces as particularly important objects, and changes in facial expressions as particularly important aspects of faces.

Computer vision people recognized this modular nature of the visual system decades ago, based on electrical recordings of individual neurons in different areas of the cerebral cortex, and have been very busy building special-purpose algorithms for text recognition, face recognition, and gait recognition, based on insights from neuroscience. That approach has spread to other domains, such as speech recognition. Those algorithms are kind of dumb, in that they are fragile and rigid, although under the right conditions of extremely limited information (like a text-based chat room), they're now good enough to fool humans in a Turing test--a trope in SF almost since Alan Turing first proposed the test as a research goal in 1950. Personally, I'm waiting for the first Nigerian scam-bot, one that can learn a victim's specific emotional response patterns and tailor its pitch more effectively. Now, that's progress.

But how do we make the jump from special-purpose computation to general-purpose intelligence, or even (dare I say it?) consciousness? Enter Jeff Hawkins, whose under-appreciated book I mentioned in last month's column. Hawkins reasoned that because all of those biological cortical circuits are built by a developmental process that consists of copying (in the form of neural stem cells dividing), the information-processing algorithms are likely mutated copies of one another--not unique, from-scratch solutions to specific problems. What kind of general-purpose algorithm could learn from any input pattern? Hawkins settled on something similar to the solutions studied by two guys at my alma mater in Rochester--Dana Ballard and Raj Rao. A Kalman filter is a standard engineering construct that essentially attempts to remember its previous inputs in order to predict its next input. This is basically what electrical engineers all over the world have been doing since before Norbert Wiener invented the word cybernetics: they process an incoming signal (S), compare it to a template (T, which might be as simple as a signal received some time in the past), and then adjust some aspect of the circuit in an attempt to zero out the error signal (S - T = 0). Error reduction is how your cerebellum keeps your finger moving on a smooth curve towards your nose during a sobriety test. Unless you've drunk enough to de-activate the cerebellum, in which case you're likely to poke yourself in the eye (S - T = doh!).
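To make that loop concrete, here is a minimal Python sketch of the error-correction idea described above. The fixed gain and the toy "finger" signal are my own simplifications, not code from Hawkins, Rao, or Ballard; a true Kalman filter computes its gain from estimated uncertainties rather than using a hand-picked constant:

import random

def predictive_filter(signal, gain=0.3):
    """Track a noisy signal by repeatedly correcting a running prediction.

    'gain' plays the role of the Kalman gain: how much of each new error
    gets folded back into the template. Here it is a fixed, hand-picked
    number; a real Kalman filter derives it from the estimated
    uncertainties of its model and its measurements.
    """
    template = signal[0]            # T: the current best guess
    estimates = []
    for s in signal:                # S: each new incoming sample
        error = s - template        # the error signal (S - T)
        template += gain * error    # adjust to push the error toward zero
        estimates.append(template)
    return estimates

# Toy usage: a finger moving smoothly toward a target, reported by noisy sensors.
true_path = [i * 0.1 for i in range(50)]
noisy = [x + random.gauss(0, 0.05) for x in true_path]
smoothed = predictive_filter(noisy)

The whole trick is in those two lines inside the loop: compute the error, then nudge the template so the next prediction is a little less wrong.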

OK . . . so what? How does that help? Well, it means that the same basic algorithm, applied to different datasets, could adjust its own processes to learn any small domain-specific problem, based on its database of "memories." This could be very useful. This is also why Hawkins is not worried about any one of his specialized problem-solvers waking up and destroying human civilization, whether in a fit of childish pique like AM, or in self-preservation like SkyNet.
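Continuing the sketch above (same caveats), the point is that the identical loop works on any stream of numbers; only the data changes:

# Reusing the hypothetical predictive_filter from the sketch above on a
# completely different domain: hourly temperatures instead of finger position.
temperatures = [20, 21, 23, 26, 28, 27, 24, 22, 21, 20]
smoothed_temps = predictive_filter(temperatures)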

Recall from my last column what Hofstadter said about consciousness, that it was an attempt to predict the actions of an entire, embodied person. For a set of Hawkins's problem-solvers to coordinate themselves into a sentient network, someone would have to set one very large module to the task of predicting the output of the entire network, while allowing each of the lower-level modules to continue their lower-level computations without interference. Animals had very good reasons to invest in that high-level prediction, because we have bodies that have to move as a unit. A bodiless, spatially distributed computer network has no reason to evolve that higher-level representation of itself, unless we are dumb enough to tell it to. Even then, as we saw last time, the higher-level model would not necessarily control the network, any more than our egos really control our own behavior. If we were talking about an embodied robot, then it might work at that level, which means that Avengers: Age of Ultron got it exactly backwards.

All this makes it seem less likely that AIs will intentionally murder us. This does not in any way mean that we're safe, however. As evolutionary biologist Suzanne Sadedin points out, the effects of AI on the ecology of computation, on our economy and our environment, are probably way more important than the AI's intention (or lack thereof). Accidents are potentially just as dangerous as villains.

Randall Hayes is neither a robot nor a Buddha, but a regular unenlightened human doing. He helps run Greensboro Science Café, and in between columns, he posts mostly relevant stuff to the PlotBot Facebook page. Thanks to Bryan Thompson of Blazegraph for reading an earlier version of this column.

References

https://en.wikipedia.org/wiki/Snowclone

https://en.wikipedia.org/wiki/Be_More_(Adventure_Time)

https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy

http://tvtropes.org/pmwiki/pmwiki.php/Main/ArtificialIntelligence

https://en.wikipedia.org/wiki/History_of_artificial_intelligence

http://sf-encyclopedia.com/entry/ai

http://www.technovelgy.com/ct/Science_List_Detail.asp?BT=Artificial%20Intelligence

http://tvtropes.org/pmwiki/pmwiki.php/Main/CombinatorialExplosion

https://en.wikipedia.org/wiki/Eugene_Goostman

Turing tests:

https://www.jstor.org/stable/25475177?seq=1#page_scan_tab_contents

https://tvtropes.org/pmwiki/pmwiki.php/Main/TuringTest

http://www.lightspeedmagazine.com/fiction/the-turing-test/

https://en.wikipedia.org/wiki/Kalman_filter

https://owlcation.com/stem/Norbert-Wiener-Father-of-Cybernetics

Wiener, mostly forgotten now, spent most of his career at MIT, making Claude Shannon look like a layabout.

https://www.nndb.com/people/229/000103917/

From the Greek kubernetike, "the art of the steersman"

http://tvtropes.org/pmwiki/pmwiki.php/Literature/IHaveNoMouthAndIMustScream

https://en.wikipedia.org/wiki/Skynet_(Terminator)

http://tvtropes.org/pmwiki/pmwiki.php/ComicBook/Ultron

https://xkcd.com/1046/

http://qz.com/580080/evolutionary-biologist-elon-musk-is-right-about-the-threat-of-ai-but-hes-wrong-about-why/

https://www.facebook.com/GreensboroScienceCafe/

https://www.facebook.com/PlotBot-Column-562920973855007/

https://www.blazegraph.com
