Cyber Defense Advisors

Personal AI Assistants and Privacy

MIT Technology Review: https://www.technologyreview.com/2021/08/25/1032111/conscious-ai-can-machines-think/

“Machines with minds are mainstays of science fiction—the idea of a robot that somehow replicates consciousness through its hardware or software has been around so long it feels familiar.

Such machines don’t exist, of course, and maybe never will. Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research.

It collides with questions about the nature of consciousness and self—things we still don’t entirely understand. Even imagining such a machine’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, their very possibility shapes how we think about the machines we are building today.

As Christof Koch, a neuroscientist studying consciousness, has put it: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

…what goes on inside other people’s heads is forever out of reach. No matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork.

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.”

…it is logically possible that a being can act intelligent when there is nothing going on “inside.”

But intelligence and consciousness are different things: intelligence is about doing, while consciousness is about being. The history of AI has focused on the former and ignored the latter. If a machine ever did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest mysteries about how our brains—and minds—work.

One of the problems with testing a machine’s apparent consciousness is that we don’t really have a good idea of what it means for anything to be conscious.

Emerging theories from neuroscience typically group things like attention, memory, and problem-solving as forms of “functional” consciousness: in other words, how our brains carry out the activities with which we fill our waking lives.

But there’s another side to consciousness that remains mysterious. First-person, subjective experience—the feeling of being in the world—is known as “phenomenal” consciousness. Here we can group everything from sensations like pleasure and pain, to emotions like fear and anger and joy, to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door.

Philosophers like Chalmers suggest that consciousness cannot be explained by today’s science. Understanding it may even require a new physics—perhaps one that includes a different type of stuff from which consciousness is made.

Information is one candidate. Chalmers has pointed out that explanations of the universe have a lot to say about the external properties of objects and how they interact, but very little about the internal properties of those objects.

A theory of consciousness might require cracking open a window into this hidden world.

Today’s AI is nowhere close to being intelligent, never mind conscious. Even the most impressive deep neural networks—such as DeepMind’s game-playing AlphaZero or large language models like OpenAI’s GPT-3—are totally mindless.

…as Turing predicted, people often refer to these AIs as intelligent machines, or talk about them as if they truly understood the world—simply because they can appear to do so.

There is a lot of hype about natural-language processing, says Bender. But that word “processing” hides a mechanistic truth.

For all their sophistication, today’s AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans—who have minds—choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse’s brain.

Another way of approaching the question is by considering cephalopods, especially octopuses. These animals are known to be smart and curious—it’s no coincidence Bender used them to make her point. But they have a very different kind of intelligence that evolved entirely separately from that of all other intelligent species. The last common ancestor that we share with an octopus was probably a tiny worm-like creature that lived 600 million years ago. Since then, the myriad forms of vertebrate life—fish, reptiles, birds, and mammals among them—have developed their own kinds of mind along one branch, while cephalopods developed another.

It’s no surprise, then, that the octopus brain is quite different from our own.

Instead of a single lump of neurons governing the animal like a central control unit, an octopus has multiple brain-like organs that seem to control each arm separately. For all practical purposes, these creatures are as close to an alien intelligence as anything we are likely to meet. And yet Peter Godfrey-Smith, a philosopher who studies the evolution of minds, says that when you come face to face with a curious cephalopod, there is no doubt there is a conscious being looking back.

In humans, a sense of self that persists over time forms the bedrock of our subjective experience. We are the same person we were this morning and last week and two years ago, back as far as we can remember. We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it.

Whether octopuses, much less other animals, think that way isn’t clear.

In a similar way, we cannot be sure if having a sense of self in relation to the world is a prerequisite for being a conscious machine.

Machines cooperating as a swarm may perform better by experiencing themselves as parts of a group than as individuals, for example. At any rate, if a potentially conscious machine were ever to exist, we’d run into the same problem assessing whether it really was conscious that we do when trying to determine intelligence: as Turing suggested, defining intelligence requires an intelligent observer. In other words, the intelligence we see in today’s machines is projected on them by us—in much the same way that we project meaning onto messages written by Bender’s octopus or GPT-3. The same will be true for consciousness: we may claim to see it, but only the machines will know for sure.

If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. We will have to consider whether their subjective experience includes the ability to suffer pain, boredom, depression, loneliness, or any other unpleasant sensation or emotion. We might decide a degree of suffering is acceptable, depending on whether we view these AIs more like livestock or humans.

Many researchers, including Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal.

Could an AI be expected to behave ethically itself, and would we punish it if it didn’t? These questions push into yet more thorny territory, raising problems about free will and the nature of choice.

Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious machines, we can expect entirely new boundaries to be drawn.

As Dennett has argued, we want our AIs to be tools, not colleagues. “You can turn them off, you can tear them apart, the same way you can with an automobile,” he says. “And that’s the way we should keep it.”