Loveless Sociopath—Or Bust

Do androids dream of falling in love? Can they? Koli Mitra finds out.

Humans have a penchant for anthropomorphising things. We encounter something that resembles one basic feature of humanness—the vague outline of a figure, a pair of eyes, a voice—and we are ready to endow the thing with a whole range of human psychological behaviour. And that’s just based on the physical resemblance. We are even more tempted to identify with stuff that seems to have some kind of “mind”. We pride ourselves on our powers of reasoning. We think it’s our essence. We have named ourselves Homo sapiens.

Interestingly, though, as soon as we identify with something, we ascribe emotions to it. This is probably because, while we like to think of ourselves primarily as “thinkers”, we know instinctively that we are “feelers”. As the neuroscientist Antonio Damasio famously said in his book, Descartes’ Error, “We are not thinking machines. We are feeling machines that think.”

Classic science fiction is replete with artificial intelligences that spontaneously emerge from sophisticated computer systems (“mind-like” in their ability to process information) and almost immediately develop complex emotions—anger, envy, loyalty, friendship, heroic self-sacrifice, love—apparently without any of the biological/evolutionary processes out of which those emotions emerged in humans. Sometimes, much is made of the supposed inability of the artificial being to experience emotions.

Yet their behaviour, including the strong desire to experience emotion, very much suggests that they do have some emotional existence. We assume they can’t love and yet we take it for granted that they would long for it. They didn’t evolve through any biological drive, yet they have sexual urges and romantic attachments. Whether it is the rage of the monster in Mary Shelley’s Frankenstein, or the deep friendship between a child and a robot in Isaac Asimov’s Robbie, or the power-craving of HAL 9000, the seemingly emotionless but viciously manipulative spaceship computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, these entities simply seem to have emotional capacity even though their creators never designed them to have it. The other characters are surprised to discover emotions in the machine and try to figure out how to deal with them, but they don’t really question how they came to be. In many cases, including the works of Asimov, a sense of universalism about emotions and morals—often found in postcolonial stories that challenge ethnic and racial prejudice—is imported wholesale into the context of these artificial “people”.

 

Fiction dealing with artificial “life forms” has tended to work best as a parable for human morals and empathy rather than as actual scientific speculation. Issues of identity and personhood for sentient artificial beings and, particularly, the ethical questions faced by the humans who interact with these life forms, were explored interestingly in Asimov’s I, Robot stories, in Ridley Scott’s 1982 neo-noir sci-fi film Blade Runner (based largely on Philip K Dick’s 1968 novel Do Androids Dream of Electric Sheep?), and in the Gene Roddenberry-created television series Star Trek: The Next Generation.

But this is all about the emotions we feel about robots. What emotions would the robots feel? What are “emotions”? There is a certain class of emotions that seem intuitively to be subconscious, or physically based, like fear or even joy. But since we are talking about “love”, let’s think about what something like that requires.

Ironically, it takes a certain measure of biological selfishness for “love” to operate. I can care about you only because I can empathise with your feelings. I know how you feel when you’re cold, hungry, achy, or lonely. I know, because I feel those things too. I have the capacity to feel those things because I evolved in response to biological survival needs. That is, I developed those traits to take care of me. Learning to worry about you is just kind of a by-product.

Of course, the real evolutionary mechanism is more nuanced than this. Organisms develop traits as a result of genetic survival, not the organism’s own survival per se. Biologists recognise many ways in which kin altruism and group cooperation lead to gene survival—and therefore those propensities have survived. But survival at the individual level has the highest impact on gene survival, and so the drive for it has been selected most strongly. Yet the bottom line is, whatever the evolutionary sequence of events that led us here, it is a fact that we are capable of “love” as a conscious, thoughtful, and semi-deliberate experience. That experience is empathic, and empathy clearly has elements of both subjectivity and objectivity. It is subjective because I have to know how emotions feel. It is objective because I have to be able to project outside of myself and think about whether you have emotions.

Is a process like this possible to devise artificially? Is artificial consciousness possible? Will artificial beings—like robots, computers, androids, or whatever—ever become self-aware? If they are, will they simply be self-aware thinking machines or will they also feel emotions? And will those emotions resemble human emotions like “love” in any way? Current developments in artificial intelligence (AI) as well as genetics and bioengineering have moved such speculations out of the realm of fiction into the realm of possibility. Now, there is actual, real-life, scientific research underway that is asking these questions.

 

Before we start digging into this vast topic, let’s define some limits. First, this discussion will not consider any artificial or semi-artificial beings that are biological in part or in origin, such as genetically manipulated but still biological humans, or biological humans who have extended their senses or intellectual abilities by replacing some of their brain cells or other tissue with a series of technological prosthetics. While such entities might raise many ethical issues, it seems to me less challenging to imagine consciousness arising in an organic body with the same biochemistry as ours (and not at all challenging to imagine that a pre-existing, conscious human remains conscious even after adding some prosthetic brain tissue).

Instead, we will think only about emotion existing in a purely technological context, with no biochemical input (except as a model). This means we are talking about androids, robots, simulated brains, intelligent supercomputers, and technological products with human minds “uploaded” into them. To use sci-fi references again, we are not talking about the bio-robotic androids of Blade Runner or bio-cybernetic hybrids like the Borg collective of Star Trek. We are talking about Samantha the sentient operating system from Her, the androids Ava and Kyoko from Ex Machina, Lt Commander Data and the Holographic Doctor from Star Trek, and HAL from 2001.

Also, related to this topic, there are many profound—and worrying—issues having to do with trans-humanism or the predicted technological singularity (when artificial intelligence surpasses human intelligence and takes away from us the role of directing history’s course). I will only touch on these issues briefly where appropriate, as it is not possible to examine them sufficiently as part of this discussion.

So, what is “emotion” then? What is its relationship to consciousness or sentience? Would an intelligent, self-aware machine automatically acquire emotions? To begin with, what exactly are the various elements at play in constructing the human “consciousness”? One element seems to be the ability to deliberately direct its own cognitive activities (something that AI designers are trying to teach computers to do). Another is the ability to reflect objectively on itself.

But wait, that’s a loaded word—“self”! In order to deliberately and objectively reflect on oneself, one has to form a subjective sense of self. This rudimentary sense of self is usually called sentience. Whole libraries of rich, dense philosophical argument have been written to describe what sentience and self are. But we don’t really know, in a scientific sense, what any of that actually entails. We know that it has a component of physical/sensory perception, but even that is more complicated than we might assume. A thermometer can “perceive” temperature in the sense of “measuring” it, but it doesn’t “feel” the temperature. A camera lens can “perceive” light in the sense of capturing it, but it doesn’t “experience” that visual field in any way.

Even these basic physical senses have a special meaning in animals because they don’t just measure something mechanically; they transmit the results to a brain, which has somehow produced a “mind”, which has a subjective perception of “self”. Again, the exact nature of sentience and subjectivity is debated, but one thing I would note is that subjectivity implicitly contains an awareness of itself as both separate from and in relation to an external reality. (Of course, this requires an awareness of that external reality as well.)

 

Douglas Hofstadter, renowned cognitive scientist and physicist, pioneer in the field of AI, and Pulitzer-winning author of Gödel, Escher, Bach (as well as of I Am a Strange Loop and Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought), thinks that selfhood is really a specific type of elaborate feedback loop. This “loop” is characterised by the ability to perceive, retain, and process inputs, the ability to construct larger patterns out of these inputs, the ability to then perceive or “recognise” these patterns outside of itself, the ability to comprehend “incompleteness”, the ability to analogise from one phenomenon to another, and the ability to think “self-referentially” (to handle the paradoxical nature of self-reference). Hofstadter believes all of this is possible to create in an artificial system, but he thinks we have a long way to go.

One way to think of it is that physical sensation, mental consciousness, and “emotion” are all various manifestations of an awareness of “self”. The reason the self is described as a “loop” is that it is a self-feeding system. An awareness or experience of self allows you to perceive physical sensations, feel emotions, and have thoughts. In turn, having these experiences strengthens and reaffirms your sense of self. But, even if a Hofstadteresque “loopy” sentience were to arise in machines, would it necessarily imply “emotions”? That is, just because one of these manifestations of self might loop back to reaffirm the “self”, is there any reason to assume that one manifestation should give rise to another? Isn’t it possible that a sophisticated algorithm can give rise to sentience that leads to “thought” or “understanding” but not emotion?

According to the geneticist Maciamo Hay, “emotions do not arise for no reason. They are either a reaction to an external stimulus, or a spontaneous expression of an internal thought process.” He believes artificial structures can acquire some classes of emotions but not others. “Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.”

Not having this physiology means computers would not have the emotions associated with it. Likewise, not needing to eat or reproduce sexually, a computer would not have many of the emotions that humans developed in order to bond with mates and offspring, or to deal with competitors and cooperators in acquiring resources.

 

However, Hay thinks an intelligent machine would be able to experience something like intellectual satisfaction, after solving a mathematical problem, for example. Although Hay notes the role of a physical sensation in the neocortex in conferring the sense of “satisfaction” in a human, he doesn’t explain why the lack of such a biological tissue-structure in the computer does not pose an obstacle to the computer’s sensation of “satisfaction”.

Similarly, he posits that an intelligent machine which is placed in a mobile robotic body would be able to feel fear if faced with a threat to its physical integrity. He seems to presume that the cognitive ability to recognise a threat would spontaneously give rise to the human-like (or animal-like) emotions associated with it.

I am not sure this presumption is warranted. By Hay’s own account, animal biochemistry and other physical factors (like heart rates) are responsible for the generation of emotions like fear—and indeed, almost every emotion—and they are spurred on by the evolutionary impetus for survival. But Hay is implying that a self-preservation instinct would spontaneously emerge whenever a self-awareness emerges. This is an assertion that needs to be supported by argument and not simply taken as self-evident.

Surprisingly, Hay’s position is not unusual among today’s AI-oriented thinkers, especially the enthusiasts. Many of them seem to take for granted—sometimes with very little coherent theoretical explanation—that machine intelligence can and will be a conscious, self-aware intelligence and that it will give rise to emotions, even though neither consciousness nor emotion is yet very well understood.

One area where this is happening is affective computing, which teaches machines to study human emotional behaviour and provide appropriate responses with simulated expressive behaviour. Even though the responses are drawn from extensive pre-programmed databases and selected through a logically structured process of elimination starting with a vast number of randomly generated responses—a process that is the basis of much of today’s “machine learning”—many people, like the cognitive scientist Dylan Evans, insist that the machines’ ability to select appropriate emotional responses makes the response “genuine” in some relevant way. This is an extraordinary claim, one that altogether discounts the subjective elements of selfhood and emotion.
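
To make the mechanics concrete, here is a deliberately crude sketch (in Python, and not modelled on any particular affective-computing system) of how a machine can pick a contextually “appropriate” emotional response simply by scoring canned replies against detected cues; every cue word and reply in it is invented for illustration.

```python
# A toy "affective" responder: it classifies the user's emotional cue by keyword
# matching and then returns the matching canned reply. Nothing here feels
# anything; it only maps detected cues to pre-written outputs.
# (All cue words and replies are invented for illustration.)

CUES = {
    "sad": {"sad", "lonely", "miserable", "crying"},
    "angry": {"angry", "furious", "unfair", "hate"},
    "happy": {"happy", "delighted", "thrilled", "wonderful"},
}

CANNED_REPLIES = {
    "sad": "I'm sorry you're feeling low. Do you want to talk about it?",
    "angry": "That sounds really frustrating. What happened?",
    "happy": "That's wonderful! Tell me more.",
    "neutral": "I see. Please go on.",
}

def detect_cue(text: str) -> str:
    words = set(text.lower().split())
    # Score each emotion by how many of its cue words appear in the input.
    scores = {label: len(words & cue_words) for label, cue_words in CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(text: str) -> str:
    return CANNED_REPLIES[detect_cue(text)]

if __name__ == "__main__":
    print(respond("I feel so lonely and sad tonight"))    # sympathetic reply
    print(respond("the weather held up for the picnic"))  # neutral reply
```

However fluent such output becomes, the program’s “empathy” is exhausted by the lookup.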

Evans suggests that the reason we doubt a machine’s emotional authenticity is that it lacks sufficient sophistication of expression and predicts that as we improve robotic expressiveness through affective computing (i.e., as robots exhibit more human-like behaviour), people’s doubts will disappear. If he is correct, all it means is that people will be more easily manipulated into thinking machines have genuine emotion, but it doesn’t mean machines will actually have those emotions.

Evans attempts to illustrate his point with a historical example. He points out that Descartes observed animals in pain and theorised that their responses were rote “instincts”, but it turns out he was wrong. We know now that the biochemical roots of animal sensation are very similar to ours. I rather think this example supports the opposite of Evans’s conclusion. The authenticity of animals’ pain-perception is established by seeing the process by which it is generated, not by the mere existence of superficial similarities in expressive behaviour.

 

There is another reason that animal behaviour can be analogised to ours, even if we didn’t know that they had undergone a biological evolution similar to ours. Unlike affective-computing-based robots, animals were not specifically designed to mimic our behaviour. It is precisely because animals exhibit behavioural similarities to us despite having evolved independently of us that we can reasonably extrapolate their behaviour to be causally similar to ours as well. The same cannot be claimed for affective robots, whose behaviour is a deliberate artifice originating with our designs.

Note that I’m not saying machines can’t have emotions. I’m saying that machines that were designed to act as if they have emotions can’t be assumed to have emotions based simply on the “evidence” that they do, in fact, act like they have emotions, no matter how subtle, complex, convincing or evocative that act might be. In The Lord of the Rings movies, Peter Jackson’s design team created breathtakingly realistic images of fragments of columns, arches, statuary and giant stone carvings on the sides of mountains. But this is not “evidence” that ancient Númenórean ruins actually exist somewhere in a place called Middle-earth. The fact that these looked a lot more real than the physical world depicted in the painted backdrops of movies from the 1920s is utterly irrelevant to the equation.

Here’s what we do know about emotions: we can feel them without expressing them. We can express them without feeling them. We acquired them through a very long, complex and gradual process of biological evolution. And emotional intelligence is distinct from rational intelligence (which we also happen to have acquired through such evolutionary processes).

Suppose you have a friend that you love very much. You call her on her birthday. You provide a sympathetic ear, a cup of tea and some perspective when she quarrels with her husband. You reminisce with her about your college days over photo albums and a glass of wine. You do these things because you love her. But your love does not consist of these outward behaviours. These are manifestations of that love, not the content of it. The love retains an existence in your inner life, even when you are unable to do these things.

There is intent behind these behaviours. It’s not simply an elaborate set of outputs that a computer program was designed to elicit from you. What level of sophistication that output is able to reach is not at issue. A real person may be socially inept and not insightful enough to figure out what to do to show his friend that he loves her. But the genuineness of his intent to give her joy—and, equally, of his own sensation of joy and pain with regard to her wellbeing and with regard to her reciprocal attention to his wellbeing—still makes his love real in a way that the most perfectly simulated artificial response system cannot, unless it also has subjective, internal sensations, intentions, and desires.

Evans also points to affective robots’ ability to arouse sympathy in humans as evidence of emotional authenticity. But this is indicative of the emotional capacity of humans, not of robots. It is well established that, given the most minimally emotive cues, humans tend to feel genuine empathy for robots, computer simulations, and human-like communicative interfaces.

In 1966, MIT professor Joseph Weizenbaum created a computer program, a “chatterbot” named Eliza, which simulated realistic conversation with humans. Eliza sounded a lot like a psychotherapist or someone trained in “active listening”, as it responded to people with “empathic” statements and follow-up questions, like “How did that make you feel?” or “I am sorry you are unwell.” Even though the interaction was text-based and Eliza’s responses were drawn from a database of pre-prepared outputs and not from any “intelligent” or adaptive programming, Weizenbaum observed that people behaved as though Eliza were a real person.
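
Weizenbaum’s program worked from pattern-matching scripts. What follows is only a loose, minimal sketch in that spirit (not Eliza’s actual DOCTOR script): a handful of invented rules that echo the user’s own words back as “empathic” follow-up questions.

```python
import random
import re

# An Eliza-flavoured sketch (not Weizenbaum's actual script): each rule pairs a
# regular expression with reply templates that reflect the user's words back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["What makes you say you are {0}?", "I am sorry you are {0}."]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]
DEFAULT = ["Please go on.", "How did that make you feel?"]

def reply(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(reply("I feel unwell today"))         # e.g. "How long have you felt unwell today?"
    print(reply("I had a fight with my boss"))  # e.g. "Tell me more about your boss."
```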

Last year, a joint study by researchers at Toyohashi University of Technology and Kyoto University found that when people watched robots in situations that would induce pain in humans—like having a limb cut off—they exhibited the same kind of empathy (as measured by brain activity) as they do when watching people in similar situations. The intensity of the empathy varied with the degree of the robots’ resemblance to humans and was greatest when watching actual people, but even the crudest robots elicited a genuine and measurable level of empathy.

These results are not too surprising, since humans are notoriously prone to anthropomorphising non-human entities. Even a large-eyed stuffed toy or cartoon is frequently enough to provoke an emotional response. But it is interesting to note that Eliza’s abstract, linguistic expression of “empathy” and the visceral, physical “pain” of the robots both elicited emotional responses in humans. This makes sense to me, intuitively, because human empathy seems to comprehend both the visceral, physical reality of other beings and their internal mental life (including the other being’s own capacity for empathy). A very Hofstadteresque “loop” indeed!

 

The robotics design community is interested in better understanding these human responses to AI, in order to make more “emotionally intelligent” robots that can decode a wide range of highly complex human emotions, use that information to provide customised services, and be sold as consumer products. They envision a future where robots will be the primary providers of care for children, elders, and the sick. Robots will be our pets, servants, tutors, playmates and confidants.

Vincent Clerc of the French company Aldebaran explained in a June 2015 interview with Chris Edwards in Engineering and Technology Magazine that the “vision is to develop interactive robots for the wellbeing of humanity. We want the robot to be a real companion in everyday life, something that is usable by anyone.” He added that “we want people to be emotionally connected and involved with robots. [To achieve that,] we have to detect intentions…It’s all natural. If you move your head in a certain way it means something. If you are tired, you look tired. And you want a robot that recognises when you look tired.”

I find this a rather chilling prospect. It seems to me that “human companionship” is an essential element of a meaningful life, not some menial chore to relegate to machinery. But most people in the AI research community are disturbingly silent about these concerns. David Levy, an expert in AI and author of the book Love and Sex with Robots, said in an interview with Newsweek that sex—and even something like sexual “relationships”—with robots will become commonplace and socially acceptable. There has been significant development in “internet-linked devices that transmit real physical contact”, he reports. If you combine this technology with affective-computing-based “emotional” robotics, you have the recipe for something like an artificial courtesan (like the “basic pleasure model” of replicant in Blade Runner) who can pretend convincingly to be emotionally involved with you.

In some ways, this is perhaps a good development. To the extent that there is always going to be a market for self-centred and/or exploitative sexual encounters, it might as well be with robots rather than people. On the other hand, the people who are enthusiastically championing this development are also the ones claiming that these robots will be sentient. If they are right, we are just creating a whole class of sentient beings to exploit—this has serious moral implications.

If they are wrong, and these robots do not have genuine sentience, then this false belief in what constitutes “genuine” emotions raises a different set of concerns. If we live in a world where so much emotionally compliant companionship is available and considered by humans to be “authentic”, what happens to human-to-human relationships? Do we avoid the conflict and complication of those relationships and essentially immerse ourselves in elaborate fantasies and pretend relationships? Do we become increasingly narcissistic as a result of having our emotional preferences continually accommodated and validated?

 

This human propensity to project our own emotionality onto the objects of our emotions can have dangerous consequences if sentient artificial intelligence does come into existence. Consider this basic fact: all of computing—anything we imagine as potentially “AI”—is based on the (simulated) power of reasoning. More precisely, it is based on mathematical reasoning.

We might imagine this as the essence of the human mind, but we would be wrong. As the cognitive psychologist and Nobel-winning economist Daniel Kahneman has shown through years of experimentation (and discussed in his book Thinking, Fast and Slow), economic decision-making—the most conscious and arguably most “rational” aspect of human behaviour—is most often based on unconscious, instinctive, emotional processes. Remember Damasio’s description: we are not thinking machines; we are feeling machines that think. To the extent we have anything in common with machines, it is that small, incidental part of ourselves—the thinking part—and a very stripped-down version of the thinking part at that.

Whatever artificial “sentience” might be, any emotions that might arise in its context are unlikely to bear much resemblance to human psychology. The nature of our existence is biological. Even if a technological entity becomes “sentient”, its world and its experience will probably be completely alien to us. If it has emotion, it will be driven by that experience, and not by our biological evolutionary history. To assume we can anticipate what those would be is utterly delusional.

Imagine beings who are self-aware and, therefore, perhaps capable of having “interests” or preferences, and intelligent enough to direct events towards their preferences. Then imagine that they know how to manipulate our emotions (through affective-computing training) but, lacking our biological/evolutionary history, do not feel any of those emotions themselves. Isn’t this the definition of a sociopath? In fact, isn’t it even worse? At least sociopaths are biological themselves, so we have some clue about them. We know they get hungry and feel physical pain, for example. But we would know nothing at all about what motivates AI machines.

Physicist Stephen Hawking and computer scientist Bill Joy (co-founder of Sun Microsystems) have made urgent calls to pay attention to the potential dangers of setting in motion the technological developments that could lead to sentient machines. But despite their prominent status in the world of science and technology, their calls have not been taken seriously. In fact, they have been met with considerable backlash, even derision, and the two have been accused of being Luddites. I fear the industries that stand to gain commercially from AI development are under the same kind of addictive, wilful blindness about its dangers as many other industries have been about the earth’s environmental crisis.

 

Perhaps the most committed—and most prominent—of the enthusiasts is Ray Kurzweil, a director of engineering at Google, author of The Singularity Is Near, and the most ardent advocate for aggressive AI research, trans-humanism and the quest for immortality. He is so certain of the ability of artificial structures to house a conscious mind, and of the nature of the human mind as something very much like software, that he plans someday to shed his natural body and upload his consciousness into an advanced AI-enhanced robotic body (which he is working on designing). He believes machines with genuine—if low-level—intelligence are already among us. He says computers like IBM’s Deep Blue (which bested world champion Garry Kasparov at chess) and IBM’s Watson (which defeated the top champions of the quiz show Jeopardy!) actually understand what they are doing.

His view is rejected by Hofstadter, who thinks that neither Deep Blue, nor Watson, nor Apple’s Siri, nor any other current example of dazzling achievement through machine learning is authentic AI. He thinks that these are examples of extremely fast and powerful—but nonetheless rote and uncomprehending—search-and-retrieve functions. Hofstadter believes true machine intelligence is possible and desirable, but he does not believe that current machine-learning methods will achieve it. He believes that to design artificial intelligence, people first have to develop a full understanding of natural intelligence as produced by the human brain.

Melanie Mitchell, an expert in artificial intelligence and genetic algorithms who co-authored (with Hofstadter, her then PhD advisor) Copycat, an early computer model of fluid analogy-making, views the goal of AI research differently from many of her colleagues. In a 2012 Singularity Summit talk called “AI and the Barrier of Meaning”, Mitchell said, “the singularity’s been called ‘The appearance of smarter-than-human intelligence’, but my version’s a little different…what I’m looking for is the appearance of a machine that crosses the barrier of meaning, that uses concepts as fluidly as humans do, in a broad range of domains.”

Mitchell disagrees with Kurzweil’s prediction about the singularity’s nearness. Citing AI pioneer Marvin Minsky’s observation that “easy things are hard”, she adds, “Easy things for humans like fluid concepts are the hardest things for computers.” Mitchell is working on making programs that can grasp fluid concepts, but does not believe that will happen very soon.

However, there are people who believe that machines, despite creating the illusion of intelligence through ever-increasing computational logic, will never become “intelligent” in the sense of having real understanding, which requires sentience.

According to the bioethicist Nicholas Agar, “there is an unbridgeable gap between the genuine thinking done by humans and the simulated thinking performed by computers. Saying that computers can actually think [is like] saying that a computer programmed to simulate events inside of a volcano may actually erupt. Technological progress will lead to better and better computer models of thought. But it will never lead to a thinking computer.” Agar says that everything a computer accomplishes is “by means of entirely noncognitive and nonconscious algorithms”, a process he calls “weak AI” (whereas the idea that machines might achieve conscious thought is called “strong AI”).

Agar cites the famous “Chinese Room” thought experiment devised by philosopher John Searle.

Searle imagines that he is locked in a room. A piece of paper with some ‘squiggles’ drawn on it is passed into him. Searle has no idea what the squiggles might mean, or indeed if they mean anything. But he has a rule book, conveniently written in English, which tells him that certain combinations of squiggles should prompt him to write down specific different squiggles, which are then presented to the people on the outside. This is a very big book indeed—it describes appropriate responses to an extremely wide range of combinations of squiggles that might be passed into the room. Entirely unbeknownst to Searle, the squiggles are Chinese characters and he is providing intelligent answers to questions in Chinese. In fact, the room’s pattern of responses is indistinguishable from that of a native speaker of the language. A Chinese person who knew nothing about the inner workings of the room would unhesitatingly credit it with an understanding of her language. But, says Searle, it is clear that neither he nor the room understands any Chinese. All that is happening is the manipulation of symbols that, from his perspective, are entirely without meaning.

Agar argues that this is what is happening with computers, as well.

Computers, like the room, manipulate symbols that for them are entirely meaningless. These manipulations are directed, not by rule books written in English, but instead by programs. We shouldn’t be fooled by the computer’s programming into thinking it has genuine understanding—the computer carries out its tasks without ever having to understand anything, without ever entertaining a single thought. Searle’s conclusions apply with equal force to early twenty-first-century laptop computers and to the purportedly super-intelligent computers of the future. Neither is capable of thought.
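
The point is easy to render in code. In the hypothetical sketch below, a lookup table stands in for Searle’s rule book: the program returns fluent-seeming Chinese replies without attaching any meaning to the symbols it shuffles, and the handful of entries are invented purely for illustration.

```python
# A hypothetical "Chinese Room" as code: the room's entire behaviour is one
# lookup table. The program produces fluent-looking answers without attaching
# any meaning to the symbols it manipulates. (The entries are invented
# stand-ins for the rule book's squiggle-to-squiggle rules.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather today?" -> "It's lovely today."
    "你懂中文吗？": "当然，我懂中文。",     # "Do you understand Chinese?" -> "Of course I do."
}

def chinese_room(squiggles: str) -> str:
    # The "person in the room" neither knows nor needs to know what any of
    # these strings mean; matching shapes is the whole job.
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你懂中文吗？"))  # prints a confident "yes" it cannot mean
```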

 

But what if AI could be achieved, not in this algorithmic/programmatic fashion, but by building processes similar to how the brain works? People are studying the holistic, analogue nature of human cognition. Hofstadter is working on this. So is Mitchell. Others are modelling the physical structure of the brain as a whole. One way they are doing this is through “neuromorphic engineering”, which uses very-large-scale integration (VLSI) systems to emulate neuro-biological architectures. Researchers at Stanford, Georgia Tech, and MIT are working on projects based on neuromorphic engineering. Kurzweil is leading a team at Google that is trying to figure out how to simulate the whole brain.
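
To give a flavour of what “emulating neuro-biological architectures” means at the smallest scale, here is a minimal software sketch of a leaky integrate-and-fire neuron, the kind of simplified unit that neuromorphic chips implement directly in circuitry; the parameter values are arbitrary illustrations, not those of any named project.

```python
# A minimal leaky integrate-and-fire neuron, simulated in software. Neuromorphic
# hardware implements dynamics like these directly in circuitry; the constants
# below are arbitrary, chosen only so the example fires.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:      # threshold crossed: emit a spike and reset
            spikes.append(step)
            v = v_reset
    return spikes

if __name__ == "__main__":
    constant_drive = [1.5] * 100          # steady input for 100 time steps
    print(simulate_lif(constant_drive))   # spike times under constant drive
```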

Yet, perhaps nature will have the last laugh. Biologist Dennis Bray thinks that intelligence can really only emerge in biological substrates. In a talk titled “What Cells Can Do That Robots Can’t” at the 2012 Singularity Summit, Bray explained that

[the] neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. Although biological components act in ways that are comparable to those in electronic circuits, they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.

If I had to guess, it would be this: even if consciousness does emerge inside machines, the likelihood of their being similar to us in very many ways is practically zero. We are what we are as a product of evolution, which is a very long game of chance, involving unimaginable stretches of time and countless combinations of accidental mutations, each arising in particular circumstances by pure chance.

Evolution is a one-way street. It’s a very narrow trial-and-error process with no opportunity for backtracking. There is no going “back to the drawing board” to retrieve a past genetic combination that “would have worked better” in a newly changed circumstance. For example, if your genetic line lost the ability to withstand cold weather after the last Ice Age ended, you don’t get to retrieve that gene if a new Ice Age comes around. Unless you still have a dormant gene for it, or it arises again, accidentally, you’re out of luck. This is an essential feature of the process by which we came to be the way we are.

Might the emergence of human consciousness be a gestalt product of this kind of process and everything that has happened to us along the way? There is no way to replicate that process in finite time. We might be able to replicate certain aspects of the end result of that process, like our exact physical brain or the entire human body (though not anywhere in the foreseeable future). Even if we do that, though, we are just replicating biology, not building technology. That would be an impressive achievement, but not the same as “creating intelligence” technologically.

In contrast to our open-ended evolution, machines develop deterministically, through the will of their designers. Even if machines become fully capable of making design decisions independently of human designers, they would still have goals to achieve and tasks to complete, and their process would still be planned and deliberate. They would be able to go back and reset to the “last acceptable configuration” or “undo” mistakes, instead of having to “live with the consequences” in a process we call “maturing”.

Individual human maturation, like biological evolution, is a one-way process. We could take away a machine’s ability to correct mistakes, but then it might make mistakes that were fatal. Keep in mind, we have had to risk extinction throughout our evolutionary history. In fact, many of our kin have gone extinct!

Another thing about evolution as well as maturation—it’s a slow process. A line of genes rides biological matter through the world, slowly, having unexpected encounters, gathering moss, so to speak. Machine processing happens at the speed of light. Even if we design some randomness into it—like some machine learning algorithms try to do—it’s still fast and it doesn’t “encounter” anything not deliberately fed into it. Maybe the sheer slowness of animal evolution and its setting in the context of the physical world and universe has an essential part to play in how conscious selves and emotional capacities emerge.

If that’s the case, there will probably never be artificial intelligence. And there will certainly never be artificial intelligence that is endowed with human-like capacity for empathy. I believe there are only two possibilities with AI: indifferent, loveless sociopaths, or nothing at all.

Passionate rationalist. Bleeding-heart moderate. Geek. Afflicted with a "language fetish". Koli practiced law on Wall Street until her lifelong love affair with writing demanded its rightful place as her primary occupation.
