
Illustration by Yelena Bryksenkova
“You can’t miss me—I’ll be the 6-foot Korean dressed in black,” Minsoo Kang says when we arrange to meet at Starbucks.
He’s easy to spot, not to characterize. He grew up in South Korea, Austria, New Zealand, Iran, Germany, and Brunei. His father was a South Korean diplomat; his mother, a professor of French literature. At the University of California, Los Angeles, he studied history, philosophy, and religion. Now he’s an associate professor of European intellectual history at the University of Missouri–St. Louis—and he uses all of those warm, nuanced humanities to study Western attitudes toward cold, hard technology.
Kang’s most recent book, Sublime Dreams of Living Machines, traces our ambivalence toward everything from golems and mechanical ducks to robots and cyborgs. We recoil from lifelike machines, he says, because they defy our easy categories. We use pairs of opposites—male/female, night/day, animate/inanimate—to organize our perceptions. Granted, reality’s never that clean; we’ve learned to deal with twilight and tomboys and E.T. But when something’s both opposites at once—a woman with a penis, Alaska’s midnight sun, an android who looks and acts just like us—we pull back and stare. “They are disruptive and at the same time fascinating,” he says, “because deep inside, we know these are arbitrary categories, and part of us wants to be freed from them.”
The line does seem to be blurring: We engineer our traits into machines and treat them like they’re flesh and blood; we talk about our fragile, spongy gray brains as “hard-wired” and “programmed.” But that’s just convenient metaphor, Kang says. For one thing, our brains are inseparable from our bodies. “The entire human being is one organic whole, not a series of parts, and consciousness is the result of the encounter between what is within us and the outside world. Machines don’t have that kind of organic connection with the rest of the world.” He pauses. “That’s one theory. Of course, there are also people who suggest that the Internet will take the place of the nervous system, and when we develop computers that are complicated enough, emotion will be brought about.”
Maybe there’s also a Christian underlay to the West’s nervousness about lifelike technology, I suggest. Maybe we’re afraid that all of this is hubris, and if we rebelled against our creator and ate that damned apple, our creatures will surely rebel against us.
Kang shrugs. “There were eras where the fear was not so exaggerated, where people thought, ‘Maybe God wants us to ape Him.’” In the Jewish tradition of the golem, brilliant rabbis figured out exactly how God made Adam. They shaped special clay into a new being—and there they got stuck, because they couldn’t breathe the soul of life into it. But they realized that if they wrote the Hebrew word for truth, emet, on its forehead, it would come alive.
“They taught it to clean the house and do laundry,” says Kang. “The golem became, by all contemporary definitions, a robot servant.” You’ve probably already guessed the stories’ ending: The golem goes out of control, the rabbi realizes he’s violated the laws of God and nature, and he erases emet and lets the golem crumble back into dust. That’s the ending—if the story was written after the late 1800s.
“Every version of the myth that ends badly dates from the late 19th century,” says Kang. “In older stories, the idea is very clear that God wants us to do this, and the golem does not go out of control. The theory is that the myth was influenced by the general Western myth of technology, and possibly by the story of Frankenstein.”
Do other parts of the world have complicated attitudes toward robots, too? “In Korea, I couldn’t write a book about it,” Kang says. “Maybe a few essays.” Modern Japan, he says, is interesting: It’s way ahead in robotics research, possibly because it’s worried about its low birthrate and graying population. Japan has robot nurses, even a robot baby seal that calms people with Alzheimer’s disease. “It’s warm and soft and looks up at you with big eyes and purrs when you pet it,” Kang says. “But do we want to live in that kind of society, when we’re getting further and further from each other already?”
A few years ago, he wrote an essay titled “Building the Sex Machine: The Subversive Fantasy of the Female Robot.” He’d noticed a particular fantasy recurring in literature: “A man for one reason or another finds women unsatisfactory. Let’s say they are too independent, not subservient enough, too conniving. So the solution is to create an artificial woman built to the specifications of his ideal woman.” Feminist scholars regularly denounce this plotline, he says, “but what they don’t deal with is the fact that in every single story, something always goes wrong at the end.” Sometimes the robots become independent and jilt the guy. Sometimes they murder him. And sometimes the guy just gets really, really bored. “There is a recognition,” Kang says, “even by the male writers, that this is an impossible fantasy.”
There are three basic theories about our future relationship with robots. One predicts inevitable conflict: “Sooner or later, machines are going to gain sentience and rebel, and if we are lucky, we will win.” The second says yes, machines will probably become self-aware, but it’s possible that we could peacefully coexist. That prospect intrigues Kang mightily: “Maybe it’ll turn out that this whole thing about the desire for power and mastery and beating your opponent is ingrained in us, and they’ll constantly be telling us, ‘You don’t have to go to war. Calm down!’”
The third theory says the line will blur so completely, as we freely exchange biological and mechanical parts, that we’ll stop even trying to distinguish between what’s artificial and what’s natural. “We will use computer chips in our brains,” says Kang. “And meanwhile, our machines are going to become increasingly organic, with genetically engineered material, so we will have androids that are not necessarily made of gears, but of artificially grown parts.”
Cool? Terrifying? Kang once made a list of every Hollywood movie with a robot in its cast, from Blade Runner and The Terminator right up to WALL-E and Hugo. From the early to mid-1980s, there was a surge of technophobic movies, like The Terminator. “Even in Alien, there’s an evil android,” he notes. “That was the beginning of the personal-computer revolution, and it was an era of great anxiety—‘Can I learn this?’—and this fear of machines taking over.
“In the late ’80s, a strange thing occurred. I found this weird proliferation of technophilic movies. In the second Terminator movie, the Terminator turns into a good robot. The robot in the second Alien is a good android. I think people realized that computers are not only really easy to learn, but they make your life easier. And this coincided with that golden moment when the Cold War ended, apartheid was abolished, the Warsaw Pact dissolved...”
Between ’95 and ’97, technophobia came back. “By then, people were used to the benefits of computers,” he says. “Now, they felt chained to the computer. They were getting carpal tunnel, and pedophiles were targeting children online, and there were viruses and computer porn and identity theft—Sandra Bullock in The Net—and people weren’t finding computers fun anymore. Indispensable, but not fun. And the optimistic period that followed the ending of the Cold War all went to crap. Technology didn’t save us after all.
“It all really culminated in 9/11. Now we’re in a much more dangerous time—yet I’ve been seeing a lot of technophilic movies again.” His best explanation? The cause of our biggest problems—climate change, recession, terrorism—might be technology, “but their solution will also be technology. And therefore, you get WALL-E, where it won’t have to end with man versus machine, and Hugo.”
Robots are our friends again.
For a time.