2001 Is Near, but ‘Hal’ Is Not: Artificial intelligence has not fulfilled the awesome--and scary--promise of the sci-fi classic. But researchers are still eagerly pursuing the goal of machines that can think.

TIMES STAFF WRITER

I am a HAL 9000 computer, Production No. 3. I became operational at the HAL Plant in Urbana, Ill., on Jan. 12, 1997. My first instructor was Dr. Chandra. He taught me to sing a song. It goes like this: ‘Daisy, Daisy, give me your answer do. I’m half crazy all for the love of you.’

Dave: Open the pod bay doors, Hal.

Hal: I’m sorry, Dave, I’m afraid I can’t do that.

--”2001: A Space Odyssey”

*

For nearly three decades, science fiction’s most famous thinking machine has seduced us with the double-edged promise of silicon sentience.

Intelligent enough to run a spaceship and converse on any topic, Hal was the perfect companion, “foolproof and incapable of error.” Then he murdered four of his crew mates.

Arthur C. Clarke’s novel, which served as the blueprint for Stanley Kubrick’s epic 1968 film, heightened anxieties about technology run amok. Still, Hal’s psychotic reaction to his programming, which required him to lie to the crew about the true purpose of their mission, made him in some way more human than the wooden astronauts manning Discovery.

Hal was both scary and endearing. And he was the future, in Cinerama splendor, preparing us for what awaited. With its meticulous attention to scientific detail, “2001” suggested that such a cybernetic companion might join us in the lonely state of the self-aware by, say, this Sunday.

Infused with Space Age optimism, artificial intelligence researchers at the time enthusiastically agreed.

“In from three to eight years we will have a machine with the general intelligence of an average human being,” Marvin Minsky, one of the field’s early pioneers, told Life magazine in 1970. “I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

But with Hal’s birthday coming this month, space scientists are about as close to discovering alien intelligence as computer scientists are to creating anything that resembles our own.

The smartest computers to date can’t tell the difference between a cat and a dog, much less carry on a conversation. AI companies, buoyed by cash injections from Wall Street in the early 1980s, quickly crashed and burned when their technology proved too “brittle” to function in the real world.

And the one scientist still trying to develop a Hal-like entity is considered an eccentric by many of his colleagues--even as they privately cheer him on.

Even AI’s greatest triumph of recent years, last year’s victory by IBM’s “Deep Blue” computer in a game against world chess champion Garry Kasparov, was the result of “brute force” number-crunching, its creators admit, not independent thought.

“The big dream of creating a Hal failed,” says Rodney Brooks of MIT’s Artificial Intelligence Lab. “People backed off. You didn’t say things in polite society anymore about building robots with human-level intelligence.”

Still, the quest to sire synthetic consciousness has by no means been abandoned. As the Champaign-Urbana stamp club prepares to honor Hal’s birth with a commemorative envelope and celebrations of the thinking computer are planned at the University of Illinois and elsewhere, AI visions, shaped by past mistakes and future technologies, are taking on new forms.

Brooks, a leader in one strand of the new AI, believes the effort to build computer brains with no bodies was the fatal error. He is constructing humanoid robots that can bump into walls and learn from their environment--and predicts a real-life version of Star Trek’s android Cmdr. Data in 50 years.

Others say metal men are passé, and favor creating “intelligent” software that adapts and evolves in the nether world of cyberspace. A breakaway branch of AI called artificial life even posits that the Internet can serve as a breeding ground for intelligence that will arise via Darwinian evolution.

And some still believe “GOFAI” (good old-fashioned AI), with its focus on logic and representation, just needed time to develop and is on the cusp of important breakthroughs.

“I would say there’s been disillusionment overall,” says David Stork, editor of “Hal’s Legacy,” an anthology of AI articles published last month. “But there’s renewed excitement. We know it’s going to come.”

As always, AI believers can point to near-miracles of contemporary computer technology to support their optimism. “Massively parallel” computers that use thousands of powerful microprocessors can now closely simulate the architecture of the human brain, for example.

Longtime AI researcher Raymond Kurzweil, a pioneer in getting computers to recognize human speech, notes that if microprocessor speed continues to improve at its current rate, by 2030 a computer will be able to process data at quadrillions of operations per second, or “petaflops”--a rough estimate of the speed of the human brain.
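The arithmetic behind that projection is simple compounding. A back-of-the-envelope sketch, with the 1997 starting speed and the 18-month doubling time chosen here as illustrative assumptions rather than figures from Kurzweil, looks like this:

```python
# Back-of-the-envelope extrapolation of processor speed. The starting point
# (about 1e8 operations per second in 1997) and the 18-month doubling time
# are illustrative assumptions, not figures quoted in the article.

START_YEAR = 1997
START_OPS_PER_SEC = 1e8       # assumed: roughly a late-1990s desktop processor
DOUBLING_TIME_YEARS = 1.5     # assumed: Moore's-law-style doubling every 18 months

def projected_speed(year):
    """Operations per second if speed keeps doubling on schedule."""
    doublings = (year - START_YEAR) / DOUBLING_TIME_YEARS
    return START_OPS_PER_SEC * 2 ** doublings

for year in (1997, 2010, 2020, 2030):
    print(f"{year}: {projected_speed(year):.1e} ops/sec")

# By 2030 the curve lands in the neighborhood of 10**14 to 10**15 operations
# per second -- the "petaflop" range cited as a rough estimate of the brain.
```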

A Gap Between Data Processing, Intelligence

Such measures, of course, beg the question in many ways. It is precisely because we know very little about how the brain bridges the gap between data processing and intelligence that it has proved so difficult to teach computers to think, or infuse them with the ephemerality of consciousness.

Still, more calculating power should make it easier to teach computers to learn, and in particular to parse language.

By some lights, that would be enough. The most widely accepted definition of AI is still the one proposed by British mathematician Alan Turing in 1950: if a human interrogator, conversing by teletype, cannot tell whether the unseen party at the other end is a person or a machine, he said, the machine can be credited with intelligence.
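The procedure is easy to state, even if no program has mastered it. A minimal sketch of the setup, with trivial stand-ins where a real contestant and a real human partner would go, might look like this:

```python
# A minimal sketch of Turing's imitation game: an interrogator exchanges
# typed messages with an unseen partner and must decide whether the replies
# come from a person or a program. The stand-ins below are placeholders,
# not real contest entrants.

import random

def ask_human(question):
    return input(f"(human partner, please reply) {question}\n> ")

def ask_program(question):
    # A real entrant's conversation program would go here; this stub deflects.
    return f"That is an interesting question. Why do you ask about '{question}'?"

def run_session(questions):
    """Route questions to a hidden partner, then check the interrogator's guess."""
    partner_is_machine = random.choice([True, False])
    respond = ask_program if partner_is_machine else ask_human
    for q in questions:
        print("Partner:", respond(q))
    guess = input("Is your partner a machine? (y/n) ").lower().startswith("y")
    return guess == partner_is_machine   # True if the interrogator got it right

if __name__ == "__main__":
    right = run_session(["What is your favorite poem?", "What did you eat today?"])
    print("Correct identification:", right)
```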

But Turing’s test, a Holy Grail of AI, has so far proved impossible to pass. An annual contest for authors of programs that can fool human judges at least some of the time was suspended last year. Robert Epstein, a scientist at UC San Diego who administered the contest, says that is largely because “there are simply no breakthroughs.”

Still, he takes heart in Turing’s prediction that by 2000, after five minutes of questioning, the average interrogator would have no better than a 70% chance of telling program from person. That’s about where we are now, he says.

“We’re moving exactly on course,” Epstein says, noting that chip maker Intel Corp. passed another major computer speed barrier last month. “I’m quite certain there will eventually be two intelligent species on the planet.”

The Turing Test has been criticized for encouraging the appearance of thought and not the real thing. But its persistent allure--the contest, when it existed, drew media attention well beyond its newsworthiness--speaks to a fundamental human fascination with AI.

ELIZA, a program that simulates a psychotherapist by turning a patient’s statements into questions of “her” own, was written as a joke in 1966 by MIT computer scientist Joseph Weizenbaum. But “she” still attracts hundreds of Internet surfers eager for badly simulated conversation.
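The trick behind ELIZA is modest: spot a keyword, swap the pronouns and hand the user’s own words back as a question. A stripped-down illustration of the technique, a toy rather than Weizenbaum’s original script, looks like this:

```python
# A toy ELIZA-style exchange: spot a keyword, reflect the user's own words
# back as a question. This is a stripped-down illustration of the technique,
# not Weizenbaum's original 1966 program.

import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel lonely talking to my computer"))
# -> Why do you feel lonely talking to your computer?
```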

So do descendants such as Newt, Nurleen and Julia, software programs known as “chatterbots” that live on the Internet.

“Maybe it has to do with this innate loneliness we have,” says Kurzweil. “We’re always seeking others to make contact with.”

A Quest That Raises Existential Questions

Indeed, it is perhaps no accident that Hal was invoked in a story about mankind’s search for its own origins, a secret mission to pursue the first evidence of intelligent extraterrestrials.

Those driven by the urge to breathe life into machines--whose mixed progeny include the likes of Pinocchio and Frankenstein--say it is rooted at once in a yen for companionship and a desire to better understand ourselves. A dose of the will to power may also have something to do with it.

“I’m interested in how minds work,” Minsky says. “The trouble is that minds are so complicated . . . the only really effective method is to build machines that embody your theory and see how lifelike it behaves.”

If mankind can create machine intelligence, that raises existential questions about the unique nature of our own. Some AI proponents maintain we are simply carbon-based machines. But philosophers such as John Searle and Hubert Dreyfus have argued that machines can at best mimic human thought, never be truly “intelligent” themselves.

It is a staple of AI lore that then-IBM Chairman Thomas Watson Jr. fired Nathaniel Rochester, one of the field’s early champions and the designer of IBM’s first successful commercial computer, for referring to his machine as “smart.” Some say Big Blue didn’t want customers worrying that its products would do anything other than what they were programmed for. Others say management considered it a religious offense. (Rochester was later hired back.)

Daniel Dennett, a philosopher who has written extensively about the implications of computer technology, says the real issue is not whether machines will take over and enslave us, since the day that might be possible remains exceedingly remote. Rather, the ethical questions today are more subtle.

“As machines prove their superiority to human beings in dealing with problems that human beings have always taken pride in dealing with, will we feel a moral obligation to rely on them?” asks Dennett.

Science fiction has been preoccupied with such matters for years.

Isaac Asimov’s famous robots had to obey his Three Laws, which required that they allow no harm to come to humans. And the disturbing power of “2001” had much to do with the unanswered question of what happened to Hal.

In a telephone interview from his home in Sri Lanka, Clarke, author of more than 70 books and, at 79, one of science fiction’s most revered figures, declined to provide a definitive answer:

“All of civilization has been man meddling with nature. And all tools are double-edged. The question is, would Hal be a tool or an independent entity, and I’m afraid that’s something we’ll just have to let the future decide.”

The dawn of computers seemed to bring us closer than ever to playing God. But one of the problems, experts say, was trying to create AI in our own image. The course of AI development, it turned out, ran opposite to that of human intelligence.

Building computers that could solve difficult math problems and play a mean game of chess was relatively easy. Software that can make sophisticated decisions is embedded in the new Mars probe, used to handle financial transactions on Wall Street, woven into video games--and is often given the moniker of AI.

Yet imbuing a computer with the kind of “world knowledge” that children pick up through osmosis in their first years has so far proven impossible.

Doug Lenat, one of the last of the old-school AI researchers, is still trying. Closeted away in an anonymous office park in Austin, Lenat has been spoon-feeding “common sense” into a computer called Cyc (as in encyclopedia) for more than a decade. It is not unlike the education of Hal, although Cyc’s inputs are generally more prosaic than “Daisy.”

If nothing else, the painstaking process has allowed Lenat’s small staff of “Cyclists” to confront the complexity of the human condition. In a recent brainstorming session, they pondered such imponderables as, “If Joe is Doug’s enemy, does that mean Joe knows Doug?”

“What if Joe is Doug’s friend?”

Lenat says Cyc has nearly absorbed enough of life’s subtle rules to be able to learn on its own. At that point, its accumulation of knowledge is expected to accelerate sharply. Yet there are still gaps. Recently, Cyc came up with the axiom: “If something is a nonhuman resident of a house it is vermin.” Somebody, it seemed, had forgotten about pets.
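The vermin slip hints at how the knowledge actually goes in: as explicit rules that hold only until somebody enters the exception. A hypothetical miniature of the idea, invented here for illustration rather than drawn from Cyc’s own representation language, might look like this:

```python
# A hypothetical miniature of rule-based common sense, illustrating the
# "vermin" slip described above. The predicates and rules are invented for
# illustration; Cyc's real knowledge base is far larger and written in its
# own language.

facts = {
    ("resident_of_house", "rex"),
    ("nonhuman", "rex"),
    ("pet", "rex"),
    ("resident_of_house", "mouse_in_wall"),
    ("nonhuman", "mouse_in_wall"),
}

def is_vermin_naive(thing):
    # The over-general axiom: any nonhuman resident of a house is vermin.
    return ("nonhuman", thing) in facts and ("resident_of_house", thing) in facts

def is_vermin_fixed(thing):
    # Same rule, after someone remembers about pets.
    return is_vermin_naive(thing) and ("pet", thing) not in facts

print(is_vermin_naive("rex"))            # True  -- the family dog counts as vermin
print(is_vermin_fixed("rex"))            # False -- the exception repairs it
print(is_vermin_fixed("mouse_in_wall"))  # True
```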

Lenat is disappointed in onetime compatriots who have abandoned the dream: “They’ve settled for thin slivers, for safe and relatively mundane applications that have the veneer of intelligence but don’t have the breadth. We’re the only thing that’s qualitatively similar to the feel of what Hal was.”

But other approaches are ambitious in different ways, and may prove more fruitful. Brooks, a former student of Lenat’s, has pioneered a behavioral model that has slowly been endorsed by much of the discipline.

“I wanted to build a human, which is why Cog is embodied with human form,” he says in reference to his young robot. Brooks says the inspiration for Cog came at a party he held for Hal on Jan. 12, 1992--the birth date given in the movie, which differed from Clarke’s original, apparently as a result of a misreading of the script.

But how long Cog’s learning process will take is unknown.

A Different Kind of Evolution?

Pattie Maes, a former student of Brooks’, thinks it will take too long. So she has crafted software “agents” that also adapt and learn--but in the virtual environment of the Internet. Her agents may never be able to attain the breadth of intelligence of a Cog, Cyc or Hal, she admits. But they will more immediately be able to work with humans to extend their abilities and serve their needs.
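One hedged sketch of what “adapt and learn” can mean for such an agent: a small program that nudges its preferences up or down with each piece of user feedback. The news-topic example and the learning rule below are invented for illustration, not taken from Maes’ systems:

```python
# A minimal sketch of a software "agent" that adapts to its user, in the
# spirit of the adaptive agents described above. The news-topic example and
# the simple weight-update rule are illustrative assumptions.

from collections import defaultdict

class NewsAgent:
    def __init__(self):
        self.weights = defaultdict(float)   # how much the user likes each topic

    def observe(self, topics, liked):
        """Nudge topic weights up or down based on one piece of feedback."""
        for topic in topics:
            self.weights[topic] += 0.1 if liked else -0.1

    def score(self, topics):
        return sum(self.weights[t] for t in topics)

agent = NewsAgent()
agent.observe(["space", "computers"], liked=True)
agent.observe(["sports"], liked=False)

articles = {"Mars probe launches": ["space"], "Playoff roundup": ["sports"]}
ranked = sorted(articles, key=lambda title: agent.score(articles[title]), reverse=True)
print(ranked)   # the space story now outranks the sports story
```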

Tom Ray, a professor of biology and computer science at the University of Delaware, says the focus on human intelligence is all wrong, and he’s now trying to use the Internet as a kind of biological breeding ground for software.

“Scientists have so far made the mistake of equating ‘intelligence’ with ‘human,’ and so they are trying to design an intelligence in machines that is like human intelligence,” Ray says. “An intelligence is too complex to simply design. I think only the process of evolution can create such complexity.”
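A caricature of that evolutionary bet, random variation plus selection with no designer spelling out the answer, can be written in a few lines. The target-string task below is an invented toy (the target is borrowed from Hal’s song), not Ray’s actual system or anything running on the Internet:

```python
# A caricature of the evolutionary bet: random variation plus selection,
# with no designer specifying the solution. The target-string task is an
# invented toy, not Ray's actual system.

import random
import string

TARGET = "daisy daisy give me your answer do"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.02):
    """Randomly change a small fraction of characters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]                    # selection
    population = survivors[:1] + [                 # keep the best unchanged,
        mutate(random.choice(survivors))           # vary the rest
        for _ in range(199)]

best = max(population, key=fitness)
print(generation, best)
```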

Chris Langton, head of the artificial life project at the Santa Fe Institute, an unorthodox scientific think tank, argues that the whole notion of machine intelligence is misplaced. Hal, he notes, distinctly told a BBC interviewer in the movie that he was “conscious.” Drawn to AI research soon after he saw “2001” (“A lot of us were in a very good position to receive the movie’s attempt to convey altered states of consciousness,” he recalls), Langton now believes that the only way to understand how to grow an artificial consciousness is to study its evolution in nature.

“The part of Hal that I’m most interested in is the part where he went psychotic,” Langton says. “That made me believe much more that it was some sort of intelligent, conscious entity because it tapped into much more of human experience. We have some idea about how to achieve intelligence now, but I don’t think we could have anything go so wonderfully psychotic as Hal did. That’s what I’m after.”

Many Links Between Sci-Fi and Science

Today, though, our expectations about AI have fallen so far that nightmares about computers wresting control from humanity are now reserved for the truly delusional. So are many of the visions of thinking machines performing life’s chores, speeding the pace of scientific progress and teaching us the secret of humanity.

All of this is particularly distressing to “2001” fans because so many of the movie’s technology predictions did come true. Men went to the moon. Space stations were set in orbit. Computers are capable of controlling all the functions of a spaceship. We may not be going to Jupiter yet, but plans are in place to visit Mars.

“What I am trying to understand is, how could they have been so wrong about Hal?” asks a plaintive fan on a World Wide Web site where “2001” aficionados still debate things like the metaphysical significance of Hal’s name.

(It was initially assumed to be an allusion to IBM because each of Hal’s three letters immediately precedes one of IBM’s in the alphabet, but Clarke has since insisted it stands for heuristically programmed algorithmic computer. And IBM, which at first tried to distance itself from the film, has since embraced the association.)

Unlike the schlocky sci-fi flicks that came before it and the flashy, trashy ones that came after, “2001” was a movie even a scientist could love. Kubrick was known to have consulted with NASA and other technologists--Minsky among them--to get the most minute details right about clothes and food. And Clarke, who came up with the idea for geosynchronous satellites in 1945, is revered by fans as a “hard” science fiction writer, one who understands the science he writes about.

This was no “Star Wars” galaxy far, far away, where explosions can be heard in the silence of space. Nor was it the sterile “Star Trek” world of the 24th century, where the laws of physics apparently no longer apply. This was the near future, our future. Or so we thought.

“Some things take much longer than others,” says Clarke. “The tentative title for my next book is ‘Greetings, Carbon-Based Bipeds.’ That should give you an indication of what I expect.”

In a field where science fiction and science seem almost uncannily linked, Clarke’s forthcoming “3001: The Final Odyssey” may also offer some clues to the future.

In the new novel, Hal and astronaut Dave Bowman return and are joined as one entity, a psychic cyborg with no physical presence. Hal’s heirs, it seems, won’t be your basic 9000 series upgrades. In today’s science fiction, as in much AI research, walking, talking androids and planet-sized computer brains have been largely forsaken for intangible entities that spring from the sprawling computer network.

And if Hal’s fatal assertion of control over his human creators was alarming, the notion of an intelligence in the Net may raise even more profoundly disturbing questions about the relations between man and machine.

“What’s happened since Hal is that none of us are worried that our PC is too smart, since it’s clearly not smart enough, but we worry about these huge interconnected systems we’re creating which in some ways do have lives of their own,” says cyberpunk author William Gibson.

“I think the scary thing now is not so much that they will become us as we will become them. The dubious things the digital world can do are the subtle changes we’re not even aware of.”

During the first screening of “2001” in New York, attended by scientists and science fiction writers from around the world, Asimov is said to have stood up at the point where Hal refuses to open the pod bay doors for astronaut Dave Bowman and yelled, “He can’t do that. He’s breaking the Three Laws of Robotics.”

Hal shattered a previous generation’s fantasies about artificially intelligent life. On his birthday, Hal himself may hand the reins to a new science fiction image of the future.

As for what reality will bring, as Clarke says in his introduction to the novel “2001”: “Please remember, this is only a work of fiction. The truth will be far stranger.”
