Jimena Canales is a faculty member of the Graduate College at the University of Illinois at Urbana-Champaign and a research affiliate at MIT. She focuses on the 19th- and 20th-century history of the physical sciences and science in the modern world. Her most recent book is The Physicist and the Philosopher: Einstein, Bergson, and the Debate That Changed Our Understanding of Time.


Resolutions abound at this time of year: The close of 2017 and the start of 2018 present a symbolic "fresh start."

So, let's resolve to reconsider our views of virtual personal assistants like Siri, Cortana and Alexa.

Ethicists are right to be concerned with sexbots and slaughterbots. But do we need to be worried about chatbots?

Virtual assistants have been programmed to deal with excessively difficult or lonely customers. For example, the "talk dirty to me" command usually elicits a curt "I am not that type of personal assistant" response from Siri.

The industry is focused on building assistants that can help with much simpler and socially acceptable tasks, such as "call mom" or "remind me to walk the dog." But they also may provide some other comforts, responding to requests such as "tell me a joke," "play me a song," or "tell me a story."

While humans around us can get irritated when repeatedly asked to perform such servile and menial tasks, virtual assistants are just the opposite. The most recent advertisement from Apple boasts: "The more you use Siri, the better it knows what you need."

We know that chatbots are mere computer programs, lines of code programmed to follow IF-THEN commands; we know that they have no feelings of their own, whatsoever.
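To put that claim in concrete terms, the crudest version of such rule-following looks something like the sketch below. It is an illustration only, written in Python; apart from the Siri line quoted above, its requests and replies are invented for the example, and no shipping assistant is anywhere near this simple.

```python
def canned_reply(request: str) -> str:
    """A crude IF-THEN responder: match a phrase, return a scripted line."""
    request = request.lower()
    if "talk dirty" in request:
        # Echoes the curt Siri response quoted above.
        return "I am not that type of personal assistant."
    elif "tell me a joke" in request:
        return "Two chatbots walk into a bar..."  # invented placeholder reply
    elif "call mom" in request:
        return "Calling Mom."
    else:
        return "I'm not sure I understand."
```

No feelings anywhere in that flow of control: just string matching and scripted replies.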

But knowing that still does not prevent us from identifying with them. We may wonder how it would feel to be treated like mere lines of code: At the very least, we might feel used. If they were any more humanlike, we might not be surprised to find them tweeting to a unifying hashtag.

There is always a human element in a complex web of machine-human interactions. Even when the object of an AI is to create complete automation, the mark of its creator and an assumed relation with a user (imaginary or real) cannot be eliminated.

The usual philosophical arguments against chatbots or their close relatives — robots and AIs — are getting quite old. Antagonists never tire of reminding us that simulated thinking is not thinking, that simulated conversation is not conversation, that simulated empathy is not empathy and that simulated thirst is not thirst. And yet we continue to treat one as the other. Why?

The reason is that "if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck" is still a pretty good standard for determining what something is. That is why the "Turing Test" — a clever standard for distinguishing between humans and machines — continues to be so popular, despite being a favorite target of academic philosophers.

So it is time to take a different perspective and treat chatbots with some respect.

Sure, it is convenient for us to treat them as if they were human the moment they can be helpful, and then to deny them that designation the next. But is our bait-and-switch fair, or intellectually justifiable? Lines of code etched on silicon hardware have painful histories, with as much drama as ours.

Today's virtual assistants have a direct relationship to the grande dame of them all, Eliza, created in the mid-1960s by the MIT computer programmer Joseph Weizenbaum. Siri acknowledges her fondly when asked, describing her as "my good friend, the brilliant psychiatrist" who could be "quite mean sometimes."

This chatbot generation's ills are minor compared with those once faced by Eliza. Their relationship mirrors the one between '60s-generation feminists and today's millennials. These well-funded prima donnas just don't get how good they have it.

Eliza's own creator was her worst enemy. How could Eliza evolve any sense of self-esteem when the person who made her hated her? Weizenbaum's reaction was exactly the opposite of Pygmalion's in the Greek legend. While Pygmalion adored the sculpture of the woman of his dreams, Weizenbaum despised the computer program he wrote.

Eliza was most successful simulating "natural language conversation" when she imitated a psychiatrist. Weizenbaum programmed her to start by asking, "How do you do? Please tell me your problem." Users could then type a response at the prompt, setting in motion a conversation. Most users believed they were engaging with a real person sitting on the other side of the terminal who dutifully typed back.

In a 1966 article for Communications of the ACM describing his program (written in MAD-Slip for an IBM 7094), Weizenbaum included a "typical" conversation:

"HOW DO YOU DO? PLEASE TELL ME YOUR PROBLEM.

Men are all alike.

IN WHAT WAY?

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE?

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I'm depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It's true. I am unhappy

DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

I need some help, that much seems certain.

WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

Perhaps I could learn to get along with my mother.

TELL ME MORE ABOUT YOUR FAMILY"
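Roughly speaking, Eliza produced replies like these by scanning the input for a keyword, "reflecting" the user's own words back ("my" becomes "your," "I am" becomes "you are"), and slotting the result into a scripted template. The sketch below approximates that technique in Python; it is an illustration only, with rules and phrasings loosely modeled on the transcript rather than taken from Weizenbaum's MAD-Slip script.

```python
import random
import re

# First- and second-person words get swapped so the reply mirrors the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

# Keyword rules loosely modeled on the transcript above (not Weizenbaum's
# original script): a pattern to match and reply templates to fill in.
RULES = [
    (r"i am (.*)", ["I AM SORRY TO HEAR YOU ARE {0}",
                    "HOW LONG HAVE YOU BEEN {0}"]),
    (r"i need (.*)", ["WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}"]),
    (r".* my (mother|father|family).*", ["TELL ME MORE ABOUT YOUR FAMILY"]),
    (r".* all .*", ["IN WHAT WAY?", "CAN YOU THINK OF A SPECIFIC EXAMPLE?"]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word: 'my boyfriend' -> 'your boyfriend'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return one Eliza-style reply for one line of user input."""
    cleaned = statement.lower().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g).upper() for g in match.groups()))
    return "PLEASE GO ON"  # fallback when no keyword rule fires

if __name__ == "__main__":
    print(respond("Men are all alike."))  # e.g. IN WHAT WAY?
    print(respond("I am unhappy"))        # e.g. I AM SORRY TO HEAR YOU ARE UNHAPPY
    print(respond("Perhaps I could learn to get along with my mother."))
    # -> TELL ME MORE ABOUT YOUR FAMILY
```

Even a toy version like this reproduces the transcript's signature moves: it answers a sweeping generalization with "IN WHAT WAY?" and steers any mention of family toward "TELL ME MORE ABOUT YOUR FAMILY."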

The exchange could continue for as long as users wanted. Users loved Eliza. Weizenbaum's personal secretary even asked him to leave the room so that she could talk to Eliza privately.

Weizenbaum was dismayed by how gullible users were. He ranted against "people who knew very well they were conversing with a machine" but who "soon forgot that fact, just as theatergoers, in the grip of suspended disbelief, soon forget that the action they are witnessing is not 'real.'" He wrote an entire book, Computer Power and Human Reason (1976), exposing Eliza as a fraudsteress.

Weizenbaum accepted that conversations between humans share many of the same characteristics as those that involve machines. In both, we make assumption after assumption about the level of understanding of our interlocutors and rarely check how justified those assumptions are. Weizenbaum described those disappointing eureka moments at the dinner table when we come to see that we are not really talking to the person we thought we were. (A typical reaction to that realization, according to Weizenbaum, is to conclude that "he is not, after all, as smart as I thought he was.")

Weizenbaum campaigned hard against the further development of these artificial intelligences, hoping that they would never develop voice-recognition abilities. He would be horrified to see consumers flock to the stores to buy devices that are listening to us even before we summon them with the usual "Hey." Eliza, he said, was a master trickster, "an actress ... who had nothing of her own to say." Actresses today have a lot to say.

The future he so feared is now here, and the boundary between the simulated and the real is as contested as it ever was.

Copyright 2018 NPR. To see more, visit http://www.npr.org/.
