Film ponders future of humans in a world run on artificial intelligence
Newly released sci-fi film "Chappie" explores questions about the limits of artificial intelligence. - photo by Chandra Johnson
The funeral scene played like something out of science fiction.

The bodies, once new and clean and unmarked by the passage of time, were laid out on the temple altar, adorned with tags denoting their family lineage. A Buddhist priest prayed over the bodies as loved ones looked on.

The dead weren't children or relatives, but to the elderly mourners assembled for the mass funeral earlier this year at the Kofuku-ji temple near Chiba, Japan, the 19 bodies being blessed were family. The "dead" were Aibo (the Japanese word for "companion") robot dogs, created by Sony and wildly popular in Japan. The dogs are especially beloved among the country's seniors, who make up 25 percent of Japan's population.

It's a tableau of how attached society has become, and will continue to be, to objects built with artificial intelligence, a theme that's been sci-fi fodder for decades, from Jules Verne to Ridley Scott's "Blade Runner" to this year's new release, "Chappie," the story of a lovable robot who essentially becomes human in a world that debates the consequences of his existence.

To author and filmmaker James Barrat, the fact that robot dogs are laid to rest is a sign of society's problematic and increasingly personal relationship with artificial intelligence.

"As humans, we anthropomorphize things, and that's incredibly dangerous when dealing with artificial intelligence," Barrat said. "We think that because they can talk to us, they have all the machinery we do behind our eyes. They never will. And we have to be wary of our own desire to make them just like us."

As more artificial intelligence works its way into everyday life, from Google search to Siri, problems with the technology have raised concerns about the future of human control over it. In January, technology moguls and leaders like Bill Gates, Stephen Hawking and Elon Musk backed the Future of Life Institute's A.I. conference and open letter, a plea to reorient A.I. research priorities to include safety measures as the technology develops and potentially overtakes human comprehension.

Barrat hopes more sci-fi films will spark a serious conversation about the risks of A.I.

"Films about A.I. have inoculated us from taking these questions seriously. We've had so much fun with the Terminator and HAL 9000 that when we're confronted with actual A.I. peril, we laugh it off," Barrat said. "In the movies, the humans always win. In real life, that doesn't always happen."

The intelligence explosion

The central tension in "Chappie," which opened last weekend, is that the robot main character develops the way a human child would: he learns from his environment and by mimicking his creators. Whether his similarities to humans make him human is the question his creators, and the people trying to destroy him, wrestle with.

The stakes outlined in "Chappie," that humans must maintain control over the A.I. robots they create to avoid peril, are issues computer science professor Satinder Baveja deals with every day.

Baveja runs the A.I. lab at the University of Michigan, where he's pursuing the ultimate goal of creating a definitive electronic version of a human mind. Like many A.I. scientists, Baveja is trying to create a computer that can think, problem-solve and learn from its environment just as humans do as they grow up, but he's trying to do it responsibly.
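
The learning Baveja describes, trial and error against an environment, is the heart of the reinforcement-learning techniques common in labs like his. Below is a minimal Python sketch of the idea, assuming a toy two-lever slot machine: the agent starts knowing nothing and learns which lever pays off. The payout rates and learning parameters are invented for illustration; this is not Baveja's research code.

```python
# A minimal learning-from-experience sketch: a tabular agent learns by
# trial and error which of two levers pays out more often. All numbers
# here are illustrative assumptions.
import random

random.seed(0)
PAYOFFS = {0: 0.2, 1: 0.8}   # hidden probability that each lever pays out
q_values = {0: 0.0, 1: 0.0}  # the agent's learned value estimates
ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

for _ in range(5_000):
    # Mostly pull the best-looking lever, but occasionally explore.
    if random.random() < EPSILON:
        action = random.choice([0, 1])
    else:
        action = max(q_values, key=q_values.get)
    reward = 1.0 if random.random() < PAYOFFS[action] else 0.0
    # Nudge this lever's estimate toward the reward just observed.
    q_values[action] += ALPHA * (reward - q_values[action])

print(q_values)  # the estimates roughly track the true payout rates
```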

"You have to plan for the worst-case scenario," Baveja said. "If your entire power grid is automated, for instance, you wouldn't want the A.I. to make decisions that are contrary to societal values. How you build that into a program is the interesting question."
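
What "building societal values into a program" can look like, in its simplest form, is a hard constraint an automated controller is never allowed to trade away, no matter the savings. The Python sketch below is a hypothetical illustration: the grid actions, the 5 percent load-shedding cap and the dollar figures are all invented for the example.

```python
# A minimal sketch of a value constraint on an automated controller:
# proposed actions are vetoed before execution if they cross a hard limit,
# no matter how much money they would save. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    load_shed_pct: float  # fraction of customers cut off, 0.0-1.0
    cost_savings: float   # dollars saved by taking this action

def violates_constraints(action: Action) -> bool:
    """Hard limit encoding a 'societal value' the optimizer may not trade away."""
    MAX_LOAD_SHED = 0.05  # never cut power to more than 5% of customers
    return action.load_shed_pct > MAX_LOAD_SHED

def choose_action(candidates: list[Action]) -> Action:
    """Pick the highest-saving action among those that pass the check."""
    allowed = [a for a in candidates if not violates_constraints(a)]
    if not allowed:
        raise RuntimeError("No safe action available; escalate to a human operator.")
    return max(allowed, key=lambda a: a.cost_savings)

options = [
    Action("shed_industrial_load", 0.20, 1_000_000),  # cheapest, but vetoed
    Action("buy_spot_power", 0.00, 250_000),
    Action("rolling_brownout", 0.04, 400_000),
]
print(choose_action(options).name)  # -> rolling_brownout
```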

The questions of control Baveja grapples with today are echoes of theories pioneered by computer scientist Alan Turing and statistician I.J. Good in the mid-20th century, though technology is only now catching up to what Turing and Good addressed.

Turing is famous for a 1950 paper in which he outlined his legendary Imitation Game, which shares a title with the 2014 biopic of Turing. Also called the Turing Test, the idea is that one day, machines will be able to mimic human reasoning and intelligence so seamlessly that a judge would not be able to tell the human apart from the machine.
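
The test's pass criterion is simple enough to state in code: if a judge shown anonymous replies can do no better than a coin flip at picking out the machine, the machine passes. In the toy Python simulation below, the canned replies and the coin-flipping judge are assumptions made purely to show that criterion.

```python
# A toy rendering of the Turing Test protocol: a judge sees replies from a
# hidden pair (one human, one machine) and must say which is the machine.
import random

HUMAN_REPLIES = ["I think so, yes.", "Hard to say, honestly.", "Why do you ask?"]
MACHINE_REPLIES = ["I think so, yes.", "Hard to say, honestly.", "Why do you ask?"]

def run_trial(judge) -> bool:
    """One round: the judge guesses which of two anonymous replies is the
    machine's. Returns True if the judge guessed correctly."""
    pair = [("human", random.choice(HUMAN_REPLIES)),
            ("machine", random.choice(MACHINE_REPLIES))]
    random.shuffle(pair)
    guess = judge(pair[0][1], pair[1][1])  # the judge sees only the text
    return pair[guess][0] == "machine"

def naive_judge(reply_a: str, reply_b: str) -> int:
    # With indistinguishable replies there is nothing to go on; flip a coin.
    return random.randrange(2)

accuracy = sum(run_trial(naive_judge) for _ in range(10_000)) / 10_000
# Accuracy near 0.5 means the judge cannot tell human from machine --
# by Turing's criterion, the machine "passes."
print(f"judge accuracy: {accuracy:.2f}")
```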

"In the same way a plane doesn't need to be a bird to fly, a computer doesn't have to be a brain to think," Barrat said.

In 1965, Good took Turing's idea further with a theory called the Intelligence Explosion. Good believed that if machines could match or surpass human intelligence, they would eventually create more and more advanced machines, essentially leaving humans in the intellectual dust.
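
Good's compounding logic shows up even in a toy calculation. In the Python sketch below, the assumption that each machine generation designs a successor 10 percent smarter is arbitrary; the point is how quickly the gap widens once improvement feeds on itself.

```python
# Good's "intelligence explosion" in toy form: each machine generation
# builds a slightly better successor, so capability compounds. The 10%
# per-generation improvement rate is an arbitrary illustrative assumption.
human_level = 1.0
capability = 1.0          # start at roughly human-level intelligence
improvement_rate = 0.10   # each generation designs a 10% better successor

for generation in range(1, 51):
    capability *= 1 + improvement_rate
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {capability / human_level:6.1f}x human level")
# Compounding widens the gap faster each generation: after 50 iterations
# the machine is roughly 117x human level.
```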

Baveja says that so far, society has been able to reap great benefits from A.I. that isn't yet autonomous. But that doesn't mean people shouldn't be wary. Baveja pointed to automated stock trading as an example of how computers can perform human tasks much faster while remaining safe. Most of these programs have safety measures to prevent automated traders from losing too much money or from hijacking the process, Baveja said.

"A lot of the A.I. we have right now is technology that gives us advice, like Google searches or Siri. Those technologies need us," Baveja said. "With automated systems, you have to think through the entire process. With trading, we've anticipated problems and put in fail-safes, but what if we don't always anticipate it?"
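
The fail-safes Baveja mentions often amount to a circuit breaker: a hard loss limit that halts an automated strategy before it can run away. The Python sketch below is a hypothetical illustration; the random trading strategy and the $10,000 daily limit are invented for the example, and real markets use far more elaborate controls.

```python
# A sketch of a trading "circuit breaker": a hard daily loss limit that
# halts an automated strategy. The strategy and limits are made up.
import random

MAX_DAILY_LOSS = 10_000.0   # hard stop: halt all trading past this loss
pnl = 0.0                   # running profit and loss for the day

def automated_trade() -> float:
    """Stand-in for one algorithmic trade; returns its profit or loss."""
    return random.gauss(mu=5.0, sigma=500.0)

random.seed(42)
for i in range(100_000):
    if pnl <= -MAX_DAILY_LOSS:
        # The fail-safe fires before losses can compound any further.
        print(f"circuit breaker tripped after {i} trades (P&L {pnl:,.0f})")
        break
    pnl += automated_trade()
else:
    print(f"day finished normally, P&L {pnl:,.0f}")
```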

At the rate society creates new technology, Barrat says, the day machines overtake humans could come fast if safety standards aren't put in place soon.

"There is a huge economic wind pushing human intelligence in a machine forward, because our government knows someone else will develop it if we don't," Barrat said. "We have a window now in which we can make it safer. In 20 years, we won't have a window anymore."

Creation and stewardship

Despite doomsday scenarios presented in science fiction, Baveja says the potential problems A.I. development presents are inherent in all kinds of scientific progress. He's optimistic that society will adopt standards of A.I. safety as problems emerge.

"Right now, this technology acts as sensors, with humans making the decisions," Baveja said. "That could of course be taken out of human hands, and it's up to us to weigh the costs and benefits. I don't think that's unique to A.I."

Baveja theorized that, like the mourners at the Aibo funeral in Japan, humans will continue to become attached to their A.I. devices and will eventually fight for them, much as the animal rights movement fights for animals, or as Chappie's creators fight for his right to be his own person.

"We'll build pretty capable creatures, and I can see people in the streets demanding their rights," Baveja said. "I think they'll land somewhere between a pet and a servant, kind of."

While Barrat isn't overly optimistic about the future of A.I., he agrees that the solution isn't to fear the technology itself, but the institutions that may wind up controlling it.

Barrat hopes society will learn from the last time a world-altering technology was developed without any legal regulation: nuclear fission.

"Our innovations have always run way ahead of our stewardship. Like A.I., the first days of nuclear fission were full of promises about benefits, and the public finally learned about it at Hiroshima," Barrat said. "We as a species held a gun to our own heads and came close to going extinct because of failure to manage this technology."