Will your family robot share your family's values?
Susan and Michael Anderson with NAO, a robot they've programmed to make ethical decisions about elder care. - photo by Jennifer Graham
HARTFORD, Conn. When a Czech playwright coined the word "robot" in 1921, he described mechanical creatures that have "no passion, no history, no soul."

The absence of soul might be good if your goal is to design a killing machine or simply a robot that will mop floors 24-7 without complaint.

But as manufacturers begin to introduce companion robots that play with children and look after the elderly, some scientists and ethicists are thinking that if robots can't have a soul, they should at least have some foundational ethics to govern their behavior.

The need for so-called "moral machines" encompasses not only the coming household robots that will patrol our homes, remember our birthdays and turn on the lights, but also the disembodied voices of Siri and Alexa, who fetch information for us on demand and in turn, share information about us with their manufacturers. It also expands to single-purpose robotic devices such as Roomba, which maps our homes while it vacuums our floors; the Laundroid, which folds and sorts laundry; and the Landroid, which cuts grass.

Susan and Michael Anderson, retired college professors in Connecticut, are at the forefront of this widening conversation about ancient values and modern machines.

She, a philosopher, and he, a computer scientist, merged their talents to create what they call the world's first ethical robot. They programmed NAO, a blue-and-white plastic robot just shy of 2 feet tall, to operate within ethical constraints, and they believe other robots can be created to not only reflect human values, but also to deduce them through a process called machine learning.

In doing so, the couple finds themselves at odds with some academics who reject their belief that there are universal standards of morality. They also collide with those who believe such standards can be derived from popular consensus, such as the Moral Machine project at the Massachusetts Institute of Technology.

Researchers there devised an online test in which people can grapple with ethical quandaries a self-driving car might face when a fatal accident is unavoidable. But ethical challenges in robotics go beyond self-driving cars.

Should a companion robot report a child's confidences to parents, or more troublingly, to the manufacturer of the device? How should a self-driving wheelchair respond if its occupant tries to commit suicide by plunging down stairs? What should a robot do if a child tells it to pour a hot drink on a younger sibling? And what should manufacturers of devices already ubiquitous in homes, such as Amazon's Echo and iRobot's Roomba, be allowed to do with the data they're collecting?

The sinister aspects of robotics are already embedded into the public imagination, thanks to movies such as "2001: A Space Odyssey" and "The Terminator." The burgeoning field of ethical robots can help alleviate some of that fear, the Andersons say.

And robots, it turns out, might help human beings become more ethical by spurring a global conversation about the values that diverse cultures cherish and want their robots to share.

Rules of robots

When Karel Čapek, who coined the word "robot," was writing a science-fiction fantasy about automatons performing human work, he didn't envision robots being so adorable, like the ones that charmed visitors at the Consumer Electronics Show in Las Vegas in January.

With wide, child-like eyes and perpetually smiling faces, these robots and their peers may one day replace Alexa and Siri in our homes.

The robots soon to vie for a place in our homes and businesses include Buddy, billed as the first companion robot; Zora, a $17,000 "humanoid entertainment" robot; Kuri, who has "emotive eyes and a friendly disposition"; and Pepper, a business robot appropriately named after Iron Man's personal assistant.

Other robots already in use are purposefully more threatening, such as the robot police that patrol in Dubai and the security robot that, for a time, patrolled in San Francisco. And the robot strippers introduced in Las Vegas struck some observers as mesmerizing and others as creepy.

Regardless of how friendly or disturbing they appear, robots have the potential to behave in unexpected ways, particularly as they develop the ability to learn through innovations such as deep learning and reinforcement learning.

Until recently, however, the only code of ethics that existed for robots was one devised by the Boston biochemist and science fiction writer Isaac Asimov in a short story published in 1942. Asimov laid out three rules: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by a human unless the orders conflict with rule No. 1; and a robot must protect its own existence so long as doing so does not conflict with rules No. 1 and 2.

Over time, however, other principles have entered into the discussion about robot ethics, such as the human need for autonomy, said the Andersons in an interview at the University of Hartford, about an hour from their home in New Milford, Connecticut.

The seven duties

The Andersons came to the subject of ethics and robots because of a natural intersection of their interests. Michael Anderson is a professor emeritus of computer science at the University of Hartford; Susan Anderson is a professor emerita of philosophy at the University of Connecticut at Stamford.

When they first began this work, Michael Anderson had just completed 10 years of research into how computers might deal with diagrammatic information and was searching for a new project. Long intrigued by the movie "2001: A Space Odyssey," he was fascinated with HAL, the artificial intelligence gone bad in that film, and later read a book about how the movie was made.

"Reading 'The Making of 2001' gave me the thought that it was time to start taking the ethics of AI systems seriously, and the fact that I had a live-in ethicist gave me the confidence to think we might be able to accomplish something in this area," he said.

Susan, in true philosopher fashion, was skeptical, Michael said, but working together, they devised a program based on utilitarianism, Jeremy Bentham's belief that the most ethical action is the one that is likely to result in the greatest net good.

But then they realized that a better system for this task was one proposed by the late Scottish philosopher David Ross, who posited seven ethical considerations known as prima facie duties. (Prima facie is Latin for "at first sight.")

The prima facie duties, according to Ross, are fidelity, reparation, gratitude, non-maleficence (doing no harm, or the least possible harm to obtain a good), justice, beneficence and self-improvement. But these duties often conflict, and when that happens, there's no established set of rules for deciding which duty trumps another.

Could artificial intelligence sort through these duties and decide which ones mattered most?

Their test case was a medical dilemma: A robot is supposed to remind a patient to take her medicine. The patient refuses. When is it OK for the robot to honor the patient's wishes, and when is it appropriate for the robot to keep asking, or to notify a doctor?

Working together, the Andersons wrote a program that would allow an elder care robot to respond using three prima facie duties: minimize harm, maximize good and respect the patient's autonomy. From a foundation of a few clear cases, the machine could later tease out good decisions in unfamiliar cases.
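For a concrete, if simplified, picture of what weighing those three duties might look like, here is a minimal sketch in Python. The numeric duty scores, the equal weights and the function names are invented for illustration; the Andersons' actual system learned its decision principle from example cases rather than from hand-set weights like these.

```python
# Hypothetical sketch: score each possible response to a refused
# medication reminder against three duties and pick the best one.
# The numbers encode a case where skipping the dose risks real harm.

ACTIONS = {
    # action: (harm avoided, good done, autonomy respected), each -2..+2
    "accept_refusal": (-2, -1, +2),  # respects autonomy, risks harm
    "remind_again":   (+1, +1, -1),  # mild nagging, some benefit
    "notify_doctor":  (+2, +2, -2),  # overrides autonomy, prevents harm
}

WEIGHTS = (1.0, 1.0, 1.0)  # equal weighting, assumed for illustration

def choose_action(actions=ACTIONS, weights=WEIGHTS):
    """Return the action whose weighted duty satisfaction is highest."""
    def score(duties):
        return sum(w * d for w, d in zip(weights, duties))
    return max(actions, key=lambda a: score(actions[a]))

print(choose_action())  # -> "notify_doctor" under these invented scores
```

With different scores, say for a vitamin rather than a heart medication, the same principle would tip toward honoring the patient's refusal.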

They then partnered with a roboticist in Germany, Vincent Berenz of the Max Planck Institute for Intelligent Systems, who embedded the program into NAO, an endearing plastic robot who looked around and said, "I think I'm going to like it here" when Michael Anderson took him out of the shipping box.

NAO, which sells for about $9,000, had become the world's first ethical robot, the Andersons said.

"Machine learning worked," Michael Anderson said.

The couple continues to work within the realm of elder care and has since expanded NAO's ethical choices from three to seven: honor commitments, maintain readiness, minimize harm to the patient, maximize good to the patient, minimize non-interaction, maximize respect of autonomy and maximize prevention of immobility.

Ethics or 'value alignment'?

On the website of Blue Frog Robotics, you can watch a family interact with Buddy, a toddler-sized robot who wakes the children, patrols the house, helps mom make dinner and plays hide-and-seek. "Buddy is always there for what really matters," a narrator says.

Robots like Buddy and NAO, however, are years and possibly decades away from being ubiquitous in American homes, robotics experts say. Although robots electrified audiences at the 2018 Consumer Electronics Show, one writer for Quartz dismissed most of them as "iPads on wheels."

Wendell Wallach, chairman of technology and ethics at Yale University's Interdisciplinary Center for Bioethics and senior adviser to the Hastings Center, said today's robots are largely single-purpose machines unable to reason, make decisions or acquire language.

"There are all kinds of real limitations, and unfortunately, there's too much hype around what the present-day machines can and cannot do and how quickly we'll see more advanced forms of cognitive capability," Wallach said.

That's giving researchers more time to consider how to program ethics into machines, and even whether it's OK to use that word. As robots become more advanced, spurring more interest in the field, many people in artificial intelligence shun the word "ethics" and instead prefer to talk about "value alignment."

"A lot of scientists don't like the words ethics and morals; they think the words have been discredited because even philosophers don't agree in their application in all situations," Wallach said. "I think that's a simplistic way of looking at it that doesn't acknowledge that values pervade all of our actions."

Michael Anderson came to a similar conclusion in his work with NAO. "You are always in an ethical situation," he said. "Maybe it's the tiniest thing, like a robot is wasting battery when it could be charging itself," so it's ready to help someone later.

That's similar to the seventh of Ross's prima facie duties, self-improvement, and also to the seventh habit in the late Stephen Covey's character-based "The 7 Habits of Highly Effective People": renewal. In fact, Covey's teaching, which encouraged people to consciously live by moral principles, is similar to what the Andersons expect of their robot.

"We drive the behavior of the robot with an ethical principle, and it is determining every hundredth of second what the right thing to do is, and then doing that thing," Michael Anderson said.

The quest for 'moral competence'

The late Alan Turing, an English mathematician widely considered the founder of computer science, famously asked, "Can machines think?" and posited what became known as the Turing Test. According to the test, a computer is intelligent if it can convince a human being that it is human.

In their book "Moral Machines: Teaching Robots Right from Wrong," Wallach and Colin Allen proposed a "Moral Turing Test" and suggested that machines might prove to be more moral than humans.

Daniel Estrada, who teaches ethics at the New Jersey Institute of Technology, was one of the presenters at a January conference on artificial intelligence and ethics in New Orleans. He's not a fan of the Moral Turing Test for the same reason he doesn't like the original Turing Test: It's too easy for an intelligent machine to deceive a human being.

"Just talking to a machine doesn't really tell you anything; it might be fooling you, or it might be a really clever program," Estrada said.

The trend of using the term "value alignment" instead of "morals" or "ethics," he said, stems from the belief that it doesn't matter whether a machine itself is ethical. "They don't care if machines are moral agents; all they care about is if they are behaving in a way that fits with human expectations and human values," Estrada said.

Matthias Scheutz, director of the Human-Robot Interaction Lab at Tufts University near Boston, doesn't back away from using terms like morality and ethics when it comes to creating robots that best serve humanity. He believes "value alignment" to be a term so vague that it is almost meaningless.

Scheutz, who earned Ph.D.s in both philosophy and computer science, leads a multi-university research initiative funded by the U.S. Department of Defense called Moral Competence in Computational Architectures for Robots. His team is developing algorithms that allow robots to abide by social and moral norms and to temporarily suspend norms when necessary to improve outcomes. One part of this is teaching robots when to say no to a human command.
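A toy sketch of what such a norm check might look like appears below. The norms, the command format and the refusal text are invented for illustration and are not Scheutz's architecture; the point is only that a command gets tested against explicit norms before the robot acts.

```python
# Hypothetical norm check before executing a command.

NORMS = [
    # (description, predicate that is True when the command is permitted)
    ("do not harm people", lambda cmd: not cmd.get("harms_person", False)),
    ("do not damage property", lambda cmd: not cmd.get("damages_property", False)),
]

def respond_to_command(cmd):
    """Accept the command, or refuse it and name the norm it violates."""
    for description, permitted in NORMS:
        if not permitted(cmd) and not cmd.get("overriding_justification"):
            return f"I can't do that: it violates the norm '{description}'."
    return "OK, doing it."

# A child asks the robot to pour a hot drink on a sibling:
print(respond_to_command({"action": "pour_hot_drink", "harms_person": True}))
```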

"These systems need to basically understand that there are norms that we use all the time. Norms are what makes our societies work," Scheutz said.

"The problem is, our current robotic systems have no such notion; they don't know what they're doing, they don't know what their relationship is with other individuals, they don't understand what principles there are that ought to guide their behavior. They just act, and that's insufficient. It's insufficient because if they don't have a sense of norms, they're very likely to violate norms," Scheutz said.

The challenge before artificial intelligence developers is to create robots that are aware of what types of jobs they're performing. "Right now, most robots that are out there or any (artificial intelligence) system don't know what they're doing. AlphaGo doesn't know that it's playing Go. Autonomous cars are driving, but they don't know that they're driving," Scheutz said.

Robots as teachers

When it comes to teaching ethics to robots, Susan Anderson believes that not everyone is qualified to be a teacher.

"It's important that we don't look to ordinary humans, but rather to ethicists, who have had a lot more experience and are better judges. If we look at ordinary humans and see how they behave, it's a pretty bad record," she said.

That was horrifically evident on Twitter in 2016, when Microsoft debuted a chatbot named Tay, created to interact with teens and learn from its conversations. Within hours, Tay had picked up hate speech from other Twitter users, forcing Microsoft to shut it down in less than a day.

The quest to build moral machines, however, stands to deepen our own understanding of the importance of ethics and values, and of how consciousness of them should drive our every action, much as the Andersons' robot constantly evaluates its next action within an ethical framework.

In his book "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John C. Havens urges people to conduct a thorough assessment of their top 10 values and to examine whether their patterns of behavior reflect those values.

"How will machines know what we value if we don't know ourselves?" Havens asks.

Likewise, Susan Anderson believes that the challenge of creating ethical robots will force human beings to reach consensus on what ethical behavior looks like across cultures.

My overall goal is to see if we can improve ethical theory, come up with a good ethical theory that I hope could be accepted by all rational people worldwide, she said. I think we could come up with quite a bit of agreement if we think about how do we want robots to treat us."

Estrada takes it one step further, saying that humans have an obligation to act ethically toward machines. This could include granting them rights, such as the right to make deliveries on public streets, which is no longer legal in San Francisco. Turing worried that humans were unfairly biased against machines, and recent incidents, including a hitchhiking robot that was beheaded in Philadelphia, have shown that people can be violent toward them, too.

"A lot of the discussion is about how to make machines conform to human values," Estrada said. "Alignment goes both ways.