In a life-or-death situation, can a robot make rational judgments for us?

Source: Internet
Author: User

January 12 news: Imagine a Sunday in the near future. A woman named Sylvia is lying in bed, having broken two ribs in a fall. A nursing robot is looking after her; let's call it Fabulon. Sylvia asks Fabulon for her painkillers. What should Fabulon do?

The programmers who wrote Fabulon's code would have given it a set of instructions: a robot may not harm a human; a robot must obey humans; a robot may not dispense any drug without its supervisor's authorization. Until this Sunday, those rules have worked well. But Sylvia's wireless network has failed, and Fabulon cannot reach the supervisor. Sylvia raises her voice and insists on her demand.
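The conflict is easier to see when the rules are written out as code. The sketch below is purely illustrative: the function, the rule wording, and the supervisor_reachable flag are assumptions made for this example, not Fabulon's actual programming. It simply shows how a fixed rule set can leave a robot with no defined action.

```python
# Illustrative sketch only: a fixed rule set with no way to resolve its own conflict.
# The rule wording and the supervisor_reachable flag are assumptions for this example.

def decide_on_painkiller(patient_in_pain: bool, supervisor_reachable: bool) -> str:
    # Rule 1: do not harm the human (leaving her in pain arguably violates this).
    # Rule 2: obey the human (she is asking for the drug).
    # Rule 3: never dispense medication without the supervisor's authorization.
    if not patient_in_pain:
        return "do nothing"
    if supervisor_reachable:
        return "ask supervisor; dispense if approved"
    # Rules 1 and 2 point toward helping; Rule 3 forbids it.
    # Nothing in the rule set says which takes priority, so the robot stalls.
    return "conflict: no action defined"

print(decide_on_painkiller(patient_in_pain=True, supervisor_reachable=False))
# -> conflict: no action defined
```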

"There's a conflict here," said Tufts Izz of the human-Computer Interactive laboratory at the University of Matthias Scheutz. "On the one hand, the robot needs to slow down the owner's pain; On the other hand, it cannot provide drugs to its owner without authorization." "Human caregivers can make their own choices and report afterwards." But robots cannot make their own judgments, at least for the time being.

A handful of experts in the emerging field of robot ethics are trying to solve such problems. Computer scientists are working with philosophers, psychologists, linguists, lawyers, theologians, and human-rights experts to identify the decision points a robot may encounter, so that robots can imitate the human process of judging right and wrong. Scheutz defines "morality" very broadly: anything that can serve as a basis for judgment when facing a dilemma.

Getting from the Roomba vacuum-cleaning robot to an autonomous home-care robot is a great technological leap, and robot ethics is one of the urgent problems standing in the way. The range of ethical choices is broad, from relatively simple judgments, such as whether Fabulon should give Sylvia the painkiller, to life-and-death decisions, such as a military robot deciding whether or not to shoot, or an automated car choosing between braking and swerving. Humans find it hard to make the right call in these situations, and when ethicists think them through from the robot's point of view, they often end up torn between the options.

Talking with robotics experts, I found that the example they cite most often for the ethics of autonomous machines is the driverless car, which is still at the prototype stage. Wendell Wallach, who heads research on science and ethics at Yale's interdisciplinary research center, said driverless cars will no doubt be safer than today's human drivers, at least on highways, where few decisions have to be made and human drivers are often texting or drifting carelessly between lanes. On city streets, however, even getting through an intersection confronts the robot with difficult judgments. "Humans make small probing moves," Wallach said. "They start the engine, inch forward a bit, until someone finally concedes and goes first." He paused, then added: "There is a lot of intelligence involved in that. Can a driverless car do the same?"

And many situations, Wallach says, are far more complicated than an intersection, such as three or four things happening at once. To take one example: if the only way for a car to avoid colliding with another vehicle is to hit a pedestrian, a moral choice is involved, and every decision will be different. Is the pedestrian a child? Could the car spare the pedestrian by hitting an SUV instead? Is there only one person in the SUV? What if there are six? Patrick Lin, the philosopher who heads the Ethics and Emerging Sciences Group at California Polytechnic State University, calls this kind of situation "moral mathematics."

Similar debates have a long history in ethics, but they become more complicated once driverless cars enter the picture. If the decision procedure always tries to minimize casualties, the vehicle might swerve away from a car carrying two people and drive off the road, putting only one person at risk of death: you. Or it might choose to hit a Volvo rather than a Mini Cooper, simply because a Volvo's occupants are more likely to survive the crash, which again means greater risk for you. These judgments are made in an instant. The vehicle records and monitors data through laser sensors, radar, and cameras mounted on the roof and windshield, and then makes probabilistic predictions about the behavior of the objects it observes. But this is not only a technical problem; it is also a philosophical one that designers must confront, and it raises questions of legal liability.
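A toy version of that minimize-casualties logic might look like the sketch below. The candidate maneuvers, occupant counts, and survival probabilities are invented for illustration; a real vehicle would have to estimate them from sensor data, but the underlying "moral mathematics" is the same expected-value comparison.

```python
# Toy "moral mathematics": choose the maneuver with the lowest expected casualties.
# All numbers below are invented for illustration, not real crash statistics.

maneuvers = {
    "brake_straight":  {"people_at_risk": 2, "survival_prob": 0.50},  # oncoming car, two occupants
    "swerve_off_road": {"people_at_risk": 1, "survival_prob": 0.70},  # only the car's own passenger
    "swerve_into_suv": {"people_at_risk": 1, "survival_prob": 0.90},  # SUV occupant, sturdier vehicle
}

def expected_casualties(option: dict) -> float:
    # Expected deaths = people exposed * probability each one does not survive.
    return option["people_at_risk"] * (1.0 - option["survival_prob"])

for name, option in maneuvers.items():
    print(f"{name}: expected casualties = {expected_casualties(option):.2f}")

best = min(maneuvers, key=lambda name: expected_casualties(maneuvers[name]))
print("chosen maneuver:", best)
```

A purely casualty-minimizing rule like this is exactly what makes the scenario uncomfortable: the "best" maneuver may be the one that shifts the risk onto the car's own passenger.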

The military already fields lethal automated weapon systems, such as cruise missiles. It is also developing ground robots that would judge their actions against the international laws of war. For example, a program could specify that a person in an enemy uniform is identified as an enemy combatant and may be fired upon, but that firing is forbidden when the target is near a school or hospital, or when the person is already wounded.
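As a rough illustration of how such engagement rules could be encoded, here is a minimal Python sketch. The field names, rule ordering, and examples are assumptions made for this article, not part of any actual weapons system.

```python
# Illustrative rule check only; field names and rule order are assumptions.

def may_fire(target: dict) -> bool:
    # Never fire on or near protected sites such as schools or hospitals.
    if target.get("near_protected_site"):
        return False
    # Never fire on someone who is already wounded (hors de combat).
    if target.get("wounded"):
        return False
    # Only a person identified as an enemy combatant (e.g., by uniform) may be engaged.
    return target.get("in_enemy_uniform", False)

print(may_fire({"in_enemy_uniform": True}))                              # True
print(may_fire({"in_enemy_uniform": True, "near_protected_site": True})) # False
print(may_fire({"in_enemy_uniform": True, "wounded": True}))             # False
```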

Ronald Arkin, a robotics expert at Georgia Tech, is working with the military on ethical rules for robots. "My main goal is to reduce non-combatant casualties on the battlefield," he said. His lab has developed a mechanism called an "ethical adaptor" that lets a robot feel something like guilt. The mechanism is triggered whenever the program detects that the damage caused by a particular weapon does not match what was expected. If the discrepancy is large enough, the robot's guilt level crosses a threshold and it stops using that weapon. Robots still cannot judge situations much more complex than a simple "shoot or don't shoot," Arkin concedes, but on the whole they would make fewer mistakes than humans, whose battlefield behavior is often distorted by panic, confusion, or fear.
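The guilt mechanism described above can be sketched roughly as follows. The guilt scale and threshold are invented for illustration; Arkin's actual ethical adaptor is far more elaborate, but the basic idea of accumulating guilt from unexpected damage and disabling the weapon past a threshold is the one reported here.

```python
# Rough sketch of a guilt-based cutoff; the numbers and scaling are invented.

class EthicalAdaptor:
    def __init__(self, guilt_threshold: float = 1.0):
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold

    def record_engagement(self, expected_damage: float, observed_damage: float) -> None:
        # Guilt accumulates when observed damage exceeds what was predicted.
        self.guilt += max(0.0, observed_damage - expected_damage)

    def weapon_enabled(self) -> bool:
        # Once accumulated guilt crosses the threshold, further weapon use is blocked.
        return self.guilt < self.guilt_threshold

adaptor = EthicalAdaptor(guilt_threshold=1.0)
adaptor.record_engagement(expected_damage=0.2, observed_damage=0.9)  # far worse than expected
adaptor.record_engagement(expected_damage=0.1, observed_damage=0.6)
print("weapon enabled:", adaptor.weapon_enabled())  # False: guilt 1.2 exceeds the threshold
```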

It is precisely robots' lack of emotion that makes many people uneasy about giving them power over human lives. Being shot by a robot is a death without dignity. Peter Asaro of Stanford Law School told a United Nations conference on conventional weapons in Geneva last May that machines are "unfit to judge human value." "They are not fit to take human life within the framework of the law; that would strip us of our dignity," he said. The United Nations will take up the question of autonomous weapons again this April.

Asaro's remarks point to the fundamental problem raised by combining automation with moral judgment, a combination most people find impossible to reconcile. Yet however we feel about it, the fact that more and more robots are entering our lives is beyond dispute. The prototype of Google's driverless car was unveiled last month; robotic drones are under development; and robots are already used in some areas of medicine, such as stroke rehabilitation. We therefore have to face the reality that robots will inevitably turn up in every situation that demands an ethical decision.

Experts tend to be optimistic about robots' ethical prospects. Wallach speaks of a "moral Turing test," meaning that one day a robot's behavior will be indistinguishable from a human's. Scheutz goes further, arguing that robots will sooner or later hold themselves to a higher moral standard than humans do. There is something comforting in the idea that morality can be computed by an algorithm, and that the result might be better than the choices a frightened person would make. But, as with other forms of labor, is it really all right to hand moral judgment over to robots?

(Responsible editor: Mengyishan)

