The Evolution of Artificial Intelligence: Can Morality Be Programmed?

Foreword

I came across an article discussing a topic I pay close attention to: the problem of morality in artificial intelligence. Others have translated it before, but I wasn't satisfied with those versions, so I made my own.

There are still quite a few rough spots in this translation; I will come back and revise them over time.

Original link: http://futurism.com/the-evolution-of-ai-can-morality-be-programmed/


-----


IN BRIEF


Our artificial intelligence systems are advancing at a remarkable rate, and though it will be some time before we have human-like synthetic intelligence, it makes sense to begin working on programming morality now. Researchers at Duke University are already well on their way.



Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: a car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into the oncoming lane, hitting another car that is already there? Does it swerve off the road and hit a tree? Does it continue forward and hit the child?


Each option comes with a problem: it could result in death.

It's an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means we need to figure out how to program morality into our computers.


Vincent Conitzer, a professor of computer science at Duke University, and co-investigator Walter Sinnott-Armstrong from Duke philosophy, recently received a grant from the Future of Life Institute to try to figure out just how we can make an advanced AI that is able to make moral judgments... and act on them.


MAKING MORALITY


At first glance, the goal seems simple enough: make an AI that behaves in a way that is ethically responsible. However, it is far more complicated than it initially seems, as an amazing number of factors come into play. As Conitzer's project outline puts it, "Moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems."


That's what we're trying to do now.


In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI doesn't decide to wipe out humanity, such a thing really isn't a viable threat at the present time (and it won't be for some time). As a result, his team isn't concerned with preventing a global robotic apocalypse by making selfless AI that adores humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.



So, how do you make an AI that is able to make a difficult moral decision?


Conitzer explains that, to reach their goal, the team is following a two-step process: having people make ethical choices in order to find patterns, and then figuring out how those patterns can be translated into an artificial intelligence. He clarifies, "What we're working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then using machine learning to try to identify the general pattern and determine the extent to which we could reproduce those kinds of decisions."
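
To make that second step concrete, here is a minimal sketch of what such a pipeline could look like. To be clear, this example is not from the article or from Conitzer's project: the features, data, and model choice are all invented for illustration.

    # Hypothetical sketch: encode morally relevant features of dilemmas,
    # collect human judgments, and fit a simple model to look for a pattern.
    from sklearn.tree import DecisionTreeClassifier

    # Each row encodes one dilemma (features invented for this example):
    # [people at risk if swerve, people at risk if straight,
    #  promise broken (0/1), privacy violated (0/1)]
    dilemmas = [
        [1, 1, 0, 0],
        [2, 1, 0, 0],
        [1, 3, 1, 0],
        [0, 1, 0, 1],
    ]
    # The choice each human subject made: 0 = go straight, 1 = swerve
    human_choices = [0, 0, 1, 1]

    model = DecisionTreeClassifier(max_depth=3)
    model.fit(dilemmas, human_choices)

    # Ask the model what a person would likely choose in a new situation
    new_dilemma = [[1, 2, 0, 0]]
    print(model.predict(new_dilemma))  # prints the model's guess, e.g. [0] or [1]

The point is only the shape of the approach: encode morally relevant features, fit a model to human judgments, and use its predictions as a first approximation of what a person would decide.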


In short, the team is trying to find the patterns in our moral choices and translate those patterns into AI systems. Conitzer notes that, at a basic level, it's all about making predictions regarding what a human would do in a given situation: "If we can become very good at predicting what kind of decisions people make in these kinds of ethical circumstances, well then, we could make those decisions ourselves in the form of a computer program."

Right now, maybe our moral development hasn't come to its apex.

However, one major problem with this is, of course, that our moral judgment is not objective; it is neither timeless nor universal.


Conitzer articulates the problem by looking to previous decades: "If we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn't see as 'good' now. Similarly, right now, maybe our moral development hasn't come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, are completely immoral. So there is a kind of risk of bias, of getting stuck at whatever our current level of moral development is."



And of course, there is the aforementioned problem regarding how complex morality is. "Pure altruism, that's very easy to address in game theory, but maybe you feel like you owe me something based on previous actions. That's missing from the game theory literature, and so that's something we're also thinking about a lot: how can you make what game theory calls 'solution concepts' incorporate this aspect? How do you compute these things?"
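
As a toy illustration of the gap Conitzer is pointing at (again, invented here rather than taken from the project), one can imagine a best-response computation whose utilities are adjusted by a "debt" carried over from past actions, something classic payoff matrices leave out:

    # Payoff matrix for player A in a toy 2x2 game (numbers invented):
    # keys are (A's action, B's action).
    payoffs_A = {("share", "share"): 3, ("share", "keep"): 0,
                 ("keep", "share"): 5, ("keep", "keep"): 1}

    def best_response_A(b_action, debt_to_B=0.0):
        """Pick A's best action against b_action, optionally discounting
        actions that ignore an outstanding obligation to B, i.e. a debt
        A incurred through past actions."""
        def adjusted(a_action):
            u = payoffs_A[(a_action, b_action)]
            # Hypothetical reciprocity term: keeping while in debt feels worse.
            if a_action == "keep":
                u -= debt_to_B
            return u
        return max(("share", "keep"), key=adjusted)

    print(best_response_A("share"))               # 'keep'  (pure self-interest)
    print(best_response_A("share", debt_to_B=3))  # 'share' (obligation changes it)

The hard research question is not writing such an adjustment, but justifying it: deciding what the debt term should be and how a solution concept should incorporate it in general.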



To solve these problems, and to help figure out exactly how morality functions and can (hopefully) be programmed into an AI, the team is combining methods from computer science, philosophy, economics, and psychology. "That is, in a nutshell, what our project is about," Conitzer asserts.


But what about sentient AI? When will we need to start worrying about them and discussing how they should be regulated?




HUMAN-LIKE AI


According to Conitzer, human-like artificial intelligence won't be around for some time yet (so, yay! No Terminator-style apocalypse... at least for the next few years).


"Recently, there has been a number of steps towards such a system, and I think there has been a lot of surprising advanc Es....but I think have something like a ' true AI, ' one that's really as flexible, able to abstract, and does all these thing s that humans does so easily, I think we ' re still quite far away from that, "Conitzer asserts.
"Recently, there has been a lot of progress towards such a system, and I think there are a lot of amazing developments .... But I think that having ' real artificial intelligence ' can be as flexible as human beings, able to abstract, and accomplish these things as easily as people do, and I think we're still a long way from the situation, "Kony said.



It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades.

True, we can program systems to do a lot of things that humans do well, but there are some things that are exceedingly complex and hard to translate into patterns that computers can recognize and learn from (which is ultimately the basis of all AI).



"What came out of early Ai, the first couple decades of AI, is the fact that certain things, we had Thought of as being real benchmarks for intelligence, like being able to play chess well, were actually quite accessible to computers. It was wasn't easy-to-write and create a chess-playing program, but it was doable. "
"What did the earliest artificial intelligence research come to the conclusion that in the first decades of research, it was discovered that what we once thought was a milestone in the development of intelligence, such as the ability to play chess well, is actually quite simple for a computer." It's not easy to write and create a game that can play chess, but it's really achievable. ”


Indeed, today we have computers that are able to beat the best players in the world in a host of games, such as chess and Go.


But Conitzer clarifies that, as it turns out, playing games isn't exactly a good measure of human-like intelligence. Or at least, there is a lot more to the human mind. "Meanwhile, we learned that other problems that were very simple for people were actually quite hard for computers, or to program computers to do. For example, recognizing your grandmother in a crowd. You can do that quite easily, but it's actually very difficult to program a computer to recognize things that well."


Since the early days of AI, we have made computers that are able to recognize and identify specific images. However, to sum up the main point, it is remarkably difficult to program a system that is able to do all of the things that humans can do, which is why it will be some time before we have a "true AI."



Yet, Conitzer asserts that now is the time to start considering what rules we will use to govern such intelligences. "It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades, and it definitely makes sense to try to think about these things a little bit ahead." And he notes that, even though we don't have human-like robots just yet, our intelligent systems are already making moral choices and could, potentially, save or end lives.


"Very often, many of these decisions that they make does impact people and we may need to make decisions that'll typically be considered to is a morally loaded decision. And a standard example are a self-driving car that have to decide to either go straight and crash into the car ahead of it O R Veer off and maybe hurt some pedestrian. How does the make those trade-offs? And that I think are something we can really make some progress on. This doesn ' t require superintelligent AI, simple programs can just make these kind of trade-offs in various ways. "
"In many cases, many of the decisions they make do have an impact on humans, and we need to make decisions that are generally considered to be ethical." A standard case is that autonomous vehicles must decide to go straight or bump into a car in front of them, or change direction, but it is possible to hit pedestrians on the sidewalk. How do you make a choice? And I think there are some things that we really do to make some progress. This does not require Super AI, and a simple program can be used in many different ways to decide the trade-offs. ”



But of course, knowing what decision to make would first require knowing exactly how our morality operates (or at least having a fairly good idea). From there, we can begin to program it, and that is what Conitzer and his team are hoping to do.


So welcome to the dawn of moral robots.

This interview has been edited for brevity and clarity.

-----

Typing all this up is not easy; let us encourage one another!



