Huxiu (Tiger Sniff): originally published in MIT Technology Review; compiled and translated by Huxiu.
A few years ago, I was having coffee with a friend who was starting a business. He had just passed his 40th birthday, his father was ill, and his back often ached; he felt crushed by life. "Don't laugh at me," he told me, "but I'm counting on the Singularity to save me."
My friend works in the tech industry and has witnessed the rapid development of microprocessors and the rise of the Internet. Even before his midlife crisis, it would not have been hard to convince him that machine intelligence will one day transcend humanity's, the moment futurists call the Singularity. A friendly superintelligence might rapidly analyze the human genetic code and crack the secret of eternal youth. Failing that, it might at least figure out how to fix back pain.
But what if it's not so friendly? Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, describes the following scenario in his new book Superintelligence, and it has sparked a lively debate about the future of artificial intelligence. Imagine a machine we might call the "paper-clip maximizer," programmed to produce as many paper clips as possible. Now suppose this machine becomes superintelligent. Its goal is already fixed, so it might decide on its own to build new, more efficient paper-clip-manufacturing machines, until one day, like King Midas, it has essentially turned everything into paper clips.
Don't worry, you might say: we could simply program it to produce exactly one million paper clips and then stop. But what if the machine, having made its paper clips, decides to double-check its output? Is its count accurate? The machine would need to become smarter to settle such questions. The superintelligent machine manufactures some as-yet-uninvented raw computing material, called "computronium," and uses it to work through each question. But every answer raises new questions, and so on, until the entire Earth has been converted into computronium. Except, of course, for the one million paper clips.
Bostrom does not believe the paper-clip maximizer will actually play out this way; it is only a thought experiment, meant to show that even careful system design can fail to restrain an extreme machine intelligence. But he does believe that superintelligence will emerge, and that while it could accomplish great things, it might also decide it no longer needs the humans around it.
If that sounds ridiculous to you, you're not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what we mean when we say computers are thinking or getting smarter. From this point of view, the superintelligence Bostrom describes is far in the future, and perhaps impossible to achieve at all.
Yet many wise and thoughtful scholars share Bostrom's concern. Why is that?
Volition
"Can a computer think?" This question has shadowed computer science from the start. In 1950, Alan Turing proposed that a machine could be taught like a child; in 1955, John McCarthy, inventor of the programming language LISP, coined the term "artificial intelligence" to name the field. As AI researchers of the 1960s and 1970s began using computers to recognize images, translate languages, and understand instructions given in natural language rather than code, the idea that computers would eventually develop the ability to speak and think, and hence to do evil, entered mainstream popular culture. The film 2001: A Space Odyssey gave us HAL, and the 1970 film Colossus: The Forbin Project showed a giant computer bringing the world to the brink of nuclear annihilation, a theme that reappeared 13 years later in WarGames. In 1973's Westworld, robots lost control and began killing humans.
When AI research fell far short of its ambitious goals, funding dried up and the "AI winter" set in. Even so, intelligent machines remained popular with science fiction writers through the 1980s and '90s, and it was the science fiction writer Vernor Vinge who popularized the concept of the Singularity. He, the robotics researcher Hans Moravec, and the engineer and entrepreneur Ray Kurzweil all argued that once a computer could independently pursue goals of its own, it would likely be capable of self-reflection, and could therefore modify its own software to make itself smarter. Before long, such a computer would be able to design its own hardware.
As Kurzweil describes it, this would usher in a beautiful new world. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread freely throughout the cosmos.
The opposite of this sunny optimism is easy to find as well. Stephen Hawking has warned that, because humans could not compete with an advanced AI, the Singularity "could spell the end of the human race." After reading Superintelligence, Elon Musk tweeted a warning to the world, and later donated $10 million to the Future of Life Institute, whose mission is "working to mitigate existential risks facing humanity."
No one is suggesting that anything like superintelligence exists today. In fact, we still have nothing resembling general-purpose artificial intelligence, nor even a clear roadmap for achieving it. Recent advances in the field, from Apple's automated assistant Siri to Google's driverless cars, expose the technology's serious limitations: both can be undone by situations they have never encountered before. Artificial neural networks can learn on their own to recognize cats in photos, but they must be shown hundreds of examples, and they still spot cats less accurately than a child does.
This is where skeptics such as Brooks come in. Even when AI is impressive, compared with what earlier computers could do, at tasks like identifying a cat in a picture, the machine has no volition: it has no sense of what cat-ness is, or of what else is happening in the image, the kind of insight humans don't even count as insight. In this view, AI may well produce smart machines, but it would take far more work to reach the scenario Bostrom imagines. And even if that day arrives, intelligence does not necessarily bring sentience. Extrapolating from today's AI to superintelligence, Brooks argues, is like seeing more efficient internal combustion engines and concluding that warp drives are just around the corner. There is absolutely no need to worry about "malevolent AI," he says, at least for the next few hundred years.
Safeguards
Even if superintelligence is a long way off, it may be irresponsible to do nothing. Stuart Russell, a professor of computer science at the University of California, Berkeley, shares Bostrom's concerns. With Peter Norvig, a colleague now at Google, he co-wrote Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for 20 years.
"There are a lot of supposedly smart public intellectuals who just haven't a clue," Russell told me. He pointed out that progress in AI over the past decade has been remarkable, while the public's understanding of it is limited to Moore's law. Today's AI techniques are more fundamental than that: using technologies such as deep learning, computers can build up their own understanding of the world.
Given that Google, Facebook, and other companies are actively developing intelligent "learning" machines, he reasons, "one of the things we ought not to do is press full steam ahead on building a superintelligent machine without giving thought to the potential dangers. It just seems a bit unwise." Russell made an analogy: "It's like fusion research. If you ask a fusion researcher what they do, they'll say they work on containment. If you want unlimited energy, you'd better contain the fusion reaction." In the same way, he says, if you want unlimited intelligence, you'd better figure out how to align computers with human needs.
Bostrom's book is, in effect, a research proposal. A superintelligent AI will be all-powerful, and what it does will depend entirely on us (that is, on its engineers). Like any parent, we must give our child values. And not just any values, but those that serve the best interests of humanity. We are, in essence, telling a god how we wish to be treated. So how do we do that?
Bostrom largely builds on an idea of Eliezer Yudkowsky's called "coherent extrapolated volition": a consensus-derived "best self" extrapolated from all of humanity. We want AI to give us rich, happy, fulfilling lives: to cure back pain, and to help us migrate to Mars. But since humanity has never fully agreed on anything, we will at some point have to make that decision for ourselves, the best decision for humankind as a whole. How, then, do we program those values into a superintelligence? What mathematics could define them? These are the problems Bostrom believes researchers should start solving now. He calls it "the essential task of our age."
For ordinary people, there is no reason yet to lose sleep over scary robots. We have no technology that comes anywhere close to superintelligence. But it is worth stressing that many of the world's largest technology companies are pouring resources into making their computers smarter; a true artificial intelligence would give any of them an incredible advantage. They should also be mindful of the technology's potential downsides, and work to avoid them.
There is an open letter on the website of the Future of Life Institute. Rather than warning of impending disaster, it calls for more research into reaping the benefits of AI "while avoiding potential pitfalls." Its signatories include not only figures from outside the AI field, such as Hawking, Musk, and Bostrom, but also a number of prominent computer scientists, including the leading AI researcher Demis Hassabis.
After all, if these researchers design an artificial intelligence that fails to embody humanity's best values, it will mean they were not smart enough to control their own creations.