This article attempts to put artificial intelligence (AI) in perspective and to review the work that has been done and the achievements that have been made. We survey half a century of accomplishments in the field of artificial intelligence and discuss the recent IBM Watson Jeopardy! Challenge. We also weigh the prospects for artificial intelligence that has not yet reached the human level.
First, we review the importance of search, knowledge representation, and learning in the construction of artificial intelligence systems, and give examples of how appropriate knowledge representations can help solve problems.
Second, we introduce a recurring theme in mythology and literature: attempts to create life or intelligence tend to have dire consequences. Perhaps this should serve as a warning to the artificial intelligence community.
We then illustrate the computer-science concept of an unsolvable problem, that is, a problem for which no solution algorithm exists. We ask whether creating human-level artificial intelligence might be such a problem.
Next, we review half a century of achievements in the field of artificial intelligence.
Then we discuss IBM's Watson system. In 2011, in a widely watched televised match, IBM's Watson computer defeated the two greatest champions in the history of the quiz show Jeopardy! in the Watson Jeopardy! Challenge.
Finally, we review several theories about the creation of life and the explanation of intelligence and consciousness.
Artificial intelligence overview
Earlier, at the start of our journey through artificial intelligence, we said that if you want to design intelligent software, the software needs the following capabilities.
(1) The ability to search.
(2) A language for knowledge representation.
(3) The ability to learn.
In early work, it became apparent that blind search algorithms (i.e., those using no domain knowledge), such as breadth-first search and depth-first search, could not successfully cope with the large search spaces they faced.
As mentioned in this book, a useful guideline is that if you want to design a system for performing a task, first check to see if a similar system already exists in nature.
If it were 1902 and you wanted to design a "flying machine," your attention would naturally turn to birds. The Wright brothers flew successfully in 1903, and it is not surprising that their aircraft had a relatively thin fuselage and two large wings.
Blind search algorithms lack the machinery needed to cope with the large-scale search problems found in artificial intelligence. Human beings, however, are expert-level "problem-solving machines." Newell and Simon recognized this and studied human subjects who were asked to "think aloud" while solving problems.
In 1957, this research led to the General Problem Solver (GPS). GPS incorporated heuristics extracted from its human subjects and successfully solved problems such as the water-jug problem, the missionaries-and-cannibals problem, and the Königsberg bridge problem.
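To make the flavor of such state-space search concrete, here is a minimal sketch (not Newell and Simon's GPS) that solves the missionaries-and-cannibals puzzle with plain breadth-first search. The state encoding and function names are our own illustration; blind search suffices here only because this particular state space is tiny.

# A minimal sketch: solving missionaries-and-cannibals with breadth-first search.
from collections import deque

def safe(m, c):
    """Missionaries are never outnumbered by cannibals on either bank."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    start = (3, 3, 1)            # (missionaries, cannibals, boat) on the left bank
    goal = (0, 0, 0)
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm, dc in moves:
            sign = -1 if b == 1 else 1          # the boat carries people away from its bank
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
                state = (nm, nc, nb)
                if state not in seen:
                    seen.add(state)
                    frontier.append((state, path + [state]))

print(solve())   # prints the sequence of (missionaries, cannibals, boat) states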
In 1736, Leonhard Euler wrote the first paper on graph theory. He showed that the walk over the Königsberg bridges shown in Figure 17.2 is possible as described if and only if the corresponding graph in Figure 17.3 contains a cycle that includes all of the edges and vertices. Euler further concluded that a graph contains such a cycle (now called an Euler circuit) if and only if the degree of every vertex is even; in the Königsberg graph every vertex has odd degree, so no such walk exists.
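Euler's criterion is easy to check mechanically. The following minimal sketch uses our own encoding of the four land masses and seven bridges (not taken from the text): it computes the degree of each vertex and tests whether an Euler circuit can exist.

# Euler's criterion: a connected (multi)graph has an Euler circuit
# if and only if every vertex has even degree.
from collections import Counter

# The Königsberg bridges as an edge list: four land masses A, B, C, D
# joined by seven bridges (an illustrative encoding).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

has_euler_circuit = all(d % 2 == 0 for d in degree.values())
print(degree)               # every vertex has odd degree here
print(has_euler_circuit)    # False: the walk Euler was asked about is impossible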
Clearly, the representation of a problem has a huge impact on how effectively a solution can be found. The guideline above also leads us to two learning paradigms. The human brain (and nervous system) is the most striking example of a natural learning system.
The second paradigm, evolution, may be less obvious. Darwin described how plant and animal species adapt to their environment in order to survive; here it is the species, rather than the individual, that learns. Chapter 12 outlines two evolutionary learning methods, genetic algorithms and genetic programming, both of which have been applied successfully to problems ranging from scheduling to optimization.
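As a flavor of how a genetic algorithm learns, here is a minimal sketch that evolves bit strings toward the all-ones string (the standard "OneMax" toy problem). The population size, mutation rate, and other parameters are illustrative choices, not values from Chapter 12.

# A minimal genetic-algorithm sketch: evolve bit strings toward all ones.
import random

GENES, POP, GENERATIONS, MUTATION = 20, 30, 60, 0.02

def fitness(bits):
    return sum(bits)                       # count of 1-bits

def crossover(a, b):
    cut = random.randint(1, GENES - 1)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - g if random.random() < MUTATION else g for g in bits]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]        # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), best)                 # typically 20, i.e. all ones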
Prometheus returns
In Greek mythology, Prometheus was a god who stole fire from the heavens and brought it to humankind. Some accounts also credit him with fashioning mankind from clay. The theme of creating life from inanimate material is ubiquitous in literature. Perhaps the creepiest account appears in Mary Shelley's novel Frankenstein; or, The Modern Prometheus.
The reader is no doubt familiar with the story of the scientist who creates life and is then horrified by his own creation. In the 1931 film directed by James Whale, Boris Karloff played the monster.
The first edition of Shelley's novel was published in 1818, when the Industrial Revolution was in full swing. Steam power was drastically transforming manufacturing and the textile industry, and the invention of the telegraph made long-distance communication effectively instantaneous.
Many people feel that the aftermath of this revolution has not been entirely beneficial. Our dependence on steam and coal, then oil, and more recently nuclear energy has seriously polluted the planet's land, water, and air. Others argue that the Industrial Revolution fostered a degenerate materialism.
Literary critics have observed, quite profoundly, that the moral of Frankenstein is that society must be wary of its attempts to control nature. As we continue to gain mastery over intelligence throughout the 21st century, the artificial intelligence community may need to keep this warning constantly in mind.
One of the authors (S. L.) saw the film as a child; to this day, he still sleeps with a light on.
Computer science is the field concerned with information and computation, with a focus on algorithmic solutions to problems. The twentieth century brought a measure of humility to this young discipline as fundamental limits on problem solvability were discovered. That is, there are problems for which no algorithmic solution exists. The most famous of these is the so-called halting problem.
Given an arbitrary program P running on arbitrary data w, will P(w) ever halt? Consider, for example, the four-color problem, long one of the best-known open questions in graph theory. It asks: when coloring a map, are four colors always enough to ensure that no two adjacent regions share the same color? In 1976, Appel and Haken answered this question in the affirmative.
Their proof relied on computer programs that ran for hundreds of hours. It would be helpful if the operating system running such a program could predict whether the program will eventually stop. The halting problem tells us that such foreknowledge is not always possible.
Earlier, this book mentioned Alan Turing. In 1936 he was studying which functions are computable. Addition, for example, is a computable function: a step-by-step procedure can be given that, on integer inputs X and Y, produces their sum X + Y after a finite number of steps.
He devised a model of computation now known as the Turing machine (see Figure 1.2). A Turing machine consists of the following three parts (a minimal simulator sketch follows the list).
(1) An input/output tape. The problem instance is written on the tape as input, and the result is also written on the tape. There are various Turing machine models; Figure 1.2 shows one with a two-way unbounded tape. The tape is divided into cells, each of which can hold one symbol, and every cell initially holds the blank symbol (B).
(2) A finite control, which embodies the algorithm (i.e., the step-by-step procedure for solving the problem).
(3) A read/write head, which reads the symbol on the tape and writes symbols back to the tape. It can move left or right.
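The three parts above map directly onto a few lines of code. The following is a minimal sketch, not any machine from the text: the tape is represented as a sparse dictionary, the finite control is a transition table, and the example machine (our own) simply flips every bit of its input and then halts.

# A minimal Turing machine simulator: unbounded tape, finite control, read/write head.
def run_turing_machine(program, tape_input, blank="B", start="q0", halt="halt"):
    tape = dict(enumerate(tape_input))     # two-way unbounded tape as a sparse dict
    head, state = 0, start
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = program[(state, symbol)]   # the finite control decides
        tape[head] = write                               # the head writes a symbol...
        head += 1 if move == "R" else -1                 # ...and moves left or right
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1))

# Transition table: in state q0, flip the symbol and move right; halt on a blank.
flip_bits = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "B"): ("B", "R", "halt"),
}

print(run_turing_machine(flip_bits, "0110"))   # -> "1001B" (trailing blank included)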
Turing also introduced the concept of a universal Turing machine, a Turing machine that can run the programs of other Turing machines and thereby simulate the behavior of any "ordinary" Turing machine. He proved that it is impossible to decide, for an arbitrary Turing machine T and arbitrary input w, whether T(w) will halt. This is the halting problem for Turing machines.
The more general version of the problem, the halting problem for arbitrary programs, is undecidable as well. By the widely accepted Church-Turing thesis, Turing machines and digital computers have the same computational power. As a result, most computer scientists accept that a problem that cannot be solved on a Turing machine cannot be solved algorithmically at all.
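The heart of Turing's argument can be sketched in a few lines. Suppose, hypothetically, that a perfect decider halts(program, data) existed (the names below are ours, purely for illustration). Then the following self-referential program could be built, and it can neither halt nor run forever on its own source, so no such decider can exist.

# A sketch of the diagonalization argument behind the halting problem.
def halts(program, data):
    """Hypothetical perfect halting decider; no correct implementation can exist."""
    raise NotImplementedError

def troublemaker(program):
    if halts(program, program):   # would this program halt when fed itself?
        while True:               # ...then do the opposite: loop forever
            pass
    return "done"                 # ...otherwise halt immediately

# troublemaker(troublemaker) halts if and only if it does not halt: a contradiction.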
Computation therefore has fundamental limitations, and artificial intelligence, as a subdiscipline of computer science, inherits them. What we want to know is whether the creation of human-level artificial intelligence runs up against these limitations.
Achievements of artificial intelligence
We will return to the feasibility of creating human-level artificial intelligence. First, let us briefly review the achievements of artificial intelligence described earlier.
Search
A* search is built into video-game design, making game characters' movements more realistic (a minimal sketch follows this list).
MapQuest, Google Maps, and Yahoo Maps use heuristic search; the technology is built into many GPS units and smartphone applications.
Hopfield networks and evolutionary methods find good, and sometimes near-optimal, approximate solutions to NP-complete problems such as the traveling salesman problem (TSP).
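Here is a minimal A* sketch on a small grid, using the Manhattan distance as the heuristic, the kind of pathfinding routinely used for game characters. The grid layout, start, and goal are illustrative.

# A minimal A* sketch on a grid with obstacles ('#') and a Manhattan-distance heuristic.
import heapq

GRID = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]

def neighbors(pos):
    r, c = pos
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def a_star(start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible heuristic
    frontier = [(h(start), 0, start, [start])]                # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbors(node):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

print(a_star((0, 0), (4, 7)))   # shortest path around the obstacles, if one exists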
Game playing
Minimax evaluation allows computers to play simpler games such as tic-tac-toe and Nim.
Aided by heuristics and other machine learning tools, minimax evaluation with alpha-beta pruning allows computers to play tournament-level checkers and chess (Deeper Blue defeated world chess champion Garry Kasparov); a minimal sketch follows this list.
There are tournament-level programs for Othello (Logistello, 1997), backgammon (TD-Gammon, 1992), and bridge (Jack and WBridge5, in the 2000s), as well as "proficient" poker players.
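The following minimal sketch shows minimax with alpha-beta pruning over a game tree given explicitly as nested lists whose leaves are static evaluation scores. The tree is invented for illustration; a real chess or checkers program would generate moves and evaluate positions instead.

# Minimax with alpha-beta pruning over an explicit game tree (leaves are scores).
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):        # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # beta cutoff: opponent avoids this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                     # alpha cutoff
            break
    return value

tree = [[3, 5, 2], [8, [1, 4]], [0, 7]]       # an illustrative small game tree
print(alphabeta(tree, maximizing=True))       # value of the root for the maximizer: 4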
Fuzzy logic
Handheld camcorders automatically compensate for unsteady hand movements.
Vehicle traction-control systems.
Controls for digital cameras, washing machines, and other household appliances.
Expert system
Knowledge-intensive software with built-in reasoning and explanation facilities, so-called expert systems (ES), helps consumers choose the right model, navigate online sites, shop, and more (a minimal rule-based sketch follows this list).
ES are also used for analysis, control, diagnosis (what disease does the patient have?), instruction, and prediction (where should we drill for oil?).
ES are used in many fields, such as pharmaceuticals, chemical analysis, and computer configuration.
As long as ES are used to assist rather than replace humans, counting them among the greatest achievements of artificial intelligence is uncontroversial.
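A minimal forward-chaining sketch conveys the kind of rule-based reasoning an expert system performs: known facts are matched against if-then rules, and new conclusions are added until nothing more can be derived. The facts and rules below are invented for illustration and are not from any real diagnostic system.

# A minimal forward-chaining rule engine with invented facts and rules.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "recommend_chest_xray"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                     # keep firing rules until nothing new is added
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}))
# -> includes 'recommend_chest_xray', derived through three rule firings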
Neural network
Lexus cars combine backup cameras, sonar equipment, and neural networks; with these technologies, the car can parallel-park automatically.
Mercedes and other cars have automatic stopping control for when the vehicle gets too close to another vehicle or object.
The Google car is almost completely autonomous, although a person must be in the car while it is driving.
Optical character readers automatically route large volumes of mail.
Automatic speech recognition systems are in widespread use, and software agents routinely help us with credit-card and banking transactions.
At airports, software raises an automatic security alert when a person on the "no-fly" list is detected.
Neural networks assist in medical diagnosis and economic forecasting.
Evolutionary approach
Orbital scheduling of telecommunications satellites to prevent loss of communication.
Optimization software for antenna and very large scale integration (VLSI) circuit design.
Data mining software makes data more valuable to companies.
Natural Language Processing (NLP)
Conversational agents provide travel information to individuals and assist with booking hotels and the like.
GPS systems routinely give the user voice directions, such as "turn left at the next intersection." Some smartphones have apps that let people speak a request: "Where is the nearest coffee shop that serves cappuccino?"
Web queries allow cross-language information retrieval, with translation performed when needed.
Interactive agents provide spoken assistance to children who are learning to read.
Machine learning applications, together with neural networks, natural language processing (see Chapter 13), speech understanding, and planning, have driven significant advances in robotics.
Overall, this is not a bad record for a computer science subdiscipline that is only beginning its second half-century.
Application window
In 1998, Stanford University graduate students Larry Page and Sergey Brin founded Google. Google began as a search engine called BackRub, which used links to evaluate the importance of web pages.
The name Google is a play on the word "googol." The search engine was a huge success and quickly became the most powerful, best-known, and most mainstream search engine on the planet. Over the years, Google has also developed the equally successful email system Gmail and the popular public video site YouTube. Google has also developed a driverless car.
One of the engineers on Google's driverless car is Dmitri Dolgov, and the project is headed by Dr. Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View.
The Google car has been tested for several years and will continue to be tested experimentally over the next few years. Although mass production of driverless cars appears to be some years away, technologists believe that in the near future they will be as commonplace as mobile phones and GPS systems.
Google believes the technology may not be profitable for many years, but it foresees substantial revenue from selling information and navigation services to other driverless-car manufacturers.
Google cars use artificial intelligence techniques, such as laser sensing of nearby objects, road markings, and signs, to make the decisions a human driver would make, such as steering around obstacles or stopping when a pedestrian is seen.
By law, a person must sit behind the steering wheel in case problems arise, and technicians must monitor the navigation system to ensure safe, accident-free testing. Different driving styles can be selected for different drivers, such as "cautious," "defensive," and "aggressive" driving.
Robots usually react faster than humans. Thanks to its sensors and instruments, the car has full awareness of its surroundings. It does not get distracted, and it is free of the other factors that commonly cause accidents, such as fatigue, drugs, and carelessness. The engineers' goal is to make these driverless cars more reliable than human drivers, since human error is the cause of many accidents.
In addition, the software used by these driverless cars must be carefully tested and kept free of viruses and malware. Other considerations are fuel efficiency and road-space efficiency: in theory, driverless cars will not have accidents, so cars could travel more densely packed on the road.
Some Google driverless cars have logged more than 1,600 kilometers without any accident or human intervention, and more than 100,000 kilometers with only occasional human corrections.
One test of a Google driverless car began just off the Google campus near San Francisco. The car uses a variety of sensors with a range of roughly 182 meters and follows a route using the Global Positioning System (GPS) built into the car. It travels at the designated California speed limit of about 105 kilometers per hour.
Just like a human driver, the car slows down before a turn and then accelerates gently out of it. A device on the roof provides a detailed map of the environment and its surroundings, so the car knows which roads to take, which to avoid, and which are dead ends.
It can travel for miles on a busy highway and leave the highway without incident. It can also drive through parks, stop at red lights and stop signs, and interact with pedestrians: if people appear, it waits for them to move. It has a voice system that announces its maneuvers to the people in the car.
The driver is also alerted when the artificial intelligence system detects a problem with a sensor. The system works to prevent accidents and uses its detection system to report what has happened. The driver can retake control of the car by pressing a red button near the right hand, touching the brakes, or turning the steering wheel.
When the car drives itself under automatic control, it is said to be in cruise mode, and the people in the car can take their hands off the wheel. In effect, it becomes a kind of personal transit, free of the crowding, the constant looking around, and the other factors that distract ordinary drivers.
However, legal issues remain, such as who is responsible in an accident. The states that allow driverless-car testing do not yet have laws covering accidents that occur while a car is driving itself. Google has found that operating a driverless car is legal as long as someone in the vehicle can take control if anything goes wrong.
Google cars could reduce the demand for private cars, thereby reducing traffic and giving people more usable land without the need to build ever-wider roads.
Recently, Google has been building experimental electric vehicles without conventional controls; the occupant does nothing except start and stop the vehicle. People can use a smartphone app to summon the car, have it drive itself to their location, and be taken to their destination.
There is also a function called traffic-jam assist, which allows the driverless car to automatically follow the vehicle ahead while driving.
Google's plan for driverless cars calls for at least 100 new electric-powered prototypes. Google's team will limit them to urban and suburban travel at speeds of about 40 kilometers per hour. Testing will be conducted by Google employees, initially in small, closed areas. Naturally, it will take a while to convince regulators that the cars are safe and to persuade people to accept riding in driverless cars.
Artificial intelligence in the 21st century
Let us go back to the unanswered question raised above: does the creation of human-level artificial intelligence lie beyond these fundamental limits? Let us first consider the origin of human intelligence, and then the origin of life itself.
The famous British scientist Richard Dawkins has addressed the latter question, finding insight in Darwin's theory of evolution. Four billion years ago, of course, there were no animals or plants on the planet, just a "primordial soup" of basic atoms.
Dawkins argues that Darwin's theory can be extended to the "survival of the stable": in other words, stable atoms (and molecules) were more likely to persist on the ancient Earth. He further speculates that the early planet was rich in water, carbon dioxide, methane, and ammonia, from which amino acids (the complex molecules that are the building blocks of proteins) could form.
Proteins are precursors of life as we know it. Dawkins envisions that the next step on the long road to life on this planet was the accidental creation of a so-called "replicator," a molecule with the remarkable property of being able to copy itself faithfully. In that primitive environment, he argues, replicators able to copy themselves quickly and accurately were the stable ones.
The replication process itself requires a steady supply of basic "raw materials," and different replicators no doubt competed for the available water, carbon dioxide, methane, and ammonia. This evolutionary process has run for some four billion years, and Dawkins argues that after this long evolutionary journey we can find the replicators' successors in the plants and animals that inhabit the planet today: they are the genes.
Having offered this account of the possible origin of life on Earth, Dawkins continues his remarkable narrative by explaining how genes work to ensure their own survival. For the past 600 million years they have been at work much like fictional elves.
They have been shaping eyes, ears, lungs, and the other organs from which the vessel of life, the body, is built. In this account, the bodies of animals and plants appear to be little more than protective shells for the survival of the all-important genes. Recently, while reading Dawkins' work closely, one of the authors (S. L.) was reminded of a scene in the Star Wars series.
In that scene, the enemy forces place their soldiers inside giant-legged robotic fighting machines that form a protective shell around each soldier. Even if we accept Dawkins's theory, a question remains: where does human consciousness come from?
Dawkins might answer that animals endowed with consciousness (again produced by natural selection) have an advantage, achieving the relative stability that helps ensure survival.
Gerald Edelman is a biologist and Nobel laureate. He proposed a biological theory of consciousness, also grounded in Darwinism, holding that consciousness and mind are purely physiological phenomena: groups of neurons self-organize into many complex and adaptive modules.
Edelman argues that the brain is functionally plastic; that is, because the human genome does not have enough coding capacity to specify the brain's structure completely, a great deal of the brain's organization is self-directed.
In physics, a unified field theory would be a "theory of everything," attempting to unify the forces found in nature: gravity, the electromagnetic force, and the strong and weak nuclear forces.
Marvin Minsky tackled a broader problem in The Society of Mind. He asked: "How is the brain organized?" and "How does cognition arise?" As Dawkins tells us, the human brain evolved over hundreds of millions of years; no unified field theory can simply explain the workings of the complex organ inside the human skull.
Building an intelligence, Minsky suggests, is like assembling an orchestra without a conductor. The instruments are agents, but instead of playing music they interpret the world: some agents help understand language, others interpret visual scenes, and still others supply common sense.
None of this amounts to anything unless the agents communicate effectively with one another. Minsky supposes that at any point in time an individual's mental state can be read off as the subset of agents that are currently active. Perhaps artificial intelligence is still too young a field to be ready for a "unified field theory" of intelligence like Minsky's.
However, when artificial intelligence matures, Minsky's Society of Mind may well play a prominent role in it (a loose sketch of the idea follows).
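As a loose illustration of Minsky's picture, not his own formulation, here is a minimal sketch: many simple agents each react to part of the current situation, and the "mental state" is read off as the subset of agents active at that moment. The agents, triggers, and world description are invented for illustration.

# A loose Society of Mind sketch: mental state = the set of currently active agents.
agents = {
    "vision":   lambda world: "object_ahead" in world,
    "language": lambda world: "heard_speech" in world,
    "hunger":   lambda world: world.get("energy", 1.0) < 0.3,
    "planner":  lambda world: "object_ahead" in world or world.get("energy", 1.0) < 0.3,
}

def mental_state(world):
    """Return the set of agents that are active for the current situation."""
    return {name for name, is_active in agents.items() if is_active(world)}

print(mental_state({"object_ahead": True, "energy": 0.2}))
# -> {'vision', 'hunger', 'planner'} (in some order): no single agent is "the mind"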
By 2015, the functioning of individual neurons was well understood at the biological and chemical level. What remains unknown is how groups of neurons process sensory data, encode experience, and understand language, and, more generally, how they give rise to cognition and consciousness. Current research uses X-ray and other scanning techniques to understand the brain at the level of its functional modules. Kurzweil predicts that by the middle of the 21st century we will have a complete, architecture-level understanding of the human brain.
He further speculates that the continued miniaturization of computer components will make it feasible to implement a complete brain in hardware, an implementation that may require billions of individual neurons and trillions of connections. Perhaps at that point we will have enough computing power to achieve human-level artificial intelligence.
It would be wise for us to remember the "reward" Prometheus earned for making human beings fully conscious: he was chained to a rock so that an eagle could feast on his liver, and each day the liver grew back and the eagle returned.
Science fiction has sketched countless scenarios in which humans create human-level artificial intelligence. We hope that if artificial intelligence keeps pursuing this lofty goal, the reward will be more satisfying than the one given to Prometheus.