Ten masters of artificial intelligence who have changed the world

In 1948, Alan Turing, the father of computer science, and Claude Shannon, the father of information theory, independently developed the basic algorithms still used in today's chess programs. Herbert Simon, a Nobel laureate in economics and a professor at Carnegie Mellon University, predicted that "in 10 years, computers will become chess champions" (though, as it turned out, he and many who followed were wrong about the timing). After many foundational developments in chess programming, Newell, Simon, and Shaw produced the first serious and successful work in 1959. In 1967, Richard Greenblatt of the Massachusetts Institute of Technology developed the first club-level program, MacHack, which was able to play at the 1600 (class B) level. Greenblatt allowed his program to play only against people.

Artificial intelligence is a unique discipline that allows us to explore the many possibilities of future life. In the short history of artificial intelligence, its approach has been incorporated into the standard technology of computer science. Examples of this include search technology and expert systems generated in artificial intelligence research, and these technologies are now embedded in many control systems, financial systems, and Web-based applications.

Currently, many artificial intelligence systems are used to control financial decisions, such as buying and selling stocks. These systems use a variety of artificial intelligence technologies such as neural networks, genetic algorithms, and expert systems. Internet-based agents search the World Wide Web for news articles of interest to users.

Technological advances have significantly affected our lives, and this trend will undoubtedly continue. Indeed, in the next millennium, the question of what it means to be human is likely to become a focus of discussion. How many of the following experts, whose work in artificial intelligence has changed the world, do you know?

Alan Turing

Alan Turing (1912–1954) was a British mathematician and one of the most prominent figures in the history of computer science. Students of artificial intelligence, computer science, and cryptography will be familiar with his contributions. His contribution to artificial intelligence lies in the well-known Turing test, developed as a test for machine intelligence, through which he tried to address controversial questions such as "Can a computer be intelligent?" In theoretical computer science, entire courses study the computational model of the Turing machine. The Turing machine is a mathematical model that captures the essence of computation; it was designed to answer the question "What functions are computable?" The reader should appreciate that Turing was essentially discussing the concept of solving problems by algorithm seven or eight years before the first digital computers appeared.

You may have seen films depicting the Battle of Britain in the Second World War. Between 1940 and 1944, German aircraft dropped nearly 200,000 tons of bombs on the United Kingdom. At Bletchley Park outside London, Turing led a team of mathematicians in breaking the German cipher known as the "Enigma code." They eventually succeeded, deciphering the encrypted military commands sent to German ships and aircraft. The success of Turing's team played a decisive role in the Allied victory.

Alan Turing and Artificial Intelligence

Turing invented the stored-program concept, which is the foundation of all modern computers. In the mid-1930s, he described an abstract computing machine with unlimited storage: a read/write head (scanner) moves back and forth over the storage tape, reading and writing symbols according to a program that is itself stored on the tape. This concept is called the universal Turing machine.
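The tape-and-head model described above can be sketched in a few lines of code. This is a minimal illustration, not Turing's original notation: the transition table, state names, and the bit-flipping example program are all invented here for demonstration.

```python
# A minimal sketch of a Turing machine: a tape of symbols, a head that
# moves left or right, and a transition table mapping (state, symbol)
# to (new state, symbol to write, direction). The example program
# simply flips every bit and halts at the first blank.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = defaultdict(lambda: blank, enumerate(tape))  # infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Bit-flipping machine: scan right, invert each symbol, halt on blank.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_rules, "1011"))  # prints 0100
```

A "universal" machine is simply one whose rule table interprets another machine's rule table stored on the tape; the sketch above runs one fixed program, which is enough to show the mechanism.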

Turing also offered his own insights into how the nervous system might be organized to produce brain function. Craig Webster, in his article, discusses Turing's paper "Computing Machinery and Intelligence" (eventually published in Mind in 1950), introducing Turing's Type B networks as "unorganized machines"; networks of this type can be found in the cerebral cortex of human infants.

Turing discussed two types of unorganized machines, called Type A and Type B. A Type A machine consists of NAND gates, where each node has two states (represented by 0 or 1), two inputs, and any number of outputs. Three Type A nodes, interconnected in a specific way and exchanging binary pulses, form a Type B node. Turing recognized the possibility of training such networks and the need for a self-stimulating feedback loop. He also believed that a "genetic search" would be needed to train a Type B network until a satisfactory value (or pattern) is found.
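The reason NAND nodes suffice as the sole building block is that NAND is logically universal: NOT, AND, and OR can all be composed from it. A small sketch (the function names are illustrative, not Turing's):

```python
# NAND is universal: every Boolean function can be built from it alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT from a single NAND node fed its input twice
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

# Print the truth tables for the derived gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b))
```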

At Bletchley Park, Turing often discussed with Donald Michie (his colleague and follower) how a machine might learn and solve new problems from experience. This was later called heuristic problem solving and machine learning.

Turing also had deep insight into the use of chess as a test platform for artificial intelligence. Although the computing devices of his time were not powerful enough to support a strong chess program, he recognized the challenge that chess presents (with roughly 10^120 possible games). As mentioned earlier, his 1948 work on machine intelligence laid the foundation for all the chess programs that followed, leading to the development of master-level machines that could compete with world champions in the 1990s.

John McCarthy

John McCarthy (1927–2011) coined the term "artificial intelligence" at the Dartmouth conference in 1956. Without him, there might be no textbooks on artificial intelligence at all.

Professor McCarthy worked at the Massachusetts Institute of Technology, Dartmouth College, Princeton University, and Stanford University, where he was a professor emeritus.

He invented the LISP programming language. Over the years, especially in the United States, LISP became the standard language for developing artificial intelligence programs. McCarthy had considerable mathematical talent: he received a bachelor's degree in mathematics from the California Institute of Technology in 1948 and, in 1951, under the direction of Solomon Lefschetz, a Ph.D. in mathematics from Princeton University.

Professor McCarthy had a wide range of interests, and his contributions cover many areas of artificial intelligence. He published in fields including logic, natural language processing, computer chess, cognition, counterfactuals, common sense, and philosophical questions viewed from the standpoint of artificial intelligence.

As a founding father of artificial intelligence, McCarthy often commented in his papers (such as "Some Expert Systems Need Common Sense" (1984) and "Free Will Even for Robots") on what artificial intelligence systems need in order to be practical and effective.

In recognition of his contributions to artificial intelligence, McCarthy won the Turing Award in 1971. His other honors include the National Medal of Science in mathematical, statistical, and computational sciences and the Benjamin Franklin Medal in Computer and Cognitive Science.

George Boole

For a computer program to display any kind of intelligence, it must first be able to reason. The British mathematician George Boole (1815–1864) established a mathematical framework representing the laws of human logic. His work comprises about 50 papers. Among his achievements is his well-known theory of differential equations, which appeared in 1859, followed in 1860 by his treatise on the calculus of finite differences, a sequel to the earlier work. In Laws of Thought, perhaps his greatest achievement, Boole gave a general method of symbolic reasoning: given a logical proposition with any number of terms, he could handle the premises purely symbolically and show how to draw sound logical inferences.

In the second part of Laws of Thought, Boole attempted to devise a general method for transforming the prior probabilities of a system of events in order to determine the posterior probability of any other event logically related to the given events.

The algebraic language (or symbolism) he created allows variables to interact (or form relationships) based on only two states: true and false. As it is known today, Boolean algebra has three logical operators: AND, OR, and NOT. Combining Boolean algebra with rules of logic allows us to prove things "automatically"; a machine capable of doing this can therefore, in a sense, reason.
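The "automatic" proving mentioned above can be sketched by brute force over Boole's two truth values: a conclusion follows from the premises if it holds under every assignment that satisfies them. The formulas below (a modus ponens example) are illustrative, not Boole's own.

```python
# A minimal sketch of mechanical reasoning in two-valued logic:
# enumerate every truth assignment and check that the conclusion
# holds whenever all premises do.
from itertools import product

def entails(premises, conclusion, variables):
    """True if the conclusion is true under every assignment
    that makes all the premises true."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel
    return True

# Modus ponens: from (p implies q) and p, infer q.
premises = [lambda e: (not e["p"]) or e["q"],  # p -> q
            lambda e: e["p"]]
print(entails(premises, lambda e: e["q"], ["p", "q"]))      # prints True
print(entails(premises, lambda e: not e["q"], ["p", "q"]))  # prints False
```

Enumeration is exponential in the number of variables, but for small propositions it is exactly the kind of purely symbolic inference Boole envisioned.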

More than two centuries later, Kurt Gödel (1931) proved that Leibniz's goal was overly optimistic: any branch of mathematics, using only the rules and axioms of that branch, will always contain propositions that can be neither proved nor disproved, even if the system is consistent. The great French philosopher René Descartes approached the problem of physical reality through cognitive introspection in his Meditations. He proved his own existence through the reality of thought, arriving at the famous "Cogito, ergo sum" ("I think, therefore I am"). In this way, Descartes and the philosophers who followed him established separate worlds of mind and matter. Eventually this gave way to the contemporary idea that mind and body are essentially the same.

Edsger Dijkstra

Edsger Dijkstra (1930–2002) was a Dutch computer scientist who first studied theoretical physics, but whose best-known achievements concern good programming style (such as structured programming), good educational technique, writing, and algorithms. An algorithm is named after him: the algorithm for finding the shortest path to a target in a graph.
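The shortest-path algorithm that bears his name can be sketched briefly. This version uses a binary heap (a common modern refinement, not Dijkstra's 1959 formulation), and the example graph is invented for illustration.

```python
# Dijkstra's shortest-path algorithm with a priority queue:
# repeatedly settle the unvisited node with the smallest known
# distance and relax its outgoing edges.
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbor, weight) with non-negative
    weights; returns shortest distances from source to each
    reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled cheaper
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Note the algorithm assumes non-negative edge weights; with negative edges a settled node could later be improved, breaking the greedy argument.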

He made important contributions to the development of programming languages, for which he won the Turing Award in 1972, and from 1984 to 2000 he held the Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin. He liked structured languages such as Algol-60 (which he helped develop) and disliked teaching BASIC. His writing earned him great distinction; one example is his famous letter "Go To Statement Considered Harmful" (1968), addressed to the editor of Communications of the ACM.

From the 1970s onward, most of his work concerned the formal verification of programs. He wanted verification to be done with elegant mathematics rather than with proofs of correctness whose complexity quickly becomes unwieldy. Dijkstra wrote more than 1,300 "EWDs" (his initials): handwritten personal notes, written for himself, that he later circulated to others and that have since been made available for publication.

Shortly before his death, he received the ACM PODC Influential Paper Award in Distributed Computing in recognition of his work; in his honor, the award was renamed the Dijkstra Prize.

Arthur Samuel

In 1952, Arthur Samuel wrote the first version of his checkers program. When programming the game for the IBM 704, Samuel's main interest was clearly to develop a checkers program that demonstrated machine learning. The importance of Samuel's early papers and of his work on checkers lies not in whether the program's results or procedures succeeded; nonetheless, after the program defeated the champion Robert Nealey in a single game, the news was exaggerated around the world.

The significance of this work is that it is seen as an early model of the research and application of sound artificial intelligence techniques. Samuel's work represents some of the earliest research in the field of machine learning. Samuel had considered the possibility of learning the game using neural networks, but decided to adopt a more organized and structured approach.

Dana Nau, a researcher in game theory and automated planning, is known for discovering "pathological" games in which, contrary to intuition, searching further ahead can lead to poorer decisions.

Dana Nau

Dana Nau (born 1951) is a professor in the Department of Computer Science and the Institute for Systems Research (ISR) at the University of Maryland. His research in automated planning and game theory led him to discover such "pathological" games, and he has made great contributions to this theory and to its applications in automated planning. He and his students have won numerous awards for algorithms developed in AI planning, manufacturing planning, and zero-sum and non-zero-sum games.

His SHOP and SHOP2 planning systems have been downloaded more than 13,000 times for thousands of projects worldwide. Dana has published more than 300 papers, several of which won Best Paper awards, and he co-authored Automated Planning: Theory and Practice. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). In addition to his professorship at the University of Maryland, Dana has held positions in the Institute for Advanced Computer Studies (UMIACS) and the Department of Mechanical Engineering, as well as in the Laboratory for Computational Cultural Dynamics (LCCD).

Richard Korf studies problem solving, heuristic search, and planning in artificial intelligence. He discovered iterative-deepening depth-first search, a method similar to progressive deepening, which is the theme of the next section. See the sidebar to learn more about Dr. Korf.

Richard Korf

Richard Korf (b. 1955) is a professor in the Computer Science Department at the University of California, Los Angeles. He received a bachelor's degree from MIT in 1977, and a master's degree and doctorate in computer science from Carnegie Mellon University in 1980 and 1983, respectively. From 1983 to 1985, he was the Herbert M. Singer Assistant Professor of Computer Science at Columbia University. His research areas are problem solving, heuristic search, and planning in artificial intelligence.

Of particular note, in 1985 he discovered iterative deepening, a method that increases the efficiency of depth-first search. He also found the first optimal solutions to Rubik's Cube in 1997. He is the author of "Learning to Solve Problems by Searching for Macro-Operators" (Pitman, 1985) and serves on the editorial boards of the journals Artificial Intelligence and Applied Intelligence. Dr. Korf received the 1985 IBM Faculty Development Award, the 1986 NSF Presidential Young Investigator Award, the UCLA Computer Science Department Distinguished Teaching Award (1989), and the 2005 Lockheed Martin Excellence in Teaching Award. He is a Fellow of the American Association for Artificial Intelligence.
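Iterative deepening combines the low memory use of depth-first search with the shallowest-solution guarantee of breadth-first search by running depth-limited searches with an increasing bound. A minimal sketch (the example graph is invented for illustration):

```python
# Iterative-deepening depth-first search: run depth-limited DFS with
# limits 0, 1, 2, ... until the goal is found. Memory stays linear in
# the depth, yet the first solution found is a shallowest one.
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:  # avoid cycles along the current path
            found = depth_limited(graph, child, goal, limit - 1,
                                  path + [child])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"s": ["a", "b"], "a": ["c"], "b": ["g"], "c": ["g"]}
print(iddfs(graph, "s", "g"))  # ['s', 'b', 'g']
```

Re-expanding shallow nodes on every pass looks wasteful, but because a tree's deepest level dominates its size, the total overhead is only a constant factor for reasonable branching factors.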

One of the founders of AI

Marvin Minsky

Since the Dartmouth Conference in 1956, Minsky (born in 1927) has been one of the founders of AI.

In 1950, he received a bachelor's degree in mathematics from Harvard, and in 1954 a doctorate in mathematics from Princeton. His area of expertise, however, is cognitive science, to which he contributed at MIT from 1958 onward.

His devotion to the field continued through 2006, the 50th anniversary of the Dartmouth conference at which it was first conceived. In 2003, the MIT Artificial Intelligence Laboratory, which Minsky co-founded, became part of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Minsky received the Turing Award in 1969, the Japan Prize in 1990, the Research Excellence Award of the International Joint Conference on Artificial Intelligence in 1991, and the Benjamin Franklin Medal from the Franklin Institute in 2001. He is one of the great pioneers and profound thinkers of artificial intelligence. He developed frame theory from the perspectives of mathematics, psychology, and computer science (see Section 6.8) and made many other important contributions to AI. In recent years, he has continued to work at the MIT Media Lab.

The Society of Mind

In 1986, Marvin Minsky made a milestone contribution: his book The Society of Mind opened new doors for thinking about and researching intelligence. A review of the book, which highlights the following points, can be found on the emcp official website.

Minsky's theory holds that the mind is composed of a large number of semi-autonomous, intricately connected agents that are themselves mindless. As Minsky said:

"This book tries to explain how minds work. How can intelligence emerge from non-intelligence? To answer that, we will show how a mind can be built from many little parts, each mindless by itself."[43]

In Minsky's system, the mind emerges from many smaller processes, which he calls "agents." Each agent by itself can perform only a simple task, but when agents join together to form societies, intelligence arises "in a very special way." In Minsky's view, the brain is a very complicated machine.

Imagine replacing every cell in the brain with a computer chip designed to perform the same function as the corresponding brain agent, using exactly the same connections as in the brain. Minsky argued that there is no reason to doubt that, since the replacement machine embodies all the same processes and memories, it would think and feel just as you do; in a sense, it would be you, with all your capabilities.

At the time of Minsky's landmark work, artificial intelligence systems were criticized for failing to display common-sense knowledge. On this point, he had the following to say:

"The ways we anticipate, imagine, plan, predict, and prevent involve thousands, perhaps millions, of small processes. Yet all of these happen automatically, so we regard them as 'ordinary common sense'."

Since the late 1980s, Rodney Brooks has been building robots based on the subsumption architecture. He believes that intelligent behavior emerges from the interaction of organized, relatively simple behaviors. The subsumption architecture is a framework for building robot control systems from a set of task-achieving behaviors. Each behavior transforms sensor-based input into action-oriented output via a finite state machine, and a simple set of condition-action production rules defines each finite state machine.

Brooks' systems include no global knowledge, but they do include a hierarchy of levels and feedback between the levels of the architecture. Brooks extends a system's capabilities by adding levels, and behavior at the higher levels builds on the design and testing of the levels below. Experiments reveal the best design for consistent behavior across layers and determine the appropriate communication between them. The simplicity of the subsumption architecture has not prevented Brooks from succeeding in a number of applications.
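The layered condition-action scheme described above can be sketched in a few lines. This is only a toy controller in the spirit of the subsumption idea, not Brooks' implementation; the sensor fields, behavior names, and actions are all invented for illustration.

```python
# A minimal subsumption-style controller: behaviors are ordered by
# priority, and a higher layer suppresses ("subsumes") the layers
# below it whenever its triggering condition fires.
def avoid(sensors):          # highest priority: do not hit obstacles
    if sensors["obstacle"]:
        return "turn-left"

def seek_light(sensors):     # middle layer: head toward light
    if sensors["light-ahead"]:
        return "forward"

def wander(sensors):         # lowest layer: default exploration
    return "forward-random"

LAYERS = [avoid, seek_light, wander]  # ordered highest to lowest

def control(sensors):
    """Return the action of the highest layer whose condition fires."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control({"obstacle": True, "light-ahead": True}))    # turn-left
print(control({"obstacle": False, "light-ahead": True}))   # forward
print(control({"obstacle": False, "light-ahead": False}))  # forward-random
```

Note there is no world model anywhere: each layer maps perception directly to action, and capability grows simply by adding layers to the list.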

Rodney Brooks - from rebellion to reform

Rodney Brooks (born 1954) is versatile and humorous. In the 1980s, he burst into the AI field, questioning established ideas and proposing his own maverick approach to building robotic systems. Years later, he became a famous AI leader, scholar, and visionary. He earned a bachelor's degree in pure mathematics from Flinders University in Australia and a Ph.D. in computer science from Stanford University in 1981, and held research positions at Carnegie Mellon University and the Massachusetts Institute of Technology.

Before joining MIT, he held a faculty position at Stanford University in 1984. He built his reputation through his work in robotics and artificial life, and further diversified his career through films, books, and entrepreneurial activities. He founded several companies, including Lucid (1984) and iRobot (1990) (see Figure 6.21(a) to Figure 6.21(d)). At iRobot, he designed the commercially successful Roomba and related artificial creatures (see Figure 6.21(c)). He is the Panasonic Professor of Robotics at MIT and director of the MIT Computer Science and Artificial Intelligence Laboratory. The robots he designs and manufactures have markets in both industry and the military. In 2008, he founded Heartland Robotics, whose mission is to bring a new generation of robots to market and raise productivity in manufacturing environments: "Heartland's goal is to bring robots to places that have not been automated, making manufacturers more efficient, workers more productive, keeping jobs and avoiding migration to low-cost locations."

Hans J. Berliner

In the early 1970s, Dr. Hans Berliner, the world correspondence chess champion, proposed the concept of the horizon effect.

Hans J. Berliner (b. 1929) made significant contributions to computer chess and advanced game programming. He received his Ph.D. from Carnegie Mellon University in 1969 and became a professor of computer science there. From 1965 to 1968, Berliner was the world correspondence chess champion. In addition to developing Hitech (1985), the world's first master-level chess program, he also developed a strong backgammon program in 1979.

Monty Newborn

Monty Newborn (born 1937) was one of the pioneers of computer chess. He developed one of the earliest multiprocessor chess programs, OSTRICH, organized the North American and World Computer Chess Championships beginning in 1970, and in 1977 was a co-founder of the International Computer Chess Association (ICCA). From 1976 to 1983, he was director of the School of Computer Science at McGill University. He was the chief organizer of the 1996 match between Kasparov and Deep Blue, and he is the author of books on computer chess and theorem proving. In retirement, he enjoys making beautiful stained-glass lamps and is one of the top senior tennis players in Quebec.

David Levy and Jaap Van Den Herik

In the field of computer chess and computer games, David Levy (born 1945) is one of the most prolific figures. He is a chess master and scholar, has published more than 30 books, and is an internationally recognized leader in artificial intelligence. Levy promoted research in computer chess: in 1968 he made a famous bet with three computer science professors, claiming that no program would defeat him at chess within ten years. He won several such wagers, with D. K. as his second, but in 1989 Deep Thought defeated him 4–0. Like D. K., Levy was a student and friend of Donald Michie.

He published the popular Robots Unlimited (2005) and Love and Sex with Robots (2007).

Jaap van den Herik (b. 1947) is a professor of computer science at Maastricht University. In 2008, he became head of the Tilburg Centre for Creative Computing. Professor van den Herik has actively led and edited the ICCA Journal, which was eventually renamed the International Computer Games Association (ICGA) Journal.
