A new recruitment season is about to begin. For those interested in technology, amid the artificial intelligence boom sweeping the world, "data scientist" and "algorithm engineer" are unquestionably hot jobs. Keywords such as "artificial intelligence", "machine learning", "deep learning", "modeling", and "convolutional neural network" are not only everyday conversation topics; they are also becoming essential skills for software engineers.
In the next few years, artificial intelligence technology will undoubtedly become fully mainstream, yet talent in the field remains scarce. Many computer science students, as well as software engineers with some work experience, hope to master these technologies and become strong competitors before artificial intelligence reshapes the world. Zhuge Yue, Vice President of Global R&D at Hulu, has worked in artificial intelligence for many years and is the editor-in-chief of "100-Face Machine Learning: Algorithm Engineers Take You Through the Interview" (now available for pre-order). She has deep learning experience and industry insight in the field, and we hope the experience she shares below will bring you inspiration.
Zhuge Yue, editor-in-chief of "100-Face Machine Learning", is currently Vice President of Global R&D at Hulu and General Manager of its China R&D Center. She graduated from the Department of Computer Science and Technology at Tsinghua University, and holds a Master's degree and Ph.D. in Computer Science from Stanford University and a Master's degree in Applied Mathematics from the State University of New York at Stony Brook.
Zhuge Yue: My encounters with artificial intelligence
My undergraduate major was artificial intelligence, so I was exposed early to many cutting-edge technologies in the field. My teacher for the introductory artificial intelligence course was Professor Lin Yurui, author of "Introduction to Artificial Intelligence". In my fourth undergraduate year, I had the privilege of joining Tsinghua University's artificial intelligence laboratory, where I studied under Professor Zhang Yi and did some simple research. From Professor Zhang and the senior students, I learned a great deal of internationally advanced knowledge in the field of artificial intelligence.
When I first arrived at Stanford, I attended a small lunchtime lecture (a "brown bag" talk). Halfway through, the classroom door suddenly swung open and the bearded Professor John McCarthy walked in, asking loudly: "I heard there is free lunch here?" He strode to the front of the room, grabbed two sandwiches, and sauntered out. The teacher hosting the lecture froze for a moment, then said: "Welcome to Stanford, where the world's most famous scientists will walk into your classroom and grab your food!" The term "artificial intelligence" was coined by this very John McCarthy.
I also took the artificial intelligence course CS140 at Stanford. Professor Nils Nilsson, who taught the class at the time, was another founder of the discipline and a world-class expert in artificial intelligence. Professor Nilsson's classes were fascinating, and I did a small project with him on path planning for a sweeping robot. I have kept my notes from that class to this day.
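Path planning of this kind is a classic search problem. As a purely illustrative sketch (the grid, obstacles, start, and goal below are invented for demonstration; this is not the actual course project), breadth-first search is enough to find a shortest route for a robot on a small grid:

```python
# Illustrative grid path planning: 0 = free cell, 1 = obstacle.
# Breadth-first search returns a shortest path as a list of (row, col) cells.
from collections import deque

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []  # walk predecessors backwards to rebuild the route
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

room = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan_path(room, (0, 0), (2, 3)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```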
To be honest, back then I just did my homework and my work day by day. I did not realize how lucky I was to be around these top scientists, nor did I know that I was witnessing the frontier of a technology field. Truly top technology is, at first, understood and appreciated by only a small group of people. Looking back now, my several encounters with artificial intelligence happen to line up with its three waves.
Three waves of artificial intelligence
The first wave of artificial intelligence came around the 1950s. At the Dartmouth workshop in 1956, John McCarthy formally proposed the concept of "artificial intelligence", an event recognized as the beginning of the modern discipline. McCarthy and Marvin Minsky of the Massachusetts Institute of Technology are both known as "fathers of artificial intelligence".
In the early days of the computer, many computer scientists seriously pondered the fundamental differences between this human-invented machine and human beings. The first experts to think about artificial intelligence were at the very frontier of thought and theory, and they saw the potential of computers. Many of the basic theories from this stage became not only the foundations of artificial intelligence, but cornerstones of computer science as a whole.
The first wave of artificial intelligence was primarily based on logic. In 1958, McCarthy proposed the logic-based language LISP. From the 1950s through the 1980s, researchers demonstrated that computers could play games and understand natural language to a limited degree, and they invented neural networks capable of simple language understanding and object recognition.
However, although artificial intelligence was a fruitful research field in its first two or three decades, it entered a "winter" in the early 1980s for lack of applications. By the late 1980s and early 1990s, artificial intelligence scientists had changed their approach, turning from the grand problem of general intelligence to single problems in specific domains. After thirty years of development in computer technology, data storage and applications now had a certain foundation. Researchers saw the possibility of combining artificial intelligence with data and proposed the concept of the "expert system", for tasks such as diagnosing illnesses and forecasting the weather. Expert systems for specific industries offered promising, meaningful, and practical application scenarios, and these research results found their first potential commercial outlets. This was the second wave of artificial intelligence.
Interestingly, however, when people tried to use these expert systems for intelligent diagnosis, they found that the problem was not how to diagnose: most of the data at the time was simply not digital. Patients' medical histories were still stuck in doctors' illegible handwritten prescriptions. Even where some information had begun to be digitized, it sat in isolated forms or on machines that were not connected to one another. So those who wanted to build automatic diagnosis ended up first doing the work of digitizing the world's information.
While a group of people devoted themselves to turning every book, every picture, and every prescription in the world into an electronic version, the spread of the Internet linked these pieces of information together into truly big data. At the same time, the growth in computing performance predicted by Moore's Law continued to hold. With the exponential growth of computing power, applications that could once run only in the laboratory or in limited scenarios moved ever closer to real life.
The third wave of artificial intelligence is built on tremendous growth in computing power and massive data. The computing power comes from advances in hardware, distributed systems, and cloud computing; more recently, hardware designed specifically for neural network computation has further advanced the combination of artificial intelligence software and hardware. The massive data comes from decades of data accumulation and the development of Internet technology. Together, computing power and data have promoted and catalyzed a leap in machine learning algorithms.
This wave of artificial intelligence began roughly ten years ago. Its most fundamental difference from the previous two waves is its widespread application and its impact on the lives of ordinary people. Artificial intelligence has left the academic laboratory and truly entered the public eye.
Is artificial intelligence really approaching human capabilities?
Why is this wave of artificial intelligence so powerful? Is artificial intelligence really approaching human capabilities? What stage has the technology actually reached? Let's look at three simple facts.
The first fact: for the first time in history, computers have surpassed, or are about to surpass, humans at many complex tasks, such as image recognition, video understanding, machine translation, driving, and playing Go. As a result, the topic of artificial intelligence replacing humans has begun to appear in headlines everywhere.
In fact, in terms of individual technologies, many computing-related technologies have long surpassed humans and are widely used, such as navigation, search, and stock trading. But these mainly "complete a task"; the computer is not involved in much human-like perception, thinking, complex judgment, or emotion.
In recent years, the tasks machines can complete have come closer and closer to human ones in complexity and form. For example, autonomous driving technology based on machine learning is maturing; it will not only revolutionize how people travel, but also affect urban construction, personal consumption, and lifestyles. People are both excited and frightened by the rapid arrival of such new technologies: on one hand they enjoy the convenience, and on the other they feel overwhelmed by change that comes too fast.
In addition, computers' ability to learn on their own keeps increasing. The development of modern machine learning algorithms, especially deep learning, means that a machine's behavior is no longer a relatively predictable "program" or "logic", but something closer to a "black box of thought": a kind of thinking ability that humans can barely interpret.
The second fact: on closer inspection, although artificial intelligence has developed by leaps and bounds in many specialized fields, the general intelligence envisioned by the founders of the first wave is still far out of reach. Machines are still completing specific tasks, just more complicated ones, and they still lack some of the most basic elements of human intelligence. Artificial intelligence still cannot understand even simple emotions. Helping and cooperating, which come easily to a two- or three-year-old child, remain beyond machines.
The third fact: the application scenarios of artificial intelligence and machine learning are extremely broad. In recent years, the great expansion of these applications has carried concepts from academic research into the public eye and made them topics relevant to everyone's future. Algorithmic applications have stepped out of academia and penetrated every corner of society and every aspect of people's lives: the well-known cases of face recognition, autonomous driving, medical diagnosis, machine assistants, smart cities, new media, games, and education, as well as less-discussed automation in agricultural production, care for the elderly and children, operation in dangerous environments, traffic scheduling, and more. The wave has touched every part of society.
Looking ahead to the next decade, the great development of artificial intelligence and machine learning will lie in the popularization and application of these technologies. A large number of new applications will be developed, artificial intelligence infrastructure will improve rapidly, and traditional software and applications will need to be migrated to use the new algorithms. So now is an excellent time to become an expert in artificial intelligence and machine learning.
What do you need to do to join the "big wave" of the next generation of artificial intelligence? Perhaps "100-Face Machine Learning" can help you take the first step. The book's contents progress from simple to complex, cover the main practical areas of machine learning, and use lively examples in a question-and-answer format to help you become a better algorithm engineer, data scientist, and artificial intelligence practitioner.
How to read "100-Face Machine Learning"
This book is very informative and covers the various sub-fields of artificial intelligence and machine learning. Different companies, businesses, and positions use different skills, so here are several suggestions for reading it.
Sequential reading: read from beginning to end, covering all the content and every question and answer.
From simple to difficult: each question is marked with a difficulty level, from one star (the simplest) to five stars (the hardest), and the book also provides a list of topics. One-star questions mainly introduce basic concepts or explain why something is done a certain way. If you are new to machine learning, start with the background knowledge and the simpler questions.
Targeted reading: not all companies or positions require every kind of algorithm. If you currently work, or want to work, in a particular field, that field probably relies on certain classes of algorithms, and you can focus on those chapters; the same applies when you become interested in a new field. Whichever algorithms you use, basic skills such as feature engineering and model evaluation are always important (see the short sketch after this list).
Extended reading: a single book can hardly cover such broad fields, and the question-and-answer format has its limits, so we added summaries and pointers for further study after many chapters. Readers interested in a particular field can use this book as a starting point, read more deeply, and become experts in that field.
Boss reading method: if you are a technical manager, the problem you need to solve is how algorithms might help your existing technical systems, and how to find the right people to help you build intelligent products. It is recommended that you skim the whole book to get a sense of the various technical areas of machine learning and to identify suitable solutions. After that, you can use this book as an interview question bank.
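To make the "basic skills" point above concrete, here is a minimal model-evaluation sketch in Python. It assumes scikit-learn is available; the dataset and model are arbitrary placeholders for illustration, not examples taken from the book.

```python
# A minimal model-evaluation sketch using scikit-learn (an assumed dependency;
# the dataset and model are placeholders, not examples from the book).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set so the final evaluation uses data the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000)

# Cross-validation on the training set estimates generalization
# before touching the held-out test data.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

model.fit(X_train, y_train)
print(f"Held-out test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

The pattern, a held-out test set plus cross-validation on the training data, stays the same regardless of which algorithm family a chapter covers.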
The algorithms of artificial intelligence and machine learning are still evolving; let us keep pace with progress in this field together. I wish you all the best in this exciting new era of technology.
The field of artificial intelligence is developing at a speed beyond people's imagination. It is a blessing to have written this book before artificial intelligence completely takes over the world.
The book contains more than 100 interview questions and answers for machine learning algorithm engineers, most of them drawn from real interview scenarios for algorithm research positions at Hulu. Starting from interesting phenomena in daily work and life, it covers not only the fundamentals of machine learning but also the skills needed to become an excellent algorithm engineer. More importantly, it distills the authors' enthusiasm for the field of artificial intelligence, aiming to cultivate readers' ability to discover, solve, and extend problems, to build a love of machine learning, and to sketch a grand blueprint of the artificial intelligence world.
"Do not accumulate steps, no miles," this book will start from the field of classical machine learning such as feature engineering, model evaluation, dimensional reduction, etc., to build a necessary knowledge system for algorithm engineers; see neural network, reinforcement learning, generation of confrontation networks, etc. The latest scientific research progresses, knowing the pros and cons of deep learning and winning and losing; "Bobview and approx, thick and thin hair", in the last chapter to show readers the various artificial intelligence applications in the era of life.