AI is the future, AI is science fiction, and AI is already part of our daily lives. All of these claims are correct; it just depends on which kind of AI you are talking about.
For example, when Google DeepMind's AlphaGo program defeated the South Korean professional Go player Lee Sedol, the media used terms such as AI, machine learning, and deep learning to describe DeepMind's victory. All three contributed to AlphaGo's defeat of Lee Sedol, but they are not the same thing.
The most intuitive way to understand their relationship is as concentric circles. AI, the idea, appeared first; machine learning came next; and deep learning emerged once machine learning began to flourish. Today's AI explosion is driven by deep learning.
From Decline to Prosperity
In 1956, at the Dartmouth Conference, computer scientists first put forward the term "artificial intelligence," and AI was born. In the decades that followed, AI became the "fantasy object" of the laboratory. People's views of AI kept shifting: sometimes it was seen as a harbinger of the future, the key to human civilization; at other times it was dismissed as technological garbage, a rash concept with ambitions too big to ever succeed. Frankly, until 2012 AI carried both of these labels.
In the past few years AI has exploded, and since 2015 it has developed even more rapidly. This owes much to the widespread availability of GPUs, which make parallel processing faster, cheaper, and more powerful. Another driver is the combination of practically unlimited storage and the large-scale generation of data of every kind: images, text, transactions, map data, and more.
AI: Making Machines Exhibit Human Intelligence
Back in the summer of 1956, the dream of the AI pioneers at that conference was to build a complex machine, powered by the then-emerging computer, that would exhibit the characteristics of human intelligence.
This concept is what we call "strong AI": a machine that has all of the human senses, perhaps even capabilities beyond human perception, and that can think like a human. We often see such machines in movies: C-3PO, the Terminator.
There is also the concept of "weak AI" (narrow AI). In short, weak AI can accomplish specific tasks as well as humans can, and sometimes better. For example, Pinterest uses AI to classify images and Facebook uses AI to recognize faces; both are instances of weak AI.
These are cases of weak AI in practical use, already embodying some characteristics of human intelligence. How is it achieved? Where does the intelligence come from? With these questions in mind, we move to the next circle: machine learning.
Machine Learning: A Path Toward AI
Generally speaking, machine learning uses algorithms to parse data, learn from it, and then make judgments and predictions about the world. Instead of hand-writing software with specific sets of instructions for a particular task, researchers "train" the machine with large amounts of data and algorithms, letting it learn how to perform the task on its own.
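To make the contrast concrete, here is a minimal sketch of the "train with data" idea: a nearest-centroid classifier that learns its behavior from labeled examples rather than from hand-written rules. The data, labels, and function names are all made up for illustration.

```python
# Learn per-class centroids from labeled examples, then classify new
# points by whichever centroid is closest -- no hand-coded rules.

def train(examples):
    """examples: list of (features, label). Returns a centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy data: two clusters of 2-D points.
data = [([0.0, 0.1], "a"), ([0.2, 0.0], "a"),
        ([1.0, 0.9], "b"), ([0.9, 1.1], "b")]
model = train(data)
print(predict(model, [0.1, 0.2]))  # a point near cluster "a"
```

The point is that `train` never sees a rule like "class a means small values"; the behavior of `predict` comes entirely from the examples it was shown.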
The concept of machine learning was proposed by early AI researchers, and over the years many machine learning algorithms have appeared, including decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. As we all know, none of them achieved the ultimate goal of "strong AI," and with early machine learning methods even the goal of "weak AI" remained far out of reach.
For many years, the best application area for machine learning was computer vision, though achieving it still required writing large amounts of code by hand. Researchers hand-wrote classifiers, such as edge detection filters that let the program determine where an object starts and ends, shape detection to determine whether an object has eight sides, and a classifier to recognize the characters "S-T-O-P." From these hand-crafted components, researchers could build an algorithm that makes sense of an image and judges whether or not it is a stop sign.
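As a sketch of one such hand-written component, here is a horizontal edge-detection filter (a simplified Sobel kernel). The tiny 5x5 "image" is made up for illustration; real classifiers combined many filters like this.

```python
# A hand-written edge detector: convolve a 3x3 kernel over a grayscale
# image so that large output values mark vertical brightness boundaries.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Apply a 3x3 kernel to every interior pixel of a grayscale image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A dark-to-bright vertical boundary between columns 1 and 2.
image = [[0, 0, 9, 9, 9]] * 5
edges = convolve(image, SOBEL_X)
print(edges[2])  # strongest responses sit exactly on the boundary
```

Every threshold and kernel value here had to be chosen by a human, which is exactly the brittleness the article describes next.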
This approach works, but not well. In foggy weather, when the sign is less visible, or when a tree blocks part of it, recognition accuracy drops. Until recently, computer vision and image detection were nowhere near human capability: they were simply too error-prone.
Deep Learning: A Technique for Realizing Machine Learning
The artificial neural network is another algorithm, also proposed by early machine learning researchers, and it has existed for decades. The idea of neural networks stems from our understanding of the human brain: the interconnection of neurons. But the two differ. Neurons in the brain can connect to any other neuron within a certain physical distance, whereas an artificial neural network has discrete layers, connections, and directions of data propagation.
For example, you might take an image, cut it into pieces, and feed those pieces into the first layer of the neural network. The neurons of the first layer pass the data on to the second layer; the second layer's neurons do their own work, and so on, until the last layer produces the final output.
Each neuron assigns a weight to its input, reflecting how relevant it is to the task being performed, how correct or incorrect it is. The final output is determined by the totals of all those weights. Take the stop sign as an example: the image of the sign is cut up, and neurons examine the pieces for its octagonal shape, its red color, its distinctive characters, its traffic-sign size, and so on.
The neural network's task is to conclude whether or not this is a stop sign. It produces a "probability vector," a highly educated guess based on the computed weights. In this example, the system might be 86% confident that the image is a stop sign, 7% confident that it is a speed limit sign, 5% confident that it is a kite stuck in a tree, and so on. The network architecture then tells the neural network whether its judgment is correct.
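The layers-and-weights story above can be sketched as a tiny forward pass that ends in a probability vector. All weights and the input "features" (octagon-ness, redness, text) are invented numbers, chosen only to illustrate the flow of data from layer to layer.

```python
# A two-layer forward pass: each layer computes weighted sums, and a
# softmax turns the final scores into a probability vector.
import math

def dense(inputs, weights, biases):
    """One layer: a weighted sum of the inputs for each neuron, plus bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = [0.9, 0.8, 0.7]   # e.g. octagon-ness, redness, "STOP" text
hidden = dense(features, [[0.5, 0.4, 0.1], [0.2, 0.9, 0.3]], [0.0, 0.1])
scores = dense(hidden, [[1.5, 0.2], [0.3, 0.4], [0.1, 0.1]], [0.0, 0.0, 0.0])
probs = softmax(scores)      # e.g. [stop sign, speed limit, kite]
print(probs)
```

With these particular weights the first class ("stop sign") gets the highest probability, playing the role of the 86% guess in the example.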
Even this simple task was, until recently, out of reach, and the AI research community long avoided neural networks. Neural networks existed in the early days of AI, but they produced little "intelligence." The problem is that even a basic neural network demands a huge amount of computation, so it was not a practical method. Still, a handful of research teams pressed on, such as the team led by Geoffrey Hinton at the University of Toronto, running the algorithms in parallel on supercomputers to validate the concept; only when GPUs became widely available did the real promise appear.
Back to the example of the stop sign: if we train the network, showing it many wrong answers and adjusting it each time, the results improve. What researchers need to do is train it on tens of thousands, even millions, of images until the weights of the artificial neurons are tuned so precisely that nearly every judgment is correct, whether it is foggy or clear, sunny or rainy. At that point the neural network has "taught" itself what a stop sign looks like; the same process lets it recognize faces on Facebook, or recognize cats, which is what Andrew Ng did at Google in 2012: he had a neural network learn to recognize cats.
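The training loop just described can be sketched with a single artificial neuron (a perceptron): show it examples, compare its answer to the correct label, and nudge the weights whenever it is wrong. The features, labels, and learning rate are made up for illustration.

```python
# Train a one-neuron classifier by repeatedly correcting its mistakes.

def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0   # 1 = "stop sign", 0 = "not a stop sign"

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # On a wrong answer, shift each weight toward the correct label.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy features: [octagon-ness, redness]; label 1 means "stop sign".
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1),
            ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
weights, bias = train(examples)
print(predict(weights, bias, [0.85, 0.9]))  # a stop-sign-like input
```

Real deep networks use gradient descent and backpropagation across many layers rather than this single-neuron update rule, but the principle is the same: wrong answers move the weights.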
Andrew Ng's breakthrough was to make the neural network enormous, increasing the number of layers and neurons, and then running massive amounts of data through it to train it. His project drew images from 10 million YouTube videos; he truly put the "depth" in deep learning.
Today, in some scenarios, machines trained with deep learning are better at recognizing images than humans: recognizing cats, identifying the characteristics of cancer cells in blood, spotting tumors in MRI scans. Google's AlphaGo learned the game of Go by playing against itself again and again and learning from those games.
With Deep Learning, AI's Future Is Bright
Thanks to deep learning, machine learning has many practical applications, and it expands the overall scope of AI. Deep learning breaks tasks down in ways that make all kinds of machine assistance possible. Driverless cars, better preventive medicine, better movie recommendations: all of these are here already or nearly here. AI is both the present and the future. With the help of deep learning, AI may one day reach the level of science fiction, the level we have long awaited. You will have your own C-3PO, your own Terminator.