The Differences between Machine Learning and Deep Learning

Source: Internet
Author: User
Keywords: machine learning, machine learning tutorial, deep learning
The two are not concepts on the same level: deep learning is a kind of machine learning.
1. Machine learning is a method of realizing artificial intelligence. The concept of machine learning comes from early artificial intelligence researchers, and the algorithms developed over the years include decision tree learning, inductive logic programming, reinforcement learning, and Bayesian networks. In short, machine learning uses algorithms to analyze data, learn from it, and make inferences or predictions. Unlike traditional hand-coded software that follows a specific set of instructions, we use large amounts of data and algorithms to "train" the machine, which then learns how to complete the task.
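To make this concrete, here is a minimal, hypothetical sketch of "training" a machine from data rather than hand-writing rules. It assumes Python with scikit-learn, and the tiny dataset is invented purely for illustration.

    # Hypothetical illustration: "training" a decision tree classifier on a tiny
    # labeled dataset instead of hand-writing rules. The data here is made up.
    from sklearn.tree import DecisionTreeClassifier

    # Each sample: [number of sides, fraction of red pixels]; label: 1 = stop sign, 0 = not
    X = [[8, 0.9], [8, 0.8], [4, 0.1], [3, 0.2], [8, 0.85], [4, 0.7]]
    y = [1, 1, 0, 0, 1, 0]

    model = DecisionTreeClassifier(max_depth=2)
    model.fit(X, y)                    # the algorithm "learns" rules from the data

    print(model.predict([[8, 0.88]]))  # prediction for an unseen sample -> [1]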
For many years, computer vision has been one of the best application areas for machine learning, although it still required a great deal of hand coding to get the job done. Researchers would hand-write classifiers such as edge detection filters to help a program identify the boundaries of objects, shape detection classifiers to determine whether an object has eight sides, and classifiers to recognize the letters "s-t-o-p". On top of these hand-written classifiers, they developed algorithms for understanding images and learning to determine whether a stop sign is present.
However, because computer vision and image detection technology lagged behind, such systems were error-prone.
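As an illustration of the hand-coded approach just described, below is a minimal NumPy sketch of an edge detection filter: the kernel weights are fixed by hand, and nothing is learned from data.

    # Minimal sketch of a hand-coded edge detector: a fixed Sobel kernel
    # convolved over a grayscale image. The filter weights are written by hand.
    import numpy as np

    def sobel_x(image):
        """Convolve a 2-D grayscale image with a horizontal Sobel kernel."""
        kernel = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
        return out

    edges = sobel_x(np.random.rand(8, 8))  # toy input; real use would load an image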
2. Deep learning is a technique for realizing machine learning. Early machine learning researchers also developed an algorithm called the artificial neural network, but it remained obscure for decades after its invention. Neural networks are inspired by the human brain: the interconnection of neurons. However, whereas a neuron in the human brain can connect to any other neuron within a certain range, an artificial neural network passes data through discrete layers, each with a fixed direction of propagation.
For example, you can slice an image into small tiles and feed them into the first layer of the neural network. The first layer performs its calculations, and its neurons then pass the data on to the second layer. The second layer's neurons perform their own task, and so on, until the last layer produces the final output.
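A toy sketch of this layer-by-layer flow, assuming NumPy and arbitrary random weights (the layer sizes are invented for illustration):

    # Toy sketch of data flowing through the layers of a small neural network.
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.random(16)                                    # image tiles flattened into one input vector
    W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)    # first layer
    W2, b2 = rng.standard_normal((4, 8)), np.zeros(4)     # second layer
    W3, b3 = rng.standard_normal((2, 4)), np.zeros(2)     # final layer

    h1 = np.maximum(0, W1 @ x + b1)    # first layer does its computation (ReLU)
    h2 = np.maximum(0, W2 @ h1 + b2)   # ...and passes its output to the next layer
    output = W3 @ h2 + b3              # last layer produces the final result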
Each neuron assigns a weight to its input, reflecting how correct or incorrect that input is relative to the task being performed, and the final output is determined by these weights. Consider the stop sign example again: the attributes of a stop sign image are chopped up one by one and "examined" by the neurons, including its shape, color, characters, size, and whether it is moving. The neural network's task is to decide whether this is a stop sign. It produces a "probability vector", which is really an educated guess based on the weights. In this example, the system might be 86% confident the image is a stop sign, 7% confident it is a speed limit sign, and so on. The network is then told whether its judgment was correct, so that its weights can be adjusted.
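As a rough illustration of such a probability vector, the following sketch applies a softmax to made-up raw scores chosen only to roughly mirror the 86% / 7% figures above:

    # Sketch of a "probability vector": turning the network's raw scores into
    # confidences over candidate labels with a softmax. The scores are invented.
    import numpy as np

    labels = ["stop sign", "speed limit sign", "yield sign", "other"]
    scores = np.array([4.0, 1.5, 0.5, 0.2])          # hypothetical raw outputs

    probs = np.exp(scores) / np.exp(scores).sum()     # softmax -> probabilities summing to 1
    for label, p in zip(labels, probs):
        print(f"{label}: {p:.0%}")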
The problem, however, is that even the most basic neural network consumes enormous computing resources, so at the time it was not a feasible approach. Nevertheless, a small group of dedicated researchers led by Professor Geoffrey Hinton of the University of Toronto stuck with the method, eventually running the algorithms in parallel on supercomputers and proving their effectiveness. Returning to the stop sign example, an insufficiently trained neural network is likely to give wrong answers often, which shows that continuous training is needed. It takes tens of thousands, even millions, of images to train, until the weights on the neurons' inputs are tuned so precisely that the network gives the correct answer almost every time. Today Facebook uses neural networks to recognize your mother's face in photos, and Andrew Ng built a cat-recognition neural network at Google in 2012.
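Below is a minimal sketch of this repeated weight adjustment, using a single artificial neuron trained by gradient descent on an invented dataset (the features and labels are placeholders, not real image data):

    # Minimal sketch of repeated weight adjustment: a single artificial neuron
    # trained by gradient descent on a tiny made-up dataset.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((200, 3))                        # toy samples described by 3 features
    y = (X[:, 0] + X[:, 1] > 1.0).astype(float)     # toy labels standing in for "stop sign / not"

    w, b, lr = np.zeros(3), 0.0, 0.5
    for epoch in range(1000):                       # many passes over the data
        p = 1 / (1 + np.exp(-(X @ w + b)))          # current guesses
        grad_w = X.T @ (p - y) / len(y)             # how each weight should change
        grad_b = np.mean(p - y)
        w -= lr * grad_w                            # nudge the weights toward better answers
        b -= lr * grad_b

    p = 1 / (1 + np.exp(-(X @ w + b)))              # final predictions after training
    print(f"training accuracy: {np.mean((p > 0.5) == y):.0%}")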
Today, machines trained with deep learning outperform humans at image recognition in some cases, from finding cats to identifying signs of cancer in blood. Google's AlphaGo learned the game of Go and trained for it intensively by constantly playing against itself.
Summary
The essence of artificial intelligence is intelligence, and machine learning is the deployment of computational methods that support artificial intelligence. Simply put, artificial intelligence is the science, machine learning is the set of algorithms that make machines more intelligent, and machine learning, to a certain extent, achieves artificial intelligence.