What is a program (program)
A computer program is a sequence of coded instructions (or a symbolic sequence of instructions that can be automatically converted into coded instructions) that a computer (or any device with information-processing capability) can execute in order to obtain a result.
Put colloquially: we want the computer to do work for us, but it is not human; it cannot even understand our needs the way a dog can (think of how clever, lovable, and loyal the dog in "Shaun the Sheep" is to his master). To make the computer work, a programmer has to write a program in a programming language, which is a language the computer can understand. The computer then executes these programs (instructions) and finally completes the task.
The C++ program below computes the factorial of n:
    #include <cstdlib>
    #include <iostream>

    int main(int argc, char* argv[]) {
        int n = std::atoi(argv[1]);  // compute the factorial of n
        double result = 1.0;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        std::cout << "The factorial of " << n << " is: " << result << std::endl;
        return 0;
    }
What is an algorithm (algorithm)
An algorithm is a description of the solution steps for a particular problem, represented as a finite sequence of instructions in a computer, and each instruction represents one or more operations.
Here is a simple example you may have met in everyday life. Two people play a small game: A writes a random integer from 1 to 100 on a piece of paper, and B has to guess it. If B guesses right, the game is over; if B guesses wrong, A tells B whether the guess was too small or too big. So what should B do? The first guess should be 50, the number in the middle. Why? Because by halving the range each time, even in the worst case the number can be found in about log2(100) ≈ 6.6, that is, at most six or seven guesses.
This is binary search. It shows up in daily life and is used all the time in software development.
Next, look at a slightly more complex algorithm: quicksort. It comes up in interviews extremely often; you could even call it required knowledge.
What is machine learning (machine learning)
The definition given in Tom M. Mitchell's Machine Learning textbook:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
For example, AlphaGo:
- Task T: playing the game of Go
- Performance measure P: the percentage of games won against opponents
- Training experience E: games played against itself or in competition
Another example is autonomous driving:
- Task T: driving on the freeway using video sensors
- Performance measure P: average distance driven before an error
- Training experience E: a sequence of images and steering commands recorded while observing a human driver
The definition from Baidu Encyclopedia:
Machine learning (ML) is a multidisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and many other fields. It studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance.
It is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications run through every field of AI. It mainly uses induction and synthesis rather than deduction.
The main tasks of machine learning
Supervised Learning:
(1) Classification: assign instance data to the appropriate category.
Typical algorithms: kNN (k-nearest neighbors), decision trees, naive Bayes, logistic regression, SVM (support vector machine).
(2) Regression: predicting numerical data.
Unsupervised Learning:
(1) Clustering: the process of dividing a data set into multiple classes made up of similar objects.
Typical algorithm: k-means (k-means clustering).
Neural networks (neural network), deep learning (deep learning), and their biological inspiration
Research on artificial neural networks (ANNs) is partly inspired by biology: biological learning systems are built from extraordinarily complex webs of interconnected neurons (neuron). Analogously, an artificial neural network is composed of a series of simple units; each unit takes a certain number of real-valued inputs and produces a single real-valued output.
It is estimated that the human brain is a dense network of about 10^11 interconnected neurons, with each neuron connected on average to about 10^4 others. The activity of a neuron is typically excited or inhibited through its connections to other neurons.
A biological neuron:
Artificial neurons (perceptrons):
Multilayer perceptron:
Neural network representation
The 1993 ALVINN system is a typical example of ANN learning: it used a learned ANN to drive a car on the freeway at normal speed. The input to the ANN is a 30x32 grid of pixel intensities taken from a forward-facing camera mounted on the vehicle; the output of the ANN is the direction in which the vehicle should steer.
Shallow learning
In the late 1980s, the invention of the back-propagation algorithm (the BP algorithm) for artificial neural networks brought new hope to machine learning and set off a craze for machine learning based on statistical models, a craze that continues to this day. It was found that with the BP algorithm, an artificial neural network can learn statistical regularities from a large number of training samples and thereby predict unknown events. This statistics-based approach to machine learning showed advantages in many respects over earlier systems based on hand-crafted rules. Although the artificial neural networks of this period were also called multilayer perceptrons (multi-layer perceptron), they were in fact shallow models with only a single hidden layer.
In the 1990s, a variety of shallow machine learning models were proposed, such as support vector machines (SVM, support vector machines), boosting, and maximum entropy methods (e.g. logistic regression, LR). The structure of these models can basically be viewed as having one hidden layer (e.g. SVM, boosting) or no hidden layer at all (e.g. LR). These models achieved great success both in theoretical analysis and in applications. In contrast, because theoretical analysis was difficult and training required a great deal of experience and skill, research on shallow artificial neural networks was relatively quiet during this period.
Deep learning
The essence of deep learning is to learn more useful features by building machine learning models with many hidden layers and training them on massive amounts of data, ultimately improving the accuracy of classification or prediction. Thus the "deep model" is the means, and "feature learning" is the goal. Deep learning differs from traditional shallow learning in two ways: 1) it emphasizes the depth of the model structure, usually with 5, 6, or even 10 or more hidden layers; 2) it explicitly highlights the importance of feature learning; that is, by transforming features layer by layer from the original space into a new feature space, it makes classification or prediction easier. Compared with constructing features by hand-crafted rules, learning features from big data better captures the rich intrinsic information of the data.
Deep learning itself is a branch of machine learning; put simply, it can be understood as the further development of neural networks.
A typical convolutional network used for digit recognition is LeNet-5. Most banks in the United States once used it to recognize handwritten digits on checks. That it reached this level of commercial deployment says a lot about its accuracy.
The network structure of the LeNet-5 is as follows:
Concepts associated with machine learning
Data mining (data mining)
Data Mining = machine learning + database. Data mining is the process of automatically discovering useful information in a large data repository.
Natural language processing (natural language processing)
Natural language processing = text processing + machine learning. Natural language processing is the field of enabling machines to understand human language. It borrows many techniques from compiler theory, such as lexical analysis and syntactic analysis; beyond that, at the level of understanding, it uses semantic understanding, machine learning, and other techniques. As the only symbolic system created by humans themselves, natural language has always been a research focus of the machine learning field. In the words of Baidu machine learning expert Kaiyu: "Listening and seeing, plainly speaking, are things even cats and dogs can do; only language is unique to humans." How to use machine learning to deeply understand natural language has long been a focus of both industry and academia.
Pattern recognition (pattern recognition)
Pattern recognition = machine learning. The main difference between the two is that the former is a concept that grew out of industry, while the latter comes mainly from computer science.
Statistical learning (statistical learning)
Statistical learning is approximately equal to machine learning. Statistical learning is a discipline that overlaps heavily with machine learning, because most methods in machine learning come from statistics; one could even argue that the development of statistics drove the prosperity of machine learning. For example, the well-known support vector machine algorithm came out of statistics. To some extent, the difference is this: statistical learning researchers focus on developing and optimizing statistical models and lean toward mathematics, while machine learning researchers care more about whether a problem can be solved and lean toward practice, so they concentrate on improving the efficiency and accuracy of learning algorithms executed on computers.
Computer Vision (Computer vision)
Computer vision = image processing + machine learning. Image processing techniques turn images into suitable input for machine learning models, and machine learning is responsible for recognizing the relevant patterns in the images. Computer vision has many applications, such as Baidu's image search, handwritten character recognition, and license plate recognition. This field has very hot application prospects and is also a popular research direction. With deep learning, the new frontier of machine learning, the performance of computer image recognition has improved greatly, so the future of computer vision is immeasurable.
Speech recognition (Speech recognition)
Speech recognition = speech processing + machine learning. Speech recognition combines audio processing technology with machine learning. Speech recognition technology is generally not used alone; it is usually combined with natural language processing techniques. Current applications include Apple's voice assistant Siri, among others.
Computer graphics, digital image processing, and computer vision
- Computer vision (computer vision, CV) is about letting the computer "understand" the world people see: the input is an image, and the output is the key information in the image;
  picture → "dog or cat?"
  picture → [xyz xyz xyz ... xyz]
- Computer graphics (computer graphics, CG) is about letting the computer "depict" the world people see: the input is a three-dimensional model and a scene description, and the output is a rendered image;
  [xyz xyz xyz ... xyz] → picture
- Digital image processing (digital image processing, DIP): the input is an image, and the output is also an image. Applying a filter to an image in Photoshop is typical image processing. Common operations include blurring, grayscale conversion, contrast enhancement, and more.
  picture → picture after Photoshop
Now for the connections between them
- CG also uses DIP: to improve visual quality, today's 3D games overlay full-screen post-processing effects, which are DIP in principle, just with the computation moved to the GPU. A common practice is to draw a full-screen rectangle and perform the image processing in the pixel shader.
- CV relies heavily on DIP for the grunt work, such as preprocessing the images to be recognized: enhancing contrast, removing noise, and so on.
- Finally, this year's hot topic, augmented reality (AR), needs both CG and CV, and certainly does not leave out DIP: it uses DIP to preprocess images, CV to track and recognize objects and obtain their pose, and CG to overlay virtual three-dimensional objects.
The interview: what does the interviewer actually care about?
In my personal experience, a formal interview consists of several parts:
- Basic ability: data structures and algorithms, examined through IQ-style questions and ACM-style problems; typical written questions are often taken from LeetCode. Besides basic data structures and algorithms, this part often also examines how well the candidate has mastered a programming language.
- Work experience: which companies you have worked at, what projects you have done, and whether you can describe what you did clearly and systematically. (Note: even for work that was not your own, if a candidate can explain it well, the interviewer will give extra points.)
- Communication skills: whether your personality is agreeable, whether communicating with you is pleasant, and whether you can fit into the team. Frankly, sometimes it comes down to whether the interviewer finds you likable at first sight. Even if a candidate's ability is mediocre, if the interviewer feels the person is decent, can do the work, and is worth training, that can still be fine.
What does a job seeker want?
What to look for in an interview
- Technical competence is the core
- Modesty, prudence, and honesty are important factors for impressing the interviewer
- Communication is important, too.
- Polish your experience appropriately, but don't brag, and don't be too modest either
Resources:
- "Deep Learning (deep learning) Study Notes Series (III)"
- Machine Learning, Tom M. Mitchell
- Machine Learning in Action, Peter Harrington
- The Beauty of Mathematics, Wu Jun
- Statistical Learning Methods, Li Hang
- "How are computer vision, graphics, and image processing related?", Zhang Jing
- "Starting from Machine Learning", The Subconscious of the Computer (blog)
- "The opposites of computer vision and graphics", Bu ju