Problems that machine learning solves well
- Pattern recognition
- Anomaly detection
- Prediction
How the brain works
The human brain has on the order of 10^11 neurons, each with roughly 10^4 weights; this massive parallelism is what makes it much better than a workstation at certain tasks.
Different types of neurons
Linear neurons
Binary threshold neurons
ReLU (Rectified Linear Unit) neurons
Sigmoid neurons
Stochastic binary neurons
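The five neuron types above can be sketched as activation functions. This is a minimal NumPy sketch (the function names are my own labels, not from the course):

```python
import numpy as np

def linear(z):
    # Linear neuron: output equals the weighted input.
    return z

def binary_threshold(z, theta=0.0):
    # Binary threshold neuron: outputs 1 iff the input reaches theta.
    return (np.asarray(z) >= theta).astype(float)

def relu(z):
    # Rectified linear unit: linear above zero, zero below.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid neuron: smooth, bounded output in (0, 1).
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

def stochastic_binary(z, rng=np.random.default_rng(0)):
    # Stochastic binary neuron: treats the sigmoid output as the
    # probability of emitting a 1.
    return (rng.random(np.shape(z)) < sigmoid(z)).astype(float)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))       # [0. 0. 2.]
print(sigmoid(0.0))  # 0.5
```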
Different types of learning tasks
Supervised learning
Given input vectors, learn to predict the corresponding output vectors.
For example: regression and classification.
Reinforcement learning
Learn to select actions that maximize payoff.
The output is an action, or sequence of actions, and the only supervisory signal is an occasional scalar reward.
The difficulty is that the reward is typically delayed, and a scalar carries only a limited amount of information.
Unsupervised learning
Find a good internal representation of the input.
Provides a compact, low-dimensional representation of the input.
Provides an economical high-dimensional representation of the input in terms of learned features.
Clustering is an extreme form of sparse coding: each input is represented by a single non-zero feature.
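The "one non-zero feature" view of clustering can be made concrete: assigning an input to its nearest centroid is equivalent to encoding it as a one-hot vector. A minimal sketch, assuming nearest-centroid (k-means-style) assignment:

```python
import numpy as np

def one_hot_code(x, centroids):
    # Assign x to its nearest centroid and return the corresponding
    # one-hot vector: an extreme sparse code with exactly one
    # non-zero feature.
    dists = np.linalg.norm(centroids - x, axis=1)
    code = np.zeros(len(centroids))
    code[np.argmin(dists)] = 1.0
    return code

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
print(one_hot_code(np.array([4.5, 5.2]), centroids))  # [0. 1.]
```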
Different types of neural networks
Feed-forward neural networks
A network with more than one hidden layer is a deep neural network.
Recurrent neural networks (RNNs)
More biologically plausible.
Sequences can be modeled with an RNN:
Equivalent to a very deep network in which each hidden layer corresponds to one time step.
The hidden state can remember information over long stretches of time.
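The "one hidden layer per time step" picture corresponds to unrolling the recurrence. A minimal forward-pass sketch (the weight shapes and names here are illustrative, not from the course):

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, h0):
    # Unroll the recurrence over time: every step reuses the same
    # weights, so the unrolled computation is a very deep network
    # with one "layer" per time step.
    h, ys = h0, []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # hidden state carries the memory
        ys.append(W_hy @ h)
    return ys, h

# Tiny example: 1-d inputs, 2-d hidden state, 1-d outputs.
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(2, 1))
W_hh = rng.normal(size=(2, 2))
W_hy = rng.normal(size=(1, 2))
ys, h = rnn_forward([np.array([1.0]), np.array([0.5])],
                    W_xh, W_hh, W_hy, np.zeros(2))
print(len(ys), h.shape)  # 2 (2,)
```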
Perceptrons from a geometric point of view
Weight space
Each weight corresponds to one dimension of the space.
Each point in the space corresponds to a particular setting of all the weights.
Ignoring the bias term, each training case can be viewed as a hyperplane through the origin.
Taking all training cases together, the feasible weight vectors lie inside a convex cone.
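The convex-cone claim is easy to check numerically: if the labels are folded into the inputs so that a correct weight vector w satisfies w · x > 0 for every training case, then any positive combination of two correct weight vectors is also correct. A small sketch with made-up data:

```python
import numpy as np

# Label-folded training cases: a feasible weight vector w must give
# cases @ w > 0 on every row.
cases = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])

def feasible(w):
    return bool(np.all(cases @ w > 0))

w1, w2 = np.array([1.0, 0.2]), np.array([0.1, 1.0])
assert feasible(w1) and feasible(w2)

# Any positive combination of feasible weight vectors is feasible too:
# the set of solutions is a convex cone (with its apex at the origin).
assert feasible(2.0 * w1 + 0.5 * w2)
```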
What binary threshold neurons cannot do
XOR
Recognizing simple patterns under translation with wrap-around
For pattern A and pattern B alike, summed over the whole training set (all translations), the neuron receives a total input equal to 4 times the sum of all its weights.
Since these totals are identical, the neuron cannot discriminate between the two patterns (without wrap-around, the patterns can be discriminated).
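The XOR case can be verified by brute force: no assignment of two weights and a bias lets a single binary threshold unit reproduce the XOR truth table, because the four constraints are inconsistent. A sketch that searches a coarse grid of weight values:

```python
import itertools

# XOR truth table: (x1, x2) -> target.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def threshold_unit(w1, w2, b, x1, x2):
    # Single binary threshold neuron with weights (w1, w2) and bias b.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Search a grid of weight settings; no single threshold unit matches XOR,
# since XOR is not linearly separable.
grid = [i / 2 for i in range(-8, 9)]
solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if all(threshold_unit(w1, w2, b, x1, x2) == t
           for (x1, x2), t in xor.items())
]
print(solutions)  # [] -- the search finds no solution
```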
Using hidden units
If the hidden units are linear, the whole network is still linear and gains no extra modeling power.
Fixed output nonlinearities are not enough either.
Learning the weights of the hidden units is equivalent to learning features.
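With one hidden layer of nonlinear units, XOR becomes easy: one hidden unit can detect OR, another AND, and the output fires when OR is on but AND is off. A sketch with hand-set weights (the weights are chosen for illustration, not learned):

```python
def step(z):
    # Binary threshold activation.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden unit 1 detects "at least one input on" (OR).
    h1 = step(x1 + x2 - 0.5)
    # Hidden unit 2 detects "both inputs on" (AND).
    h2 = step(x1 + x2 - 1.5)
    # Output fires when OR holds but AND does not: exactly XOR.
    return step(h1 - h2 - 0.5)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_net(x1, x2))  # outputs 0, 1, 1, 0
```

Here the hidden units act as learned features would: OR and AND are exactly the features that make the problem linearly separable at the output.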
Reference: Neural Networks for Machine Learning, by Geoffrey Hinton (Coursera).