Neural Network Models for Machine Learning, Part 2 (Neural Networks: Representation)


3. Model Representation I

Neural networks were invented to mimic neurons, or networks of neurons, in the brain. So, to explain how we represent the model hypothesis, let's start by looking at what an individual neuron in the brain looks like.

Our brains are filled with neurons like the one shown here, which are cells in the brain. Two points are worth noting: a neuron has a cell body (containing the nucleus), and it has a number of input wires and output wires. The input wires are called dendrites; you can think of them as input wires that receive information from other neurons. A neuron's output wire is called an axon, and it is used to transmit signals, that is, to send information to other neurons.

In short, a neuron is a computational unit: it receives information through its input wires, performs some computation, and then sends the result through its axon to other nodes, that is, to other neurons in the brain.

Here is a set of neurons:


Neurons communicate using weak electrical pulses, also known as action potentials. So if a neuron wants to send a message, it sends a faint pulse of electricity to other neurons through its axon.

In the figure, the yellow circle represents a neuron, x is the input vector, θ represents the weights of the neuron (which are in fact the model parameters we described earlier), and hθ(x) is the activation function. In neural-network terminology, the activation function is just another name for the nonlinear function g(z), where g(z) = 1 / (1 + e^(-z)).

In fact, you can think of a neuron as being defined by its weights θ.

When an input is fed into the neuron, it computes θᵀx; this value is then passed through the activation function, and the result is the actual output of the neuron.
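To make this single-neuron computation concrete, here is a minimal Python sketch (not from the original article) that computes g(θᵀx) for one input vector; the specific values of x and θ below are made up purely for illustration.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, theta):
    """Single neuron: compute g(theta^T x).

    x     -- input vector including the bias entry x0 = 1
    theta -- weight (parameter) vector of the same length
    """
    return sigmoid(theta @ x)

# Hypothetical inputs and weights, just to show the computation.
x = np.array([1.0, 0.5, -1.2, 3.0])      # x0 = 1 plus three features
theta = np.array([0.1, 2.0, -0.5, 0.3])  # example parameters
print(neuron_output(x, theta))
```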

Note: when I draw a neural network, I usually draw only the input nodes x1, x2, x3, and so on, but sometimes I add an extra node x0, which is called the bias unit or bias neuron. Because x0 is always equal to 1, sometimes we draw it and sometimes we don't; it depends on whether it is convenient for the example at hand.

A neural network is a combination of different neurons. The first layer is the input layer, the last layer is the output layer, and all the layers in the middle are hidden layers.

Note: the input units are x1, x2, x3; again, we can also draw an extra node x0. Meanwhile, there are 3 neurons here, which I write as a1^(2), a2^(2), and a3^(2); and again, we can add an a0^(2) here, which, like x0, represents an extra bias unit whose value is always 1. (Note: a1^(2), a2^(2), and a3^(2) are computed values of the form g(θᵀx), while a0^(2) holds the bias value 1.)

If the network has s_j units in layer j and s_(j+1) units in layer j+1, then the matrix Θ^(j) controls the mapping from layer j to layer j+1.

The matrix Θ^(j) has dimensions s_(j+1) × (s_j + 1): s_(j+1) rows and (s_j + 1) columns.
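As an illustration of how a matrix Θ^(j) of size s_(j+1) × (s_j + 1) maps one layer's activations to the next, here is a small Python sketch; the layer sizes and the random weights are purely hypothetical placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(a_j, Theta_j):
    """Map the activations of layer j to layer j+1.

    a_j     -- activation vector of layer j (length s_j, without the bias unit)
    Theta_j -- weight matrix of shape (s_(j+1), s_j + 1)
    Returns the activation vector of layer j+1 (length s_(j+1)).
    """
    a_with_bias = np.concatenate(([1.0], a_j))   # prepend the bias unit, a0 = 1
    return sigmoid(Theta_j @ a_with_bias)

# Toy network: 3 inputs -> 3 hidden units -> 1 output (weights are arbitrary examples).
rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(3, 4))   # maps layer 1 (s1 = 3) to layer 2 (s2 = 3): 3 x (3 + 1)
Theta2 = rng.normal(size=(1, 4))   # maps layer 2 (s2 = 3) to layer 3 (s3 = 1): 1 x (3 + 1)

x = np.array([2.0, -1.0, 0.5])
a2 = layer_forward(x, Theta1)      # hidden-layer activations a^(2)
h = layer_forward(a2, Theta2)      # output h_Theta(x)
print(a2, h)
```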

In short, the diagram above shows how to define an artificial neural network. This neural network defines a hypothesis h: a mapping from the input x to the output y. I parameterize these hypotheses by Θ (written in uppercase), so that different choices of Θ correspond to different hypotheses, that is, to different functions mapping from x to y.

This is how we mathematically define the neural network hypothesis.

4. Model Representation II

5. Examples and intuitions I

A neural network can be used to solve the classification problems corresponding to the logical AND and OR functions, as sketched below.
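As a sketch of this idea, a single sigmoid neuron with hand-picked weights can compute AND or OR: large-magnitude weights push the sigmoid output close to 0 or 1. The specific weight values below are one common illustrative choice, not the only possibility.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x1, x2, theta):
    """Single sigmoid neuron with inputs x0 = 1, x1, x2 and weights theta."""
    return sigmoid(theta @ np.array([1.0, x1, x2]))

# Example weights: the bias is strongly negative, so the neuron only "fires"
# when enough of the inputs are 1.
theta_and = np.array([-30.0, 20.0, 20.0])   # output ~1 only when x1 = x2 = 1
theta_or  = np.array([-10.0, 20.0, 20.0])   # output ~1 when x1 = 1 or x2 = 1

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              int(round(neuron(x1, x2, theta_and))),   # x1 AND x2
              int(round(neuron(x1, x2, theta_or))))    # x1 OR x2
```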

6. Examples and intuitions II

Neural networks can also be used to recognize handwritten digits.

The input is simply an image, that is, the raw pixel values. The first layer computes some simple features, the next layer computes slightly more complex features, then even more complex features, which are finally passed to a logistic regression classifier in the last layer that accurately predicts the digit the neural network "sees".

An example of multi-class classification with a neural network is shown below.
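Here is a minimal sketch of the multi-class case, assuming a one-vs-all style output layer where the predicted class is the output unit with the largest activation; the layer sizes and random weights are hypothetical placeholders, not values from the original article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_class(x, Theta1, Theta2):
    """Two-layer network for multi-class prediction (illustrative only).

    The output layer has one unit per class; the predicted class is the
    unit with the largest activation (as in one-vs-all).
    """
    a1 = np.concatenate(([1.0], x))                      # input plus bias
    a2 = np.concatenate(([1.0], sigmoid(Theta1 @ a1)))   # hidden layer plus bias
    h = sigmoid(Theta2 @ a2)                             # one output per class
    return int(np.argmax(h)), h

# Hypothetical sizes: 4 inputs, 5 hidden units, 3 classes; random example weights.
rng = np.random.default_rng(1)
Theta1 = rng.normal(size=(5, 5))   # 5 x (4 + 1)
Theta2 = rng.normal(size=(3, 6))   # 3 x (5 + 1)

label, scores = predict_class(np.array([0.2, -0.7, 1.5, 0.0]), Theta1, Theta2)
print(label, scores)
```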

