[Translated] Using Neural Networks for Regression

This article is translated from the introductory learning documentation of deeplearning4j, an open-source, distributed deep learning project for Java.

Introduction:

In general, neural networks are used for unsupervised learning, classification, and regression. That is, after supervised training, a neural network can group unlabeled data, classify data, or output continuous values. A typical neural network used for classification places a logistic regression classifier (or something similar) in the last layer of the network to convert continuous values into discrete ones such as 0/1; for example, given a person's height, weight, and age, it can predict whether or not that person will have a heart attack. Regression, by contrast, maps one set of continuous inputs to another set of continuous outputs.

For example, given the age of a house, its area, and its distance to a good school, you can predict the price of the house: that is a continuous input mapped to a continuous output. There is no 0/1 as in a classification task; the independent variable x is simply mapped to a continuous output y.

Neural Network Regression Structure:

In the network diagram, x is the input: the features propagate forward through the earlier layers of the network, many x values are connected to each neuron of the last hidden layer, and each x is multiplied by a corresponding weight w. The sum of these products, plus a bias, is fed into an activation function, here ReLU (f(x) = max(x, 0)), a widely used activation function that does not saturate the way a sigmoid does. Each hidden-layer neuron outputs an activation value a through ReLU, and the output node of the network computes the sum of these activation values as the final output. In other words, a neural network used for regression has a single output node, and this node simply adds up the activation values of the previous layer. The resulting ŷ is the dependent variable obtained by mapping all of your x values.
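
To make the forward computation concrete, here is a minimal plain-Java sketch of the structure described above: one hidden layer of ReLU units feeding a single output node that simply sums the hidden activations. The array names w and b and the method name forward are hypothetical, for illustration only; they are not part of deeplearning4j.

    // Minimal forward pass: one ReLU hidden layer, one summing output node (illustrative only).
    // w[j][i] is the weight from input i to hidden neuron j; b[j] is the bias of hidden neuron j.
    static double forward(double[] x, double[][] w, double[] b) {
        double yHat = 0.0;
        for (int j = 0; j < w.length; j++) {
            double z = b[j];
            for (int i = 0; i < x.length; i++) {
                z += w[j][i] * x[i];          // weighted sum of the inputs
            }
            double a = Math.max(z, 0.0);      // ReLU: max(x, 0)
            yHat += a;                        // the output node adds up the activations
        }
        return yHat;
    }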

Training Process:

For backpropagation and training, you simply compare the network's output ŷ with the true value y, and adjust the weights and biases until the network's error is minimized. Root mean squared error (RMSE) can be used as the loss function.
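
As a reference for the loss, here is a small sketch of RMSE computed over a batch of predictions; the method name rmse is an illustrative helper, not a deeplearning4j API.

    // Root mean squared error between predictions yHat and true values y.
    static double rmse(double[] yHat, double[] y) {
        double sumSquared = 0.0;
        for (int i = 0; i < y.length; i++) {
            double diff = yHat[i] - y[i];
            sumSquared += diff * diff;        // accumulate squared errors
        }
        return Math.sqrt(sumSquared / y.length);
    }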

Deeplearning4j can be used to build a multilayer neural network, with an output layer added at the end of the network; reference code is as follows:

    // Create output layer
    .layer()
        .nIn($NumberOfInputFeatures)
        .nOut(1)
        .activationFunction('identity')
        .lossFunction(LossFunctions.LossFunction.RMSE)

Here nOut is the number of neurons in the output layer and nIn is the dimension of the feature vector; for the network described above it would be set to 4. The activation function should be set to 'identity'.
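
For context, a fuller configuration might look like the sketch below. It is only an assumption-laden example: it targets a relatively recent deeplearning4j release (builder names such as activation(Activation.IDENTITY) and the available loss functions differ slightly across versions, and MSE is used here in place of RMSE), and the hidden-layer size of 10 is an arbitrary choice for illustration.

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    // Four input features -> one ReLU hidden layer -> single continuous regression output.
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new DenseLayer.Builder()
            .nIn(4)                                  // dimension of the feature vector
            .nOut(10)                                // hidden-layer size (arbitrary here)
            .activation(Activation.RELU)
            .build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
            .nIn(10)
            .nOut(1)                                 // one continuous output value
            .activation(Activation.IDENTITY)
            .build())
        .build();

    MultiLayerNetwork net = new MultiLayerNetwork(conf);
    net.init();                                      // then call net.fit(trainingData) to train

After training, net.output(features) returns the predicted continuous value for a new input.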
