Under "TensorFlow" Network Operation Api_


Error value

Measures the loss between two tensors, or between one tensor and zero. This can be used in regression tasks or for regularization (weight decay).

l2_loss

tf.nn.l2_loss(t, name=None)

Explanation: This function computes the error of a tensor using the L2 norm, but it does not take the square root, and it returns only half of the squared norm, as follows:

output = sum(t ** 2) / 2
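
As a sanity check, here is a minimal NumPy sketch of the same computation (the sample values are illustrative assumptions, not from the original):

import numpy as np

t = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)

# Half of the squared L2 norm, matching the formula above: sum(t ** 2) / 2
l2 = np.sum(t ** 2) / 2
print(l2)  # (1 + 4 + 9 + 16) / 2 = 15.0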

Input parameters:

  • t: A Tensor . The data type must be one of: float32 , float64 , int64 , int32 , uint8 , int16 , int8 , complex64 , qint8 , quint8 , qint32 . In practice the data is usually two-dimensional, but any number of dimensions is accepted.
  • name: An optional name for the operation.

Output parameters:

A Tensor with the same data type as t ; the output is a scalar.

Examples of Use:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf

input_data = tf.Variable(np.random.rand(2, 3), dtype=tf.float32)
output = tf.nn.l2_loss(input_data)
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print sess.run(input_data)
    print sess.run(output)
    print sess.run(tf.shape(output))
Classifier

sigmoid_cross_entropy_with_logits

tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)

Explanation: This function computes the cross-entropy of logits after applying the sigmoid activation function.

It measures the probability error in discrete classification tasks in which the classes are independent but not mutually exclusive. For example, you can use this function for a picture that contains multiple classification targets at once (say, both an elephant and a dog).

For brevity, let x = logits and z = targets . The logistic loss is then:

x - x * z + log(1 + exp(-x))

To ensure that the calculations are stable and avoid overflow, the actual calculation is achieved as follows:

max(x, 0) - x * z + log(1 + exp(-abs(x)) )
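
To see why the two forms agree, here is a minimal NumPy sketch comparing the direct sigmoid definition with the stable form (the sample logits and targets are illustrative assumptions):

import numpy as np

x = np.array([-100.0, -2.0, 0.0, 2.0, 100.0])  # logits (illustrative)
z = np.array([0.0, 1.0, 1.0, 0.0, 1.0])        # targets (illustrative)

# Stable form, as in the formula above
stable = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Direct definition via the sigmoid; written naively, this can
# overflow or underflow for logits of large magnitude
sigmoid = 1.0 / (1.0 + np.exp(-x))
direct = -(z * np.log(sigmoid) + (1 - z) * np.log(1 - sigmoid))

print(stable)
print(direct)  # agrees with the stable form for these inputs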

Input parameters:

    • logits: A Tensor . The data type must be float32 or float64 .
    • targets: A Tensor with the same data type and shape as logits .
    • name: An optional name for the operation.

Output parameters:

A Tensor with the same shape as logits .

Examples of Use:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf

input_data = tf.Variable(np.random.rand(1, 3), dtype=tf.float32)
output = tf.nn.sigmoid_cross_entropy_with_logits(input_data, [[1.0, 0.0, 0.0]])
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print sess.run(input_data)
    print sess.run(output)
    print sess.run(tf.shape(output))

This loss saw heavy use early on; later, softmax-based losses became more common.

softmax

tf.nn.softmax(logits, name=None)

Explanation: This function computes the softmax activation.

For each batch index i and class j , we have:

softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))
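
For reference, here is a minimal NumPy sketch of this formula (the sample values mirror the TF example below):

import numpy as np

logits = np.array([[0.2, 0.1, 0.9]])

# softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))
e = np.exp(logits)
softmax = e / e.sum(axis=1, keepdims=True)
print(softmax)              # approximately [[0.2552 0.2309 0.5139]]
print(softmax.sum(axis=1))  # each row sums to 1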

Input parameters:

  • logits: A Tensor . The data type must be float32 or float64 . The shape is two-dimensional: [batch_size, num_classes] .
  • name: An optional name for the operation.

Output parameters:

A Tensor with the same shape and data type as logits .

Examples of Use:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import tensorflow as tf

input_data = tf.Variable([[0.2, 0.1, 0.9]], dtype=tf.float32)
output = tf.nn.softmax(input_data)
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print sess.run(input_data)
    print sess.run(output)
    print sess.run(tf.shape(output))
log_softmax

tf.nn.log_softmax(logits, name=None)

Explanation: This function computes the log of the softmax activation.

For each batch index i and class j , we have:

logsoftmax[i, j] = log(exp(logits[i, j]) / sum(exp(logits[i])))
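
A minimal NumPy sketch of the same formula, using the algebraically equivalent form logits - log(sum(exp(logits))) (the sample values are illustrative assumptions):

import numpy as np

logits = np.array([[0.2, 0.1, 0.9]])

# log(exp(logits) / sum(exp(logits))) == logits - log(sum(exp(logits)))
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(log_softmax)
print(np.exp(log_softmax).sum(axis=1))  # exponentiating recovers softmax; rows sum to 1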

Input parameters:

  • logits: A Tensor . The data type must be float32 or float64 . The shape is two-dimensional: [batch_size, num_classes] .
  • name: An optional name for the operation.

Output parameters:

A Tensor with the same shape and data type as logits .

softmax_cross_entropy_with_logits

tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)

Explanation: This function computes the cross-entropy of logits after applying the softmax activation function.

It measures the probability error in discrete classification tasks in which the classes are mutually exclusive. For example, each image in the CIFAR-10 dataset has exactly one category label: a picture may show a dog or a truck, but not both at once. (This is the difference between this API and tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) .)

Warning: The logits passed to this API must be unscaled, because the API performs the softmax internally; passing already-softmaxed values affects the accuracy of the result. Also, do not use this API to compute softmax values: its final output is the cross-entropy loss, not the value of the softmax function.

logits and labels must have the same shape [batch_size, num_classes] and the same data type ( float32 or float64 ).
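
The underlying computation is a softmax followed by a cross-entropy against the one-hot labels. A minimal NumPy sketch (the values are illustrative and match the example below):

import numpy as np

logits = np.array([[0.2, 0.1, 0.9]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot, same shape as logits

# Cross-entropy after softmax: -sum_j labels[i, j] * log(softmax(logits)[i, j])
e = np.exp(logits)
softmax = e / e.sum(axis=1, keepdims=True)
loss = -(labels * np.log(softmax)).sum(axis=1)
print(loss)  # one loss value per batch row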

Examples of Use:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import tensorflow as tf

input_data = tf.Variable([[0.2, 0.1, 0.9]], dtype=tf.float32)
output = tf.nn.softmax_cross_entropy_with_logits(input_data, [[1.0, 0.0, 0.0]])
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print sess.run(input_data)
    print sess.run(output)
    print sess.run(tf.shape(output))

sparse_softmax_cross_entropy_with_logits

tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name=None)

Explanation: This function computes the cross-entropy of logits after applying the softmax activation, just like softmax_cross_entropy_with_logits . The only difference is that logits has shape [batch_size, num_classes] while labels has shape [batch_size] and holds class indices, so no manual one-hot encoding is needed.
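
A minimal NumPy sketch of the sparse variant (illustrative values); note how the integer labels index directly into the softmax instead of requiring a one-hot matrix:

import numpy as np

logits = np.array([[0.2, 0.1, 0.9], [1.5, 0.3, 0.2]])
labels = np.array([2, 0])  # class indices of shape [batch_size]; no one-hot needed

e = np.exp(logits)
softmax = e / e.sum(axis=1, keepdims=True)

# Pick out the log-probability of the labeled class in each row
loss = -np.log(softmax[np.arange(len(labels)), labels])
print(loss)  # matches the one-hot softmax cross-entropy on the same data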

weighted_cross_entropy_with_logits

Under "TensorFlow" Network Operation Api_
