Having finally made a little progress learning Google's deep learning framework, I'd like to share my TensorFlow learning process.
TensorFlow's official Chinese documentation reads awkwardly, its examples rely on the MNIST binary dataset, and it says little about how to build your own image dataset in the TFRecords format.
First, here is my conversion code for turning the images under an image folder into a TFRecords dataset.
1. ConfigProto & GPU
tf.ConfigProto is typically used when creating a session, to configure the session's parameters:
with tf.Session(config=tf.ConfigProto(...)) ...
# tf.ConfigProto() parameters:
log_device_placement=True: whether to print the device placement log
allow_soft_placement=True: allow TensorFlow to automatically fall back to a supported device when the specified device does not exist
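A small runnable illustration of these two parameters (a sketch using tf.compat.v1 so it also runs under TF 2.x; the constants are arbitrary):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # sessions need graph mode in TF 2.x

# allow_soft_placement lets TensorFlow fall back to another device when the
# requested one (e.g. a GPU) is unavailable; log_device_placement prints
# where each op actually runs.
config = tf.compat.v1.ConfigProto(allow_soft_placement=True,
                                  log_device_placement=False)

a = tf.constant(2)
b = tf.constant(3)
with tf.compat.v1.Session(config=config) as sess:
    result = sess.run(a + b)
```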
Preface:
TensorFlow has many basic concepts to understand. The best way is to follow the tutorial on the official website step by step; there are also some translated versions, and reading them side by side helps understanding: TensorFlow 1.0 documentation translation.
I. The necessary process of building and executing a computation graph
1. graph (computation graph): see the tf.Graph class. Using TensorFlow to train a neural network consists of two parts: building the computation graph, and executing it.
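The two-phase pattern can be sketched as follows (tf.compat.v1 is used so the session API also works under TF 2.x; the toy ops are arbitrary):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Phase 1: build the computation graph (nothing is computed yet).
g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, 2.0, 3.0])
    total = tf.reduce_sum(x)

# Phase 2: execute the graph inside a session.
with tf.compat.v1.Session(graph=g) as sess:
    value = sess.run(total)
```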
TensorFlow model save/load
When we put an algorithmic model online, we must first save the trained model. TensorFlow's way of saving models differs from sklearn's: sklearn is straightforward, a single dump and load from sklearn.externals.joblib is enough to save and load a model. Because TensorFlow has the concepts of graphs and operations, saving and loading a model is slightly more troublesome.
I. The basic method
If you search online for how to save a TensorFlow model, most results cover only the basic method.
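The basic method generally refers to tf.train.Saver. A minimal save-then-restore sketch (tf.compat.v1 for TF 2.x compatibility; the variable name and checkpoint path are arbitrary):

```python
import os
import tempfile
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A trivial graph with one variable; save it, then restore it in a new session.
v = tf.compat.v1.get_variable("v", initializer=42.0)
saver = tf.compat.v1.train.Saver()
ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    saver.save(sess, ckpt)

with tf.compat.v1.Session() as sess:
    saver.restore(sess, ckpt)   # no initializer needed after restore
    restored = sess.run(v)
```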
select ..., _t6020.tf_phonenumber as tf_phonenumber,
       _t6020.tf_eMail as tf_eMail,
       _t6020.tf_remark as tf_remark,
       _t9011.tf_departmentId as _t9011___tf_departmentId,
       _t9011.tf_name as _t9011___tf_name
from Salesman _t6020
left outer join _Department _t9011
  on _t9011.tf_departmentId = _t6020.tf_departmentId
where ((_t9011.tf_departmentId like '001020%'))
Query the SQL statement
Vector Space Model (VSM): proposed and advocated in the last century by Salton and colleagues at Cornell University, who presented the SMART prototype system.
Basic idea: a document is regarded as a vector of t-dimensional features, generally words; each feature is given a weight computed on some basis, and the t weighted features together constitute the document, representing its subject content.
Similarity calculation: the similarity of two documents is computed from their feature vectors.
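A minimal illustration of the vector-space idea using cosine similarity (toy vectors over a hypothetical vocabulary):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

# Two toy documents over the vocabulary ["cow", "fun", "smart"]
doc1 = [2, 1, 0]
doc2 = [1, 1, 1]
sim = cosine_similarity(doc1, doc2)
```

Documents pointing in similar directions in term space get a similarity near 1; orthogonal (no shared terms) documents get 0.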
HTML 4.01 / XHTML 1.0 Reference Manual (89 tags in total), in alphabetical order
DTD: specifies the XHTML 1.0 DTD in which the tag is allowed. S = Strict, T = Transitional, F = Frameset.

Tag          Description                   DTD
<!--...-->   Defines a comment.            STF
<!DOCTYPE>   Defines the document type.    STF
<a>          Defines an anchor.            STF
<abbr>       Defines an abbreviation.      STF
<acronym>    Defines an acronym.           STF
# Random predictor: rank the 10 candidate utterances in random order
def predict_random(context, utterances):
    return np.random.choice(len(utterances), 10, replace=False)

# Evaluate the random predictor
y_random = [predict_random(test_df.Context[x], test_df.iloc[x, 1:].values)
            for x in range(len(test_df))]
for n in [1, 2, 5, 10]:
    print("Recall @ ({}, 10): {:g}".format(n, evaluate_recall(y_random, y_test, n)))

Recall @ (1, 10): 0.0937632
Recall @ (2, 10): 0.194503
Recall @ (5, 10): 0.49297
Recall @ (10, 10): 1
Very good. The results are what we expected. Of course, we are not satisfied with a random predictor.
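The recall metric used above can be sketched as follows (a minimal pure-Python version; the original post's implementation may differ in detail):

```python
def evaluate_recall(y, y_test, k=1):
    """Fraction of examples whose true label appears in the top-k predictions.

    y      : one ranked list of candidate indices per example
    y_test : the true candidate index per example
    """
    num_correct = 0
    for predictions, label in zip(y, y_test):
        if label in predictions[:k]:
            num_correct += 1
    return num_correct / len(y)
```

For a random ranking of 10 candidates, recall@k is about k/10 in expectation, which matches the numbers printed above.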
The project has been uploaded to GitHub (queue-threading).
TensorFlow offers two types of queues:
FIFOQueue: a first-in, first-out queue
RandomShuffleQueue: a queue that dequeues elements in random order
The functions that modify a queue's state are mainly enqueue() (enqueue one element), enqueue_many() (enqueue multiple elements), and dequeue() (dequeue an element).
The following code demonstrates how to use these functions (all of the code below follows the book "TensorFlow: Practical Google Deep Learning Framework").
import tensorflow as tf
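A sketch in the spirit of that book's queue example (adapted here to tf.compat.v1 so it also runs under TF 2.x):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # queue ops use graph mode

# A FIFO queue holding at most 2 integer elements.
q = tf.compat.v1.FIFOQueue(2, tf.int32)
init = q.enqueue_many(([0, 10],))   # fill the queue with two elements
x = q.dequeue()                      # take the front element
q_inc = q.enqueue(x + 1)             # put it back, incremented by one

with tf.compat.v1.Session() as sess:
    sess.run(init)
    values = []
    for _ in range(5):
        v, _ = sess.run([x, q_inc])  # dequeue v, re-enqueue v + 1
        values.append(int(v))
```

Tracing the queue state by hand, the dequeued values come out as 0, 10, 1, 11, 2.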
Image preprocessing is very simple to apply, and by increasing the diversity of the training data it greatly improves the trained model's recall and adaptability.
It also gives you more training samples: for example, if I add one rotated copy of each picture, the number of training samples doubles. The diversity of the training set increases, and so does the number of training iterations.
Code:
import tensorflow as tf
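A minimal flip/rotation augmentation sketch with tf.image (the tiny synthetic array is a stand-in for a decoded photo):

```python
import numpy as np
import tensorflow as tf

# A hypothetical 2x2 RGB "image" standing in for a decoded photo.
image = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Two common augmentations: horizontal flip and a 90-degree rotation.
flipped = tf.image.flip_left_right(image)  # reverses the width axis
rotated = tf.image.rot90(image)            # rotates counter-clockwise
```

Applying each transform to every training image multiplies the effective size of the training set accordingly.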
Cow Exhibition
Time limit: 1000 ms
Memory limit: 65536 K
Total submissions: 9288
Accepted: 3551
Description
"Fat and docile, big and dumb, they look so stupid, they aren't much fun..."
- Cows with Guns by Dana Lyons
The cows want to prove to the public that they are both smart and fun. In order to do this, Bessie has organized an exhibition that will be put on by the cows. She has given each of the N (1 ...
Bessie must choose which cows she wants to...
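Though the statement is truncated here, this is the classic knapsack-with-negative-weights problem: pick a subset of cows, each with a smartness and a funness score (possibly negative), so that both totals are non-negative and their sum is maximal. A minimal sketch (the `bound` offset is an assumption sized to the input limits; the sample values are the commonly cited test case):

```python
def max_exhibition_score(cows, bound=100000):
    """cows: list of (smartness, funness) pairs, possibly negative.

    dp[i] = best total funness achievable with total smartness i - bound;
    the offset lets negative smartness sums index into the array.
    """
    NEG = float("-inf")
    dp = [NEG] * (2 * bound + 1)
    dp[bound] = 0
    for s, f in cows:
        # 0/1 knapsack order: iterate so each cow is used at most once.
        indices = range(2 * bound, s - 1, -1) if s >= 0 else range(0, 2 * bound + s + 1)
        for i in indices:
            if dp[i - s] != NEG and dp[i - s] + f > dp[i]:
                dp[i] = dp[i - s] + f
    # Maximize smartness + funness with both totals non-negative.
    best = 0
    for i in range(bound, 2 * bound + 1):
        if dp[i] >= 0:
            best = max(best, (i - bound) + dp[i])
    return best

# Widely cited sample: five cows, best achievable total is 8
sample = [(-5, 7), (8, -6), (6, -3), (2, 1), (-8, -5)]
score = max_exhibition_score(sample, bound=100)  # -> 8
```

The small `bound=100` here just keeps the toy run fast; for the real limits, use an offset large enough to cover the maximum possible absolute smartness sum.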
    outputs = activation_function(Wx_plus_b)
    return outputs
2.2 Using TensorFlow to build a neural network model
Input layer size: 2
Hidden layer size: 20
Output layer size: 1
Activation function: hyperbolic tangent (tanh)
Learning rate: 0.1 (slightly different from the book)
(For details of the construction process, refer to the video tutorial; no link is attached, please search for it yourself ......)
### define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 2])
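A hypothetical end-to-end sketch of a network matching the listed sizes (2 -> 20 with tanh -> 1, learning rate 0.1), written against tf.compat.v1 so it also runs under TF 2.x; the variable names and toy data are my own assumptions:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode for the TF 1.x-style API

# Placeholders for a batch of 2-feature inputs and 1-value targets.
xs = tf.compat.v1.placeholder(tf.float32, [None, 2])
ys = tf.compat.v1.placeholder(tf.float32, [None, 1])

W1 = tf.Variable(tf.random.normal([2, 20]))
b1 = tf.Variable(tf.zeros([20]))
hidden = tf.tanh(tf.matmul(xs, W1) + b1)   # hidden layer, tanh activation

W2 = tf.Variable(tf.random.normal([20, 1]))
b2 = tf.Variable(tf.zeros([1]))
prediction = tf.matmul(hidden, W2) + b2    # linear output layer

loss = tf.reduce_mean(tf.square(ys - prediction))
train_step = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    x_batch = np.random.rand(4, 2).astype(np.float32)
    y_batch = np.random.rand(4, 1).astype(np.float32)
    sess.run(train_step, feed_dict={xs: x_batch, ys: y_batch})
    pred = sess.run(prediction, feed_dict={xs: x_batch})
```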
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)

# set the network kernel size, number of layers, sampling steps, etc.
weights = {
    'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64])),
    'wc2': tf.Variable(tf.random_normal([3, 3, 64, 128])),
    'wc3': tf.Vari...
For each term of the query, we compute that term's score in the document separately, and then obtain the query-to-document score by summing the scores of the query's terms in the document.
The weight of a term in a document is the number of times the term appears in that document;
Bag-of-words model: the order of the terms in the document is ignored; only the number of occurrences matters;
Compared with the Boolean retrieval model, this is a great improvement.
TF (term frequency):
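A minimal sketch of this raw term-frequency scoring (toy document; the function name is hypothetical):

```python
from collections import Counter

def tf_score(query_terms, doc_terms):
    """Score a document for a query by summing, over the query's terms,
    the number of times each term occurs in the document (raw TF)."""
    counts = Counter(doc_terms)
    return sum(counts[t] for t in query_terms)

doc = "the cow is smart and the cow is fun".split()
score = tf_score(["cow", "fun"], doc)  # 2 occurrences of "cow" + 1 of "fun"
```

Note the bag-of-words assumption at work: only the counts in `doc` matter, not where the words appear.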
self.params = [self.W, self.b]
self.input = input

The code above implements the convolution of the input signal and max-pools the result. Next we see how to initialize the LeNet layer and how to turn the LeNet layer's output signal into the input signal of the MLP network's hidden layer; the code is as follows:

layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, ...),
    filter_shape=(nkerns[0], 1, ...
package com.example.wow.demo_lockscreen;

import android.content.Context;
import android.graphics.PixelFormat;
import android.graphics.Point;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.WindowManager;
import android.widget.Button;

/** Created by wow on 15-4-9. */
public class LockScreen {
    Point mLpSize;
    Button mBtnUnlock;
    ViewGroup mView;
    final WindowManager mWindowManager;
    final WindowManager.LayoutParams