Basic TensorFlow usage example
This article is based on Python 3 and TensorFlow 1.4. This section demonstrates the basic usage of TensorFlow with the simplest possible example: fitting a plane.
TensorFlow is imported as follows:
import tensorflow as tf
Next, we generate some random three-dimensional data and then use TensorFlow to find a plane that fits it. First, we use NumPy to generate random three-dimensional points. The variable x_data holds the (x, y) coordinates as a 2x100 matrix, i.e., 100 (x, y) pairs, and the variable y_data holds the corresponding z coordinates of those points. We generate the random points as follows:
import numpy as np

x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.300, 0.200], x_data) + 0.400
print(x_data)
print(y_data)
Here, the rand() method of NumPy's random module generates a random 2x100 matrix, giving 100 (x, y) coordinate pairs. The dot() method then performs matrix multiplication: multiplying a vector of length 2 by the 2x100 matrix yields a vector of length 100, and adding the constant 0.400 gives the z coordinates, so every point satisfies z = 0.3x + 0.2y + 0.4. A sample of the output looks like this:
[[0.97232962 0.08897641 0.54844421 0.5877986 0.5121088 0.64716059
0.22353953 0.18406206 0.16782761 0.97569454 0.65686035 0.75569868
0.35698661 0.43332314 0.41185728 0.24801297 0.50098598 0.12025958
0.40650111 0.51486945 0.19292323 0.03679928 0.56501174 0.5321334
0.71044683 0.00318134 0.76611853 0.42602748 0.33002195 0.04414672
0.73208278 0.62182301 0.49471655 0.8116194 0.86148429 0.48835048
0.69902027 0.14901569 0.18737803 0.66826463 0.43462989 0.35768151
0.79315376 0.0400687 0.76952982 0.12236254 0.61519378 0.92795062
0.84952474 0.16663995 0.13729768 0.50603199 0.38752931 0.39529857
0.29228279 0.09773371 0.43220878 0.2603009 0.14576958 0.21881725
0.64888018 0.41048348 0.27641159 0.61700606 0.49728736 0.75936913
0.04028837 0.88986284 0.84112513 0.34227493 0.69162005 0.89058989
0.39744586 0.85080278 0.37685293 0.80529863 0.31220895 0.50500977
0.95800418 0.43696108 0.04143282 0.05169986 0.33503434 0.1671818
0.10234453 0.31241918 0.23630807 0.37890589 0.63020509 0.78184551
0.87924582 0.99288088 0.30762389 0.43499199 0.53140771 0.43461791
0.23833922 0.08681628 0.74615192 0.25835371]
 [0.8174957 0.26717573 0.23811154 0.02851068 0.9627012 0.36802396
0.50543582 0.29964805 0.44869211 0.23191817 0.77344608 0.36636299
0.56170034 0.37465382 0.00471885 0.19509546 0.49715847 0.15201907
0.5642485 0.70218688 0.6031307 0.4705168 0.98698962 0.865367
0.36558965 0.72073907 0.83386165 0.29963031 0.72276717 0.98171854
0.30932376 0.52615297 0.35522953 0.13186514 0.73437029 0.03887378
0.1208882 0.67004597 0.83422536 0.17487818 0.71460873 0.51926661
0.55297899 0.78169805 0.77547258 0.92139858 0.25020468 0.70916855
0.68722379 0.75378138 0.30182058 0.91982585 0.93160367 0.81539184
0.87977934 0.07394848 0.1004181 0.48765802 0.73601437 0.59894943
0.34601998 0.69065076 0.6768015 0.98533565 0.83803362 0.47194552
0.84103006 0.84892255 0.04474261 0.02038293 0.50802571 0.15178065
0.86116213 0.51097614 0.44155359 0.67713588 0.66439205 0.67885226
0.4243969 0.35731083 0.07878648 0.53950399 0.84162414 0.24412845
0.61285144 0.00316137 0.67407191 0.83218956 0.94473189 0.09813353
0.16728765 0.95433819 0.1416636 0.4220584 0.35413414 0.55999744
0.94829601 0.62568033 0.89808714 0.07021013]]
[0.85519803 0.48012807 0.61215557 0.58204171 0.74617288 0.66775297
0.56814902 0.51514823 0.5400867 0.739092 0.75174732 0.6999822
0.61943605 0.60492771 0.52450095 0.51342299 0.64972749 0.46648169
0.63480003 0.69489821 0.57850311 0.50514314 0.76690145 0.73271342
0.68625198 0.54510222 0.79660789 0.58773431 0.64356002 0.60958773
0.68148959 0.6917775 0.61946087 0.66985885 0.80531934 0.5542799
0.63388372 0.5787139 0.62305848 0.63545502 0.67331071 0.61115777
0.74854193 0.56836022 0.78595346 0.62098848 0.63459907 0.8202189
0.79230218 0.60074826 0.50155342 0.73577477 0.70257953 0.68166794
0.6636407 0.44410981 0.54974625 0.57562188 0.59093375 0.58543506
0.66386805 0.6612752 0.61828378 0.78216895 0.71679293 0.72219985
0.58029252 0.83674336 0.66128606 0.50675907 0.70909116 0.6975331
0.69146618 0.75743606 0.6013666 0.77701676 0.6265411 0.68727338
0.77228063 0.60255049 0.42818714 0.52341076 0.66883513 0.49898023
0.55327365 0.49435803 0.6057068 0.68010968 0.77800791 0.65418036
0.69723127 0.8887319 0.52061989 0.61490928 0.63024914 0.64238486
0.66116097 0.55118095 0.80346301 0.49154814]
This gives us 100 three-dimensional points, all lying exactly on the plane z = 0.3x + 0.2y + 0.4.
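As a quick sanity check (a sketch added here, not part of the original code), you can verify the shapes and confirm the plane equation element-wise:

# Sanity check (sketch): verify shapes and the plane equation element-wise.
print(x_data.shape)  # (2, 100)
print(y_data.shape)  # (100,)
z = 0.300 * x_data[0] + 0.200 * x_data[1] + 0.400
print(np.allclose(z, y_data))  # True: every generated point lies on the plane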
Then we construct the model and use TensorFlow to fit a plane to the data. Fitting means finding the relationship between (x, y) and z, that is, between the variable x_data and the variable y_data. We represent this relationship as a linear transformation, z = w * (x, y) + b, so the fitting process is really the process of finding w and b. We therefore declare placeholders for the inputs and create two trainable variables, w and b. The code is as follows:
x = tf.placeholder(tf.float32, [2, 100])
y_label = tf.placeholder(tf.float32, [100])
b = tf.Variable(tf.zeros([1]))
w = tf.Variable(tf.random_uniform([2], -1.0, 1.0))
y = tf.matmul(tf.reshape(w, [1, 2]), x) + b
When building the model, we first declare the inputs using the placeholder() method; the actual data is then fed to the model at run time. The first argument is the data type and the second is the shape. Because x_data is a 2x100 matrix, its shape is defined as [2, 100], and because y_data is a vector of length 100, its shape is defined as [100]. You can also use tuples here, but a one-element tuple must be written as (100,).
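To make the placeholder mechanism concrete, here is a minimal sketch (hypothetical names, same TF 1.x API, using a Session as shown later in this article): values are supplied through feed_dict only when the graph is run, not when it is built:

p = tf.placeholder(tf.float32, [2])  # hypothetical placeholder for illustration
doubled = p * 2                      # a graph node; nothing is computed yet
with tf.Session() as s:
    print(s.run(doubled, feed_dict={p: [1.0, 2.0]}))  # [2. 4.]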
Then we use Variable to create the TensorFlow variables: b is initialized to the constant zero, and w is a randomly initialized vector of length 2 with values ranging from -1 to 1. The output y is then expressed in terms of w, x, and b. The matmul() method is the matrix multiplication provided by TensorFlow, similar to NumPy's dot() method; however, matmul() does not support multiplying a vector by a matrix, i.e., it does not broadcast. Therefore, before the multiplication we call reshape() to convert w into a proper 1x2 matrix, and the result is assigned to y.
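If you prefer to avoid the reshape() call, a minimal variant (a sketch under the same TF 1.x API, not the article's original code) is to declare w as a 1x2 matrix from the start:

w = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))  # already 1x2, no reshape needed
y = tf.matmul(w, x) + b  # (1, 2) x (2, 100) -> (1, 100)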
In this way, we construct a linear model.
Here, y is the value output by the model, while y_label is the placeholder that receives the actual data, y_data.
To fit the plane, we need to reduce the gap between y and y_label: the smaller the gap, the better the fit.
So next we define a loss function to represent the gap between the model's output and the actual values. Our goal is to reduce this loss. The code is as follows:
loss = tf.reduce_mean(tf.square(y - y_label))
The square() method squares the difference between y and y_label element-wise, and the reduce_mean() method then takes the mean of these squared differences, giving the mean squared error, which is the loss value of the current model. Our goal is to reduce this loss value, and we can use gradient descent to do so, defined as follows:
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
Here a GradientDescentOptimizer with a learning rate of 0.5 is defined: it reduces the loss value by gradient descent, and repeatedly running the train operation it returns is exactly the training process.
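To see what a single optimizer step does, here is a hand-rolled sketch (plain Python, a hypothetical one-parameter loss, and a smaller learning rate chosen for illustration) of the update w <- w - lr * d(loss)/dw that gradient descent applies:

# Hand-rolled gradient descent on loss(w) = (w - 3)**2 (illustration only).
w_toy = 0.0
for _ in range(5):
    grad = 2 * (w_toy - 3)      # d(loss)/dw
    w_toy = w_toy - 0.1 * grad  # the same update rule the optimizer applies
    print(w_toy)                # 0.6, 1.08, 1.464, 1.7712, 2.01696 -> toward 3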
After constructing the model, we can run it. To run the model, we must create a Session object, initialize all the variables, and then train step by step. The implementation is as follows:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(0, 201):
        sess.run(train, feed_dict={x: x_data, y_label: y_data})
        if step % 10 == 0:
            print(step, sess.run(w), sess.run(b))
The loop runs for steps 0 through 200, each iteration executing one gradient descent update by calling run() on the train operation defined above; feed_dict assigns the actual data to the placeholder variables. As training progresses, the loss becomes smaller and smaller, and w and b are gradually adjusted toward the fitted values.
Every 10 steps, we print the values of w and b. The result is as follows:
0 [0.31494665 0.33602586] [0.84270978]
10 [0.19601417 0.17301694] [0.47917289]
20 [0.23550016 0.18053198] [0.44838765]
30 [0.26029009 0.18700737] [0.43032286]
40 [0.27547371 0.19152154] [0.41897511]
50 [0.28481475 0.19454622] [0.41185945]
60 [0.29058149 0.19652548] [0.40740564]
70 [0.2941508 0.19780098] [0.40462157]
80 [0.29636407 0.1986146] [0.40288284]
90 [0.29773837 0.19913] [0.40179768]
100 [0.29859257 0.19945487] [0.40112072]
110 [0.29912385 0.199659] [0.40069857]
120 [0.29945445 0.19978693] [0.40043539]
130 [0.29966027 0.19986697] [0.40027133]
140 [0.29978839 0.19991697] [0.40016907]
150 [0.29986817 0.19994824] [0.40010536]
160 [0.29991791 0.1999677] [0.40006563]
170 [0.29994887 0.19997987] [0.40004089]
180 [0.29996812 0.19998746] [0.40002549]
190 [0.29998016 0.19999218] [0.40001586]
200 [0.29998764 0.19999513] [0.40000987]
As we can see, as training continues, w and b gradually approach the true values (w -> [0.3, 0.2] and b -> 0.4), and the fit becomes more and more accurate.
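As a final sketch (my addition, not in the original article), the recovered parameters can be used to predict z for a new point, say the hypothetical point (0.5, 0.5). These lines would be appended inside the with block above, before the session closes:

    w_fit, b_fit = sess.run(w), sess.run(b)  # roughly [0.3, 0.2] and [0.4]
    z_pred = w_fit[0] * 0.5 + w_fit[1] * 0.5 + b_fit[0]
    print(z_pred)  # about 0.65, matching 0.3 * 0.5 + 0.2 * 0.5 + 0.4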