tf.logging.set_verbosity(tf.logging.INFO) -- after adding this line and running the code, you will see additional log output such as: INFO:tensorflow:loss = 1.18812, step = 1
tf.app.run() enters main(); inside main(), the branch elif FLAGS.num_gpus == 1: lets TensorFlow find the GPU automatically (a sketch follows).
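A minimal sketch of what this entry point usually looks like; the flag definition and the device strings follow the resnet_main.py pattern and are assumptions, not taken from the original text:

import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)  # makes the INFO:tensorflow:loss = ... lines appear

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('num_gpus', 1, 'Number of GPUs used for training.')

def main(_):
    # pick a device based on the --num_gpus flag
    if FLAGS.num_gpus == 0:
        dev = '/cpu:0'
    elif FLAGS.num_gpus == 1:
        dev = '/gpu:0'  # the single GPU is selected automatically
    else:
        raise ValueError('Only 0 or 1 GPU is supported in this sketch.')
    tf.logging.info('training on device %s', dev)

if __name__ == '__main__':
    tf.app.run()  # parses the flags and then calls main()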
# Residual Network model parameters
hps = resnet_model.HParams(batch_size=batch_size,
                           num_classes=num_classes,
                           min_lrn_rate=0.0001,
                           lrn_rate=0.1,
                           num_residual_units=5,
                           use_bottleneck=False,
                           weight_decay_rate=0.0002,
                           relu_leakiness=0.1,
                           optimizer='mom')
Bottleneck:
Take an already trained model, extract the bottleneck features with it, and then feed them into the next, "small" model, which is just the fully connected layers.
The implementation steps are:
1. Take the weights of an already trained model;
2. Run the model and extract the bottleneck feature (the activation feature map of the last layer before the fully connected part, i.e. the output between the convolutional layers and the fully connected layers), pull it out separately, and save it;
3. Take the saved bottleneck-layer data, add a dense (fully connected) layer on top, and fine-tune.
Extracting the bottleneck feature of a single picture takes three steps (see the sketch after this list):
1. Load the picture;
2. Load the weights into the pre-trained model;
3. Run it to get the bottleneck feature.
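A minimal sketch of these three steps, assuming a Keras-style pretrained VGG16 as the already-trained model and cat.jpg as a placeholder image path (both are illustrative choices, not from the original text):

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# 1. load the picture and turn it into a batch of one
img = image.load_img('cat.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# 2. load the pre-trained weights, dropping the fully connected layers
model = VGG16(weights='imagenet', include_top=False)

# 3. the activation before the fully connected part is the bottleneck feature
bottleneck_feature = model.predict(x)  # shape (1, 7, 7, 512)

The saved bottleneck features can then be fed to a small dense model for fine-tuning, as described in step 3 of the implementation steps above.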
-- use_bottleneck decides what kind of residual unit is used (with bottleneck or without bottleneck)
For the file name queue, we use the tf.train.string_input_producer function: it takes a list of file names, and the system automatically converts the list into a file name queue.
Data has to be read before it can be computed on. Suppose reading takes 0.1 s and computing takes 0.9 s; then out of every second the GPU has 0.1 s with nothing to do, which noticeably reduces efficiency. After tf.train.start_queue_runners is called, the threads that fill the queue are started and the system is no longer "stuck": the computation can keep fetching data, and the whole program runs.
tf.FixedLengthRecordReader reads fixed-length byte records (which makes it appropriate for reading .bin files). Each call continues reading the file from where the last read stopped, instead of starting again from the beginning.
For example:
import tensorflow as tf

filenames = ['d:/tensorflow/test/txt1.txt']
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.FixedLengthRecordReader(record_bytes=4)
key, value = reader.read(filename_queue)
b = value
sess = tf.InteractiveSession()
tf.train.start_queue_runners(sess=sess)
print(sess.run(b))
print('\n')
print(sess.run(b))
-- each sess.run(b) prints the next 4 bytes of the content of the txt1.txt file
https://zhuanlan.zhihu.com/p/27238630 -- the TF reading mechanism: queues
The tf.decode_raw operation can be used to convert a string into a uint8 tensor.
tf.cast(x, dtype, name=None) converts the data format of x to dtype. For example, if the original x is bool, converting it to float turns it into a sequence of 0s and 1s.
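A small self-contained sketch of these two operations (the 4-byte string is an arbitrary example, standing in for one record read by the reader above):

import tensorflow as tf

raw = tf.constant(b'\x01\x00\x02\x03')     # a 4-byte string record
as_uint8 = tf.decode_raw(raw, tf.uint8)    # string -> uint8 tensor [1, 0, 2, 3]
as_float = tf.cast(as_uint8, tf.float32)   # uint8 -> float32 [1., 0., 2., 3.]

with tf.Session() as sess:
    print(sess.run(as_float))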
tf.slice() is easiest to understand from its return value. Suppose the shape of input is [a1, a2, a3], begin is [b1, b2, b3], and size is [s1, s2, s3]; then tf.slice() returns input[b1:b1+s1, b2:b2+s2, b3:b3+s3].
If si = -1, then along that axis the return value is everything remaining, i.e. input[b1:b1+s1, ..., bi:, ...].
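A short sketch of both cases (the tensor values are just an illustration):

import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])   # shape [3, 2, 3]

s1 = tf.slice(t, [1, 0, 0], [1, 1, 3])      # t[1:2, 0:1, 0:3] -> [[[3 3 3]]]
s2 = tf.slice(t, [1, 0, 0], [1, -1, 3])     # size -1 takes all that remains on that axis -> [[[3 3 3] [4 4 4]]]

with tf.Session() as sess:
    print(sess.run(s1))
    print(sess.run(s2))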
tf.contrib.framework.get_or_create_global_step() gets (or creates) the global step counter, i.e. how many training steps the model has run so far.
The bottleneck residual unit lets a residual network go deeper: for the same number of channels, a bottleneck unit uses far fewer parameters than the plain (naive) residual unit, and fewer parameters per unit means a correspondingly deeper architecture can be built.
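As a rough illustration of the saving (the channel counts are my own example, not from the original text), take a unit with 256 input and output channels:
a plain residual unit uses two 3x3 convolutions on 256 channels, i.e. 2 x (3 x 3 x 256 x 256) ≈ 1.18M weights;
a bottleneck unit uses 1x1 (256->64) + 3x3 (64->64) + 1x1 (64->256) = 16,384 + 36,864 + 16,384 ≈ 70K weights,
which is roughly 17 times fewer parameters for the same input/output width.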
A moving-average object is created with moving_average_decay = 0.999; the decay value actually used at each step is
min(decay, (1 + num_updates) / (10 + num_updates))
With the decay obtained this way, every parameter updated by the gradient step above is smoothed as follows:
shadow_variable = decay * shadow_variable + (1 - decay) * variable
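A self-contained sketch of how this moving average is typically wired up in TF 1.x (the toy variable, loss, and learning rate are made up for illustration):

import tensorflow as tf

w = tf.Variable(1.0)                           # a toy parameter
loss = tf.square(w - 3.0)
global_step = tf.train.get_or_create_global_step()
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

# decay = 0.999; with num_updates the effective decay is min(0.999, (1 + step) / (10 + step))
ema = tf.train.ExponentialMovingAverage(decay=0.999, num_updates=global_step)
with tf.control_dependencies([train_op]):
    train_with_ema = tf.group(ema.apply([w]))  # gradient step, then shadow_variable update

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(5):
        sess.run(train_with_ema)
    print(sess.run(ema.average(w)))            # the smoothed shadow value of w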
c4-ResNet-TF -- Small Elephant CV course code