TensorFlow Multithreading Settings
I. Setting multithreading through ConfigProto

(For the specific parameters and their descriptions, see tensorflow/core/protobuf/config.proto.) When tf.ConfigProto() is initialized, you can set parameters that control how many threads each operator (op) uses internally and how the session's thread pools are sized. The three main parameters are listed below, followed by a minimal sketch of how they are set:
1. intra_op_parallelism_threads controls parallelism inside a single op. When an op can be parallelized internally, such as matrix multiplication or reduce_sum, you can set intra_op_parallelism_threads to run it with multiple threads.

2. inter_op_parallelism_threads controls parallelism across multiple ops. When several ops are independent of one another (there is no direct path between them in the graph), TensorFlow will try to compute them in parallel, using a thread pool whose size is controlled by the inter_op_parallelism_threads parameter. The first session created fixes the number of threads for all future sessions, unless the session_inter_op_thread_pool option is configured.

3. session_inter_op_thread_pool configures the session thread pools. If a pool has num_threads set to 0, the inter_op_parallelism_threads option is used instead.
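A minimal sketch of how these parameters are passed (TF 1.x API; the thread counts below are illustrative, not recommendations):

import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=2,  # threads available inside one op, e.g. matmul
    inter_op_parallelism_threads=4,  # threads for scheduling independent ops
)

# session_inter_op_thread_pool is a repeated field on ConfigProto; a pool
# whose num_threads is 0 falls back to inter_op_parallelism_threads.
# Shown commented out, since most code only sets the two fields above:
# pool = config.session_inter_op_thread_pool.add()
# pool.num_threads = 2

with tf.Session(config=config) as sess:
    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    # matmul can use up to 2 intra-op threads; independent ops could run
    # concurrently on the 4 inter-op threads.
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))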

II. Setting multithreading when data is read through a queue

(For the specific functions and their descriptions, see tensorflow/python/training/input.py.)
1. The following batching functions let a single reader read with multiple threads by setting num_threads (an example follows the signatures):
1) batch(tensors, batch_size, num_threads=1, capacity=32,
         enqueue_many=False, shapes=None, dynamic_pad=False,
         allow_smaller_final_batch=False, shared_name=None, name=None)


2) maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32,
               enqueue_many=False, shapes=None, dynamic_pad=False,
               allow_smaller_final_batch=False, shared_name=None, name=None)


3) shuffle_batch(tensors, batch_size, capacity, min_after_dequeue,
                 num_threads=1, seed=None, enqueue_many=False, shapes=None,
                 allow_smaller_final_batch=False, shared_name=None, name=None)

4) maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue,
                       keep_input, num_threads=1, seed=None,
                       enqueue_many=False, shapes=None,
                       allow_smaller_final_batch=False, shared_name=None,
                       name=None)

Example:

import tensorflow as tf

filenames = ['a.csv', 'b.csv', 'c.csv']
# Generate a first-in-first-out queue and a QueueRunner to produce the
# filename queue.
filename_queue = tf.train.string_input_producer(filenames, shuffle=False)

# Define the reader and the decoder.
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
example, label = tf.decode_csv(value, record_defaults=[['null'], ['null']])

# tf.train.batch() adds a sample queue and a QueueRunner to the graph.
# Data read by the reader and decoded by the decoder enters this queue,
# and batches are then dequeued from it.
# tf.train.batch() has only one reader, but num_threads can be set.
example_batch, label_batch = tf.train.batch([example, label], batch_size=5)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(10):  # the loop bound was garbled in the source; 10 is illustrative
        e_val, l_val = sess.run([example_batch, label_batch])
        print(e_val, l_val)
    coord.request_stop()
    coord.join(threads)
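The example above uses the default num_threads=1. To actually read with multiple threads on the single reader, pass num_threads; this small variation (my sketch, not from the original article, reusing the example and label tensors defined above; the thread count and capacity are arbitrary) has four threads share the one reader/decoder pair when filling the batch queue:

# Four threads enqueue decoded samples from the single reader/decoder.
example_batch, label_batch = tf.train.batch(
    [example, label], batch_size=5, num_threads=4, capacity=32)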


2. The following functions let you set up multiple readers by creating multiple decoder/reader pairs, where each reader uses one thread:
1) batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False,
              shapes=None, dynamic_pad=False, allow_smaller_final_batch=False,
              shared_name=None, name=None)


2) maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32,
                    enqueue_many=False, shapes=None, dynamic_pad=False,
                    allow_smaller_final_batch=False, shared_name=None,
                    name=None)

3) shuffle_batch_join(tensors_list, batch_size, capacity,
                      min_after_dequeue, seed=None, enqueue_many=False,
                      shapes=None, allow_smaller_final_batch=False,
                      shared_name=None, name=None)


4) maybe_shuffle_batch_join(tensors_list, batch_size, capacity,
                            min_after_dequeue, keep_input, seed=None,
                            enqueue_many=False, shapes=None,
                            allow_smaller_final_batch=False, shared_name=None,
                            name=None)

Example:

import tensorflow as tf

filenames = ['a.csv', 'b.csv', 'c.csv']
# Generate a first-in-first-out queue and a QueueRunner to produce the
# filename queue.
filename_queue = tf.train.string_input_producer(filenames, shuffle=False)

# Define the reader.
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

# Define multiple decoders, each connected to a reader, so there are
# multiple readers (here the number of decoders and readers is 2).
example_list = [tf.decode_csv(value, record_defaults=[['null'], ['null']])
                for _ in range(2)]

# tf.train.batch_join() adds a sample queue and a QueueRunner to the graph.
# Data read by the readers and decoded by the decoders enters this queue,
# and batches are then dequeued from it.
# With tf.train.batch_join(), multiple readers read data in parallel;
# each reader uses one thread.
example_batch, label_batch = tf.train.batch_join(example_list, batch_size=5)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(10):  # the loop bound was garbled in the source; 10 is illustrative
        e_val, l_val = sess.run([example_batch, label_batch])
        print(e_val, l_val)
    coord.request_stop()
    coord.join(threads)
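Note that this example calls reader.read() only once and reuses the same value tensor in both decoders, so the list elements effectively share one reader. A hedged alternative sketch, following the pattern shown in the TF 1.x input-pipeline documentation, gives each element of tensors_list its own reader so every thread owns an independent reader (read_one_example is a hypothetical helper name, not from the article):

def read_one_example(queue):
    # Hypothetical helper: one reader/decoder pair per call;
    # batch_join runs one thread per pair.
    reader = tf.TextLineReader()
    _, value = reader.read(queue)
    return tf.decode_csv(value, record_defaults=[['null'], ['null']])

example_list = [read_one_example(filename_queue) for _ in range(2)]
example_batch, label_batch = tf.train.batch_join(example_list, batch_size=5)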
