TensorFlow Learning: string_input_producer, QueueRunner, Coordinator, and start_queue_runners



The methods covered in this article are:

tf.train.string_input_producer

tf.train.input_producer

tf.train.QueueRunner

tf.train.add_queue_runner

tf.train.Coordinator

tf.train.start_queue_runners


string_input_producer

The tf.train.string_input_producer function creates an input queue from the provided list of files (string_tensor).

string_input_producer(
    string_tensor,
    num_epochs=None,
    shuffle=True,
    seed=None,
    capacity=32,
    shared_name=None,
    name=None,
    cancel_op=None
)


The string_tensor parameter is the list of files to produce.

The num_epochs parameter limits the maximum number of epochs for which the file list is loaded. When it is None, the files are rejoined to the queue each time the input queue has processed them all, so the initially supplied file list cycles indefinitely. When it is set, a local counter tracks the epochs, and an OutOfRangeError is raised once the limit is reached (initialize the counter with local_variables_initializer()).

When the shuffle parameter is True, the files are shuffled within each epoch before joining the queue. Randomizing the file order generally benefits training by removing ordering bias from the data; set it to False only if the original order must be preserved.

The capacity parameter sets the capacity of the queue.
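The behavior described above can be sketched with Python's standard library. This is a toy analogy, not TensorFlow's implementation; the function name fill_filename_queue is invented for illustration:

```python
import queue
import random

def fill_filename_queue(filenames, num_epochs=None, shuffle=True, seed=None,
                        capacity=32):
    """Toy stand-in for the behavior of tf.train.string_input_producer."""
    if not filenames:
        raise ValueError("requires a non-null input list")
    q = queue.Queue(maxsize=capacity)  # bounded, like `capacity`
    rng = random.Random(seed)
    epoch = 0
    while num_epochs is None or epoch < num_epochs:
        order = list(filenames)
        if shuffle:
            rng.shuffle(order)  # shuffled within each epoch
        for name in order:
            q.put(name)  # blocks when full; TF parks a background thread here
        epoch += 1
        if num_epochs is None:
            break  # the real op cycles forever; the sketch stops after one pass
    return q

q = fill_filename_queue(["a.csv", "b.csv"], num_epochs=2, shuffle=False)
items = [q.get() for _ in range(4)]
print(items)  # ['a.csv', 'b.csv', 'a.csv', 'b.csv']
```

After num_epochs epochs the real producer raises OutOfRangeError on a further dequeue; the sketch simply leaves the queue empty.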


tf.train.string_input_producer documentation: https://www.tensorflow.org/api_docs/python/tf/train/string_input_producer

Source location: tensorflow/python/training/input.py

def string_input_producer(string_tensor,
                          num_epochs=None,
                          shuffle=True,
                          seed=None,
                          capacity=32,
                          shared_name=None,
                          name=None,
                          cancel_op=None):
  """Output strings (e.g. filenames) to a queue for an input pipeline.

  Note: if `num_epochs` is not `None`, this function creates local counter
  `epochs`. Use `local_variables_initializer()` to initialize local variables.

  Args:
    string_tensor: A 1-D string tensor with the strings to produce.
    num_epochs: An integer (optional). If specified, `string_input_producer`
      produces each string from `string_tensor` `num_epochs` times before
      generating an `OutOfRange` error. If not specified,
      `string_input_producer` can cycle through the strings in
      `string_tensor` an unlimited number of times.
    shuffle: Boolean. If true, the strings are randomly shuffled within each
      epoch.
    seed: An integer (optional). Seed used if shuffle == True.
    capacity: An integer. Sets the queue capacity.
    shared_name: (optional). If set, this queue will be shared under the given
      name across multiple sessions. All sessions open to the device which has
      this queue will be able to access it via the shared_name. Using this in
      a distributed setting means each name will only be seen by one of the
      sessions which has access to this operation.
    name: A name for the operations (optional).
    cancel_op: Cancel op for the queue (optional).

  Returns:
    A queue with the output strings.  A `QueueRunner` for the Queue
    is added to the current `Graph`'s `QUEUE_RUNNER` collection.

  Raises:
    ValueError: If the string_tensor is a null Python list.  At runtime,
      will fail with an assertion if string_tensor becomes a null tensor.
  """
  not_null_err = "string_input_producer requires a non-null input tensor"
  if not isinstance(string_tensor, ops.Tensor) and not string_tensor:
    raise ValueError(not_null_err)

  with ops.name_scope(name, "input_producer", [string_tensor]) as name:
    string_tensor = ops.convert_to_tensor(string_tensor, dtype=dtypes.string)
    with ops.control_dependencies([
        control_flow_ops.Assert(
            math_ops.greater(array_ops.size(string_tensor), 0),
            [not_null_err])]):
      string_tensor = array_ops.identity(string_tensor)
    return input_producer(
        input_tensor=string_tensor,
        element_shape=[],
        num_epochs=num_epochs,
        shuffle=shuffle,
        seed=seed,
        capacity=capacity,
        shared_name=shared_name,
        name=name,
        summary_name="fraction_of_%d_full" % capacity,
        cancel_op=cancel_op)


As the definition above shows, string_input_producer converts string_tensor (the list of file paths) into input_tensor and then calls the input_producer function, passing the other parameters through unchanged.


input_producer

tf.train.input_producer documentation: https://www.tensorflow.org/api_docs/python/tf/train/input_producer

def input_producer(input_tensor,
                   element_shape=None,
                   num_epochs=None,
                   shuffle=True,
                   seed=None,
                   capacity=32,
                   shared_name=None,
                   summary_name=None,
                   name=None,
                   cancel_op=None):
  """Output the rows of `input_tensor` to a queue for an input pipeline.

  Note: if `num_epochs` is not `None`, this function creates local counter
  `epochs`. Use `local_variables_initializer()` to initialize local variables.

  Args:
    input_tensor: A tensor with the rows to produce. Must be at least
      one-dimensional. Must either have a fully-defined shape, or
      `element_shape` must be defined.
    element_shape: (Optional.) A `TensorShape` representing the shape of a
      row of `input_tensor`, if it cannot be inferred.
    num_epochs: (Optional.) An integer. If specified `input_producer` produces
      each row of `input_tensor` `num_epochs` times before generating an
      `OutOfRange` error. If not specified, `input_producer` can cycle through
      the rows of `input_tensor` an unlimited number of times.
    shuffle: (Optional.) A boolean. If true, the rows are randomly shuffled
      within each epoch.
    seed: (Optional.) An integer. The seed to use if `shuffle` is true.
    capacity: (Optional.) The capacity of the queue to be used for buffering
      the input.
    shared_name: (Optional.) If set, this queue will be shared under the
      given name across multiple sessions.
    summary_name: (Optional.) If set, a scalar summary for the current queue
      size will be generated, using this name as part of the tag.
    name: (Optional.) A name for queue.
    cancel_op: (Optional.) Cancel op for the queue.

  Returns:
    A queue with the output rows.  A `QueueRunner` for the queue is
    added to the current `QUEUE_RUNNER` collection of the current graph.

  Raises:
    ValueError: If the shape of the input cannot be inferred from the
      arguments.
  """
  with ops.name_scope(name, "input_producer", [input_tensor]):
    input_tensor = ops.convert_to_tensor(input_tensor, name="input_tensor")
    element_shape = input_tensor.get_shape()[1:].merge_with(element_shape)
    if not element_shape.is_fully_defined():
      raise ValueError("Either `input_tensor` must have a fully defined shape "
                       "or `element_shape` must be specified")

    if shuffle:
      input_tensor = random_ops.random_shuffle(input_tensor, seed=seed)

    input_tensor = limit_epochs(input_tensor, num_epochs)

    q = data_flow_ops.FIFOQueue(capacity=capacity,
                                dtypes=[input_tensor.dtype.base_dtype],
                                shapes=[element_shape],
                                shared_name=shared_name, name=name)
    enq = q.enqueue_many([input_tensor])
    queue_runner.add_queue_runner(
        queue_runner.QueueRunner(q, [enq], cancel_op=cancel_op))
    if summary_name is not None:
      summary.scalar(summary_name,
                     math_ops.cast(q.size(), dtypes.float32) * (1. / capacity))
    return q

The parameters of input_producer have meanings similar to those of string_input_producer. The handling of element_shape and shuffle in the definition above matches the parameter descriptions at the beginning of this article, so it is not repeated here.

The important part comes later in the code.


First, a first-in, first-out queue is created, with a capacity of capacity:

q = data_flow_ops.FIFOQueue(capacity=capacity,
                            dtypes=[input_tensor.dtype.base_dtype],
                            shapes=[element_shape],
                            shared_name=shared_name, name=name)


Then, the operation that enqueues input_tensor is defined:

enq = q.enqueue_many([input_tensor])


Next, a QueueRunner is created to run the previously defined enqueue operation on multiple threads, and add_queue_runner adds that QueueRunner to TensorFlow's tf.GraphKeys.QUEUE_RUNNERS collection:

queue_runner.add_queue_runner(
    queue_runner.QueueRunner(q, [enq], cancel_op=cancel_op))
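The three steps can be mimicked with the standard library's queue and threading modules. This is purely illustrative; the helper name enqueue_many below is invented for the sketch and is not TensorFlow's op:

```python
import queue
import threading

# Step 1: a bounded FIFO queue, analogous to data_flow_ops.FIFOQueue.
q = queue.Queue(maxsize=8)

# Step 2: an "enqueue op" that pushes a batch of rows into the queue.
rows = list(range(4))
def enqueue_many():
    for row in rows:
        q.put(row)

# Step 3: run the enqueue op on a background thread, as a QueueRunner would.
t = threading.Thread(target=enqueue_many, daemon=True)
t.start()

consumed = [q.get() for _ in range(4)]  # the training loop dequeues
t.join()
print(consumed)  # [0, 1, 2, 3]
```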


QueueRunner

tf.train.QueueRunner documentation: https://www.tensorflow.org/api_docs/python/tf/train/QueueRunner

The tf.train.QueueRunner class is typically used to start multiple threads that operate on the same queue.

  def __init__(self, queue=None, enqueue_ops=None, close_op=None,
               cancel_op=None, queue_closed_exception_types=None,
               queue_runner_def=None, import_scope=None):
    """Create a QueueRunner.

    On construction the `QueueRunner` adds an op to close the queue.
    That op will be run if the enqueue ops raise exceptions.

    When you later call the `create_threads()` method, the `QueueRunner` will
    create one thread for each op in `enqueue_ops`.  Each thread will run its
    enqueue op in parallel with the other threads.  The enqueue ops do not
    have to all be the same op, but it is expected that they all enqueue
    tensors in `queue`.

    Args:
      queue: A `Queue`.
      enqueue_ops: List of enqueue ops to run in threads later.
      close_op: Op to close the queue. Pending enqueue ops are preserved.
      cancel_op: Op to close the queue and cancel pending enqueue ops.
      queue_closed_exception_types: Optional tuple of exception types that
        indicate that the queue has been closed when raised during an enqueue
        operation.  Defaults to `(tf.errors.OutOfRangeError,)`.  Another
        common case includes `(tf.errors.OutOfRangeError,
        tf.errors.CancelledError)`, when some of the enqueue ops may dequeue
        from other queues.
      queue_runner_def: Optional `QueueRunnerDef` protocol buffer. If
        specified, recreates the QueueRunner from its contents.
        `queue_runner_def` and the other arguments are mutually exclusive.
      import_scope: Optional `string`. Name scope to add. Only used when
        initializing from protocol buffer.

    Raises:
      ValueError: If both `queue_runner_def` and `queue` are both specified.
      ValueError: If `queue` or `enqueue_ops` are not provided when not
        restoring from `queue_runner_def`.
    """
    if queue_runner_def:
      if queue or enqueue_ops:
        raise ValueError("queue_runner_def and queue are mutually exclusive.")
      self._init_from_proto(queue_runner_def, import_scope=import_scope)
    else:
      self._init_from_args(
          queue=queue, enqueue_ops=enqueue_ops,
          close_op=close_op, cancel_op=cancel_op,
          queue_closed_exception_types=queue_closed_exception_types)
    # Protects the count of runs.
    self._lock = threading.Lock()
    # A map from session object to the number of outstanding queue runner
    # threads for that session.
    self._runs_per_session = weakref.WeakKeyDictionary()
    # List of exceptions raised by the running threads.
    self._exceptions_raised = []
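The one-thread-per-enqueue-op behavior can be sketched with the standard library (again an analogy; make_enqueue_op is an invented helper, and the real enqueue ops are graph operations, not Python callables):

```python
import queue
import threading

q = queue.Queue()

def make_enqueue_op(values):
    """Build a callable that enqueues `values` into the shared queue."""
    def op():
        for v in values:
            q.put(v)
    return op

# One thread per enqueue op, all feeding the same queue, run in parallel.
enqueue_ops = [make_enqueue_op([1, 2]), make_enqueue_op([3, 4])]
threads = [threading.Thread(target=op) for op in enqueue_ops]
for t in threads:
    t.start()
for t in threads:
    t.join()

items = sorted(q.get() for _ in range(4))
print(items)  # [1, 2, 3, 4]
```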

add_queue_runner

tf.train.add_queue_runner documentation: https://www.tensorflow.org/api_docs/python/tf/train/add_queue_runner

add_queue_runner(
    qr,
    collection=tf.GraphKeys.QUEUE_RUNNERS
)

Adds a QueueRunner to the specified collection; the default is the tf.GraphKeys.QUEUE_RUNNERS collection.

def add_queue_runner(qr, collection=ops.GraphKeys.QUEUE_RUNNERS):
  """Adds a `QueueRunner` to a collection in the graph.

  When building a complex model that uses many queues it is often difficult
  to gather all the queue runners that need to be run.  This convenience
  function allows you to add a queue runner to a well known collection in
  the graph.

  The companion method `start_queue_runners()` can be used to start threads
  for all the collected queue runners.

  Args:
    qr: A `QueueRunner`.
    collection: A `GraphKey` specifying the graph collection to add the
      queue runner to.  Defaults to `GraphKeys.QUEUE_RUNNERS`.
  """
  ops.add_to_collection(collection, qr)

Coordinator

tf.train.Coordinator documentation: https://www.tensorflow.org/api_docs/python/tf/train/Coordinator

Coordinator is a class that coordinates multiple threads so that they stop together. It provides three main methods: should_stop, request_stop, and join.

tensorflow/python/training/coordinator.py

  def __init__(self, clean_stop_exception_types=None):
    """Create a new Coordinator.

    Args:
      clean_stop_exception_types: Optional tuple of exception types that
        should cause a clean stop of the coordinator. If an exception of one
        of these types is reported to `request_stop(ex)` the coordinator will
        behave as if `request_stop(None)` was called.  Defaults to
        `(tf.errors.OutOfRangeError,)` which is used by input queues to
        signal the end of input. When feeding training data from a Python
        iterator it is common to add `StopIteration` to this list.
    """
    if clean_stop_exception_types is None:
      clean_stop_exception_types = (errors.OutOfRangeError,)
    self._clean_stop_exception_types = tuple(clean_stop_exception_types)
    # Protects all attributes.
    self._lock = threading.Lock()
    # Event set when threads must stop.
    self._stop_event = threading.Event()
    # Python exc_info to report.
    # If not None, it should hold the returned value of sys.exc_info(), which
    # is a tuple containing exception (type, value, traceback).
    self._exc_info_to_raise = None
    # True if we have called join() already.
    self._joined = False
    # Set of threads registered for joining when join() is called.  These
    # threads will be joined in addition to the threads passed to the join()
    # call.  It's OK if threads are both registered and passed to the join()
    # call.
    self._registered_threads = set()
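The stop protocol can be sketched with a threading.Event, which is essentially what the real class uses internally. MiniCoordinator below is an invented toy, not tf.train.Coordinator, and it omits exception handling and thread registration details:

```python
import threading
import time

class MiniCoordinator:
    """Toy coordinator exposing the same three methods as tf.train.Coordinator."""
    def __init__(self):
        self._stop_event = threading.Event()
        self._threads = []

    def should_stop(self):
        return self._stop_event.is_set()

    def request_stop(self):
        self._stop_event.set()

    def register(self, thread):
        self._threads.append(thread)

    def join(self):
        for t in self._threads:
            t.join()

counts = []
def worker(coord):
    n = 0
    while not coord.should_stop():  # poll the shared stop flag each iteration
        n += 1
        time.sleep(0.001)
    counts.append(n)

coord = MiniCoordinator()
t = threading.Thread(target=worker, args=(coord,))
coord.register(t)
t.start()
time.sleep(0.01)
coord.request_stop()  # ask all workers to stop together
coord.join()          # wait for them to finish
print(len(counts))  # 1
```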

start_queue_runners

tf.train.start_queue_runners documentation: https://www.tensorflow.org/api_docs/python/tf/train/start_queue_runners

By default, the start_queue_runners function starts all QueueRunners in the tf.GraphKeys.QUEUE_RUNNERS collection, matching the add_queue_runner operation, which registers into the same collection.

tensorflow/python/training/queue_runner_impl.py

def start_queue_runners(sess=None, coord=None, daemon=True, start=True,
                        collection=ops.GraphKeys.QUEUE_RUNNERS):
  """Starts all queue runners collected in the graph.

  This is a companion method to `add_queue_runner()`.  It just starts
  threads for all queue runners collected in the graph.  It returns
  the list of all threads.

  Args:
    sess: `Session` used to run the queue ops.  Defaults to the
      default session.
    coord: Optional `Coordinator` for coordinating the started threads.
    daemon: Whether the threads should be marked as `daemons`, meaning
      they don't block program exit.
    start: Set to `False` to only create the threads, not start them.
    collection: A `GraphKey` specifying the graph collection to
      get the queue runners from.  Defaults to `GraphKeys.QUEUE_RUNNERS`.

  Raises:
    ValueError: if `sess` is None and there isn't any default session.
    TypeError: if `sess` is not a `tf.Session` object.

  Returns:
    A list of threads.
  """
  if sess is None:
    sess = ops.get_default_session()
    if not sess:
      raise ValueError("Cannot start queue runners: No default session is "
                       "registered. Use `with sess.as_default()` or pass an "
                       "explicit session to tf.start_queue_runners(sess=sess)")

  if not isinstance(sess, session.SessionInterface):
    # Following check is due to backward compatibility. (b/62061352)
    if sess.__class__.__name__ in [
        "MonitoredSession", "SingularMonitoredSession"]:
      return []
    raise TypeError("sess must be a `tf.Session` object. "
                    "Given class: {}".format(sess.__class__))

  with sess.graph.as_default():
    threads = []
    for qr in ops.get_collection(collection):
      threads.extend(qr.create_threads(sess, coord=coord, daemon=daemon,
                                       start=start))
  return threads
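The register-then-start pairing between add_queue_runner and start_queue_runners can be sketched with a plain module-level list standing in for the graph collection. Everything below is an invented analogy; the real functions operate on graph collections and sessions:

```python
import queue
import threading

# A module-level collection, standing in for tf.GraphKeys.QUEUE_RUNNERS.
_queue_runners = []

def add_queue_runner(runner):
    """Register a runner (a zero-argument callable here) in the collection."""
    _queue_runners.append(runner)

def start_queue_runners():
    """Start one daemon thread per registered runner; return the threads."""
    threads = []
    for runner in _queue_runners:
        t = threading.Thread(target=runner, daemon=True)
        t.start()
        threads.append(t)
    return threads

q = queue.Queue()
add_queue_runner(lambda: q.put("x"))
add_queue_runner(lambda: q.put("y"))
threads = start_queue_runners()
for t in threads:
    t.join()
print(sorted([q.get(), q.get()]))  # ['x', 'y']
```

This mirrors the division of labor in the real API: registration happens while the graph is built, and all registered runners are started together once a session exists.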

