TensorFlow: creating variables and looking them up by name
Environment: Ubuntu 14.04, TensorFlow 1.4 (built from source with Bazel), Anaconda Python 3.6
There are two main ways to declare variables: tf.Variable and tf.get_variable. The main differences between the two are:
(1) tf.Variable is a class with many attributes and member functions, while tf.get_variable is a function;
(2) tf.Variable always creates a new, unique variable: if the given name already exists, TensorFlow automatically renames the new variable;
(3) tf.get_variable can create shared variables. By default it checks the variable name and raises an error on a duplicate; when the variable is declared inside a variable scope that allows reuse, it can be fetched again and shared (for example, parameter sharing in RNNs), as the sketch below shows.
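Before the full example, here is a minimal sketch of the duplicate-name behavior on a fresh default graph (the names v and w are only illustrative): tf.Variable silently uniquifies the name, while tf.get_variable raises an error when the enclosing scope does not allow reuse.

import tensorflow as tf

# tf.Variable: a duplicate name is uniquified automatically.
a = tf.Variable(0.0, name='v')   # becomes 'v:0'
b = tf.Variable(0.0, name='v')   # becomes 'v_1:0', no error
print(a.name, b.name)

# tf.get_variable: a duplicate name raises ValueError unless reuse is enabled.
c = tf.get_variable('w', shape=[1])
try:
    d = tf.get_variable('w', shape=[1])
except ValueError as err:
    print('duplicate name rejected:', type(err).__name__)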
The following is a fuller example program:
import tensorflow as tf

with tf.variable_scope('scope1', reuse=tf.AUTO_REUSE) as scope1:
    x1 = tf.Variable(tf.ones([1]), name='x1')
    x2 = tf.Variable(tf.zeros([1]), name='x1')
    y1 = tf.get_variable('y1', initializer=1.0)
    y2 = tf.get_variable('y1', initializer=0.0)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(x1.name, x1.eval())
    print(x2.name, x2.eval())
    print(y1.name, y1.eval())
    print(y2.name, y2.eval())
Output result:
scope1/x1:0 [ 1.]
scope1/x1_1:0 [ 0.]
scope1/y1:0 1.0
scope1/y1:0 1.0

Because the name 'x1' was already taken, the second tf.Variable call was automatically renamed to 'x1_1', whereas the second tf.get_variable call simply returned the existing 'y1' variable (hence its value is still 1.0) thanks to reuse=tf.AUTO_REUSE.
1. tf.Variable(...)
tf.Variable(...) creates a new variable with the given initial value. The variable is added to the graph collections listed in collections, which defaults to [GraphKeys.GLOBAL_VARIABLES].
If trainable is True (the default), the variable is also added to the graph collection GraphKeys.TRAINABLE_VARIABLES (see the sketch after the constructor signature below).
# tf.Variable
__init__(
    initial_value=None,
    trainable=True,
    collections=None,
    validate_shape=True,
    caching_device=None,
    name=None,
    variable_def=None,
    dtype=None,
    expected_shape=None,
    import_scope=None,
    constraint=None
)
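As a small sketch of the collections behavior described above (the variable names w and b are illustrative), a variable created with trainable=False still appears in GraphKeys.GLOBAL_VARIABLES but is left out of GraphKeys.TRAINABLE_VARIABLES:

import tensorflow as tf

w = tf.Variable(tf.zeros([2, 2]), name='w')                # trainable=True by default
b = tf.Variable(tf.zeros([2]), name='b', trainable=False)  # excluded from TRAINABLE_VARIABLES

print([v.name for v in tf.global_variables()])     # ['w:0', 'b:0']
print([v.name for v in tf.trainable_variables()])  # ['w:0']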
2. tf.get_variable(...)
tf.get_variable(...) returns a variable in one of two situations:
it creates a new variable using the specified initializer;
when reuse is enabled, it looks up and returns an existing variable previously created by tf.get_variable, found by its name (see the sketch after the signature below).
get_variable(
    name,
    shape=None,
    dtype=None,
    initializer=None,
    regularizer=None,
    trainable=True,
    collections=None,
    caching_device=None,
    partitioner=None,
    validate_shape=True,
    use_resource=None,
    custom_getter=None,
    constraint=None
)
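A brief sketch of the two situations (the scope name 'layer' and the initializer are illustrative): the first call creates the variable from the initializer, and a second call inside a reusing scope returns the same object.

import tensorflow as tf

with tf.variable_scope('layer'):
    w = tf.get_variable('w', shape=[3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1))

with tf.variable_scope('layer', reuse=True):
    w_again = tf.get_variable('w')  # looked up by the name 'layer/w'

print(w is w_again)  # True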
3. Search for variables by name
When a variable is created, TensorFlow names it automatically even if we do not specify a name, so we can conveniently look variables up by name. This is useful for fetching parameters and for fine-tuning models.
Example 1:
You can search the variable list returned by tf.global_variables() by name; variables created with either tf.Variable or tf.get_variable can be found this way.
import tensorflow as tf

x = tf.Variable(1, name='x')
y = tf.get_variable(name='y', shape=[1, 2])

for var in tf.global_variables():
    if var.name == 'x:0':
        print(var)
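Building on this, a common fine-tuning pattern is to filter tf.global_variables() by name and pass the result to a Saver. The sketch below is only an illustration; the 'conv/' name prefix and the checkpoint path are hypothetical.

import tensorflow as tf

conv_w = tf.Variable(tf.zeros([3, 3]), name='conv/weights')
fc_w = tf.Variable(tf.zeros([3, 2]), name='fc/weights')

# Restore only the variables whose names start with the hypothetical 'conv/' prefix.
conv_vars = [v for v in tf.global_variables() if v.name.startswith('conv/')]
saver = tf.train.Saver(var_list=conv_vars)
# saver.restore(sess, '/path/to/pretrained.ckpt')  # hypothetical checkpoint path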
Example 2:
graph.get_tensor_by_name() can also be used to fetch variables created by tf.Variable or tf.get_variable.
Note that what you get back is a Tensor, not a Variable; therefore x is not the same object as x1.
import tensorflow as tf

x = tf.Variable(1, name='x')
y = tf.get_variable(name='y', shape=[1, 2])

graph = tf.get_default_graph()
x1 = graph.get_tensor_by_name("x:0")
y1 = graph.get_tensor_by_name("y:0")
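To make the distinction concrete, a quick check (continuing the snippet above) shows that the lookup returns a Tensor rather than the Variable object itself:

print(type(x))   # a tf.Variable instance
print(type(x1))  # a tf.Tensor instance
print(x is x1)   # False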
Example 3:
A variable created by tf.get_variable can be reused: with reuse enabled, the existing variable is obtained directly.
with tf.variable_scope("foo"): bar1 = tf.get_variable("bar", (2,3)) # createwith tf.variable_scope("foo", reuse=True): bar2 = tf.get_variable("bar") # reusewith tf.variable_scope("", reuse=True): # root variable scope bar3 = tf.get_variable("foo/bar") # reuse (equivalent to the above)print((bar1 is bar2) and (bar2 is bar3))
That is all for this article. I hope it is helpful for your learning.