Recently, while looking through the TensorFlow API, we noticed that it offers two kinds of RNN interface: static RNN and dynamic RNN. The difference between the two is explained in this answer: https://stackoverflow.com/questions/39734146/whats-the-difference-between-tensorflow-dynamic-rnn-and-rnn
The original answer puts it quite clearly:
tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps, you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you're unable to pass in longer sequences (> 200) than you've originally specified. tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.
In other words: tf.nn.rnn unrolls the graph for a fixed sequence length. If your inputs have 200 steps, tf.nn.rnn builds a static graph with 200 RNN steps. First, building that graph is slow. Second, you cannot pass in sequences longer than the length originally specified (> 200). tf.nn.dynamic_rnn solves this: when the graph is executed it uses a while loop to build the recurrence dynamically, so graph creation is faster and batches of variable size can be fed.
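The practical payoff of the while-loop construction is that tf.nn.dynamic_rnn can also be told the true length of each sequence in a batch via its sequence_length argument. As a minimal sketch (not from the original post; the placeholder names and sizes are my own, assuming the TensorFlow 1.x API used throughout this post):

import tensorflow as tf

# Batch-major input: (batch_size, max_time, n_inputs); max_time can differ per batch
x = tf.placeholder(tf.float32, [None, None, 28])
# True length of every sequence in the batch
seq_len = tf.placeholder(tf.int32, [None])

cell = tf.nn.rnn_cell.BasicLSTMCell(128)
# dynamic_rnn builds a tf.while_loop at run time, so the time dimension is not
# baked into the graph; outputs beyond seq_len are zero-padded
outputs, state = tf.nn.dynamic_rnn(cell, x, sequence_length=seq_len, dtype=tf.float32)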
With the difference clear, let's look at how the two are written in code:
Static RNN:
def RNN(_X, weights, biases):
    # Approach 1: static_rnn
    _X = tf.transpose(_X, [1, 0, 2])              # permute n_steps and batch_size
    _X = tf.reshape(_X, [-1, n_inputs])           # (n_steps*batch_size, n_inputs)
    _X = tf.matmul(_X, weights['in']) + biases['in']
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0)
    _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    _X = tf.split(_X, n_steps, 0)                 # n_steps * (batch_size, n_hidden)
    outputs, states = tf.nn.static_rnn(lstm_cell, _X, initial_state=_init_state)

    # Approach 2: dynamic_rnn (commented out here, see the next version)
    # lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units)
    # outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)
    # outputs = tf.transpose(outputs, [1, 0, 2])

    # take the output of the last time step
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
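The snippet above relies on variables defined elsewhere in the original program (n_inputs, n_steps, n_hidden_units, batch_size, weights, biases). A plausible setup, assuming the usual MNIST row-by-row example this code resembles (the concrete values below are my assumption, not taken from the post):

n_inputs = 28          # features fed at each time step
n_steps = 28           # fixed sequence length required by static_rnn
n_hidden_units = 128   # LSTM state size
n_classes = 10
batch_size = 128

x = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])
weights = {
    'in':  tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes])),
}
biases = {
    'in':  tf.Variable(tf.constant(0.1, shape=[n_hidden_units])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes])),
}

pred = RNN(x, weights, biases)   # the graph is unrolled to exactly n_steps steps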
Dynamic RNN:
def RNN(_X, weights, biases):
    # Approach 1: static_rnn (commented out here, see the previous version)
    # _X = tf.transpose(_X, [1, 0, 2])            # permute n_steps and batch_size
    # _X = tf.reshape(_X, [-1, n_inputs])         # (n_steps*batch_size, n_inputs)
    # _X = tf.matmul(_X, weights['in']) + biases['in']
    # lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0)
    # _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    # _X = tf.split(_X, n_steps, 0)               # n_steps * (batch_size, n_hidden)
    # outputs, states = tf.nn.static_rnn(lstm_cell, _X, initial_state=_init_state)

    # Approach 2: dynamic_rnn
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units)
    outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)
    outputs = tf.transpose(outputs, [1, 0, 2])    # (n_steps, batch_size, n_hidden)

    # take the output of the last time step
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
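One detail worth noting: tf.nn.dynamic_rnn returns outputs in batch-major form, (batch_size, n_steps, n_hidden_units), which is why the code transposes before taking outputs[-1]. An equivalent way to get the last time step (my sketch, not from the post) is to slice it directly:

# outputs: (batch_size, n_steps, n_hidden_units), straight from dynamic_rnn
last_output = outputs[:, -1, :]                  # (batch_size, n_hidden_units)
logits = tf.matmul(last_output, weights['out']) + biases['out']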