Learning TensorFlow: Generating TensorFlow Input and Output Image Formats


Image files can be read into NumPy arrays and then loaded into TensorFlow through tf.Variable or tf.placeholder, or they can be read directly by TensorFlow functions such as tf.read_file. When there are too many image files to hold in memory at once, a queue-based input pipeline is usually used instead. Below are two ways to generate TensorFlow image formats that provide input and output for a TensorFlow graph.
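For the tf.placeholder route, a minimal sketch looks like the following; the batch size and image dimensions here are illustrative assumptions, and the NumPy array would in practice come from code like that in section 1 below.

import numpy as np
import tensorflow as tf

# a hypothetical batch of 8 RGB images already loaded into NumPy
data = np.zeros((8, 460, 345, 3), np.uint8)

# placeholder that accepts image batches of any size
images = tf.placeholder(tf.uint8, shape=[None, 460, 345, 3])
# convert to float32 in [0, 1] for the rest of the graph
images_float = tf.image.convert_image_dtype(images, tf.float32)

with tf.Session() as sess:
    batch = sess.run(images_float, feed_dict={images: data})
    print(batch.shape)  # (8, 460, 345, 3)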

1 Generating image data with NumPy

import cv2
import numpy as np
import h5py

height = 460
width = 345

with h5py.File('Make3d_dataset_f460.mat', 'r') as f:
    images = f['images'][:]

image_num = len(images)

data = np.zeros((image_num, height, width, 3), np.uint8)
# the .mat file stores dimensions in reverse order; transpose to (image_num, height, width, 3)
data = images.transpose((0, 3, 2, 1))


First generate a list of the image file paths: ls *.jpg > list.txt

import cv2
import numpy as np

image_path = './'
list_file  = 'list.txt'
height = 48   # the original omitted this value; assumed equal to width here
width  = 48

# read the image names
image_name_list = []
with open(image_path + list_file) as fid:
    image_name_list = [x.strip() for x in fid.readlines()]
image_num = len(image_name_list)

data = np.zeros((image_num, height, width, 3), np.uint8)
for idx in range(image_num):
    img = cv2.imread(image_name_list[idx])
    img = cv2.resize(img, (width, height))   # cv2.resize expects (width, height)
    data[idx, :, :, :] = img
  


2 Reading images with TensorFlow's own functions

import tensorflow as tf

def get_image(image_path):
    """Reads the JPG image from image_path.

    Args:
        image_path: a tf.string tensor
    Returns:
        the decoded JPEG image cast to float32
    """
    return tf.image.convert_image_dtype(
        tf.image.decode_jpeg(
            tf.read_file(image_path), channels=3),
        dtype=tf.float32)
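
A brief usage sketch (the file name below is only a placeholder, not from the original post): the returned tensor is evaluated inside a session like any other op.

image_tensor = get_image(tf.constant('example.jpg'))  # placeholder file name

with tf.Session() as sess:
    image = sess.run(image_tensor)
    print(image.shape, image.dtype)  # (height, width, 3) float32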


Pipeline read method

When the dataset is too large to preload or feed at once, TensorFlow's queue-based pipeline reads file names from an input queue, decodes the images on the fly, and collects them into batches. The example below (from [1]) builds such a pipeline for a list of labeled images.

# Example on "How to" use the TensorFlow input pipelines.
The explanation can is found here ischlag.github.io. Import TensorFlow as TF import random from tensorflow.python.framework import ops from tensorflow.python.framework Import Dtypes Dataset_path = "/path/to/your/dataset/mnist/" test_labels_file = "test-labels.csv" train_labels_file = "Trai" N-labels.csv "Test_set_size = 5 Image_height = image_width = Num_channels = 3 Batch_size = 5 def encode_la Bel (label): Return int (label) def read_label_file (file): F = open (file, "R") filepaths = [] labels = [] for Lin E in F:filepath, label = Line.split (",") filepaths.append (filepath) labels.append (Encode_label (label)) Retu RN filepaths, Labels # reading labels and file path train_filepaths, train_labels = read_label_file (Dataset_path + train_ Labels_file) test_filepaths, test_labels = read_label_file (Dataset_path + test_labels_file) # Transform relative path int o Full path train_filepaths = [datasEt_path + FP for FP in train_filepaths] test_filepaths = [Dataset_path + FP to FP in Test_filepaths] # for this example We create or own test partition All_filepaths = train_filepaths + test_filepaths all_labels = train_labels + Test_la BELs all_filepaths = all_filepaths[:20] All_labels = all_labels[:20] # convert string into tensors all_images = Ops.conv Ert_to_tensor (all_filepaths, dtype=dtypes.string) all_labels = Ops.convert_to_tensor (All_labels, Dtype=dtypes.int32 # Create a partition vector partitions = [0] * len (all_filepaths) Partitions[:test_set_size] = [1] * test_set_size Rand Om.shuffle (partitions) # partition we data into a test and train set according to our partition vector train_images, tes T_images = Tf.dynamic_partition (all_images, partitions, 2) train_labels, test_labels = Tf.dynamic_partition (all_labels
                                    , partitions, 2) # Create input queues Train_input_queue = Tf.train.slice_input_producer (
[Train_images, Train_labels],                                    Shuffle=false) Test_input_queue = Tf.train.slice_input_producer ( [Test_images, Test_labels], shuffle=false) # process path and string T Ensor into an image and a label file_content = Tf.read_file (train_input_queue[0)) Train_image = Tf.image.decode_jpeg (file_
Content, channels=num_channels) Train_label = train_input_queue[1] file_content = Tf.read_file (test_input_queue[0)) Test_image = Tf.image.decode_jpeg (file_content, channels=num_channels) Test_label = test_input_queue[1] # define Tensor Shape Train_image.set_shape ([Image_height, Image_width, Num_channels]) test_image.set_shape ([IMAGE_HEIGHT, IMAGE_ WIDTH, Num_channels]) # Collect batches of images before processing train_image_batch, Train_label_batch = TF.TRAIN.BATC H ([Train_image, Train_label], Batch_size=batch_si
                                    ZE#,num_threads=1) Test_image_batch, Test_label_batch = Tf.train.batch (
                                    [Test_image, Test_label], batch_size=batch_size #,num_threads=1) print "Input pipeline ready" with TF. Session () as Sess: # Initialize the variables Sess.run (Tf.initialize_all_variables ()) # Initialize the queue T Hreads to start to shovel data coord = Tf.train.Coordinator () threads = tf.train.start_queue_runners (Coord=coord) p  Rint ' from the train set: ' For I in range: Print Sess.run (train_label_batch) print ' From the test set: ' For I in range (a): Print Sess.run (test_label_batch) # Stop our \ Threads and properly close ' session COORD.R Equest_stop () coord.join (threads) sess.close ()



Resources

[1] http://ischlag.github.io/2016/06/19/tensorflow-input-pipeline-example/

[2] https://indico.io/blog/tensorflow-data-inputs-part1-placeholders-protobufs-queues/
