Introduction
Stanford University has released NeuralTalk, an open-source Python tool that generates natural language descriptions of image content. It implements the algorithms of Google (Vinyals et al., convolutional neural network (CNN) + Long Short-Term Memory (LSTM)) and Stanford (Karpathy and Fei-Fei, CNN + recurrent neural network (RNN)): given an image, a recurrent network (LSTM or RNN) produces a sentence describing it.
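To make the CNN + LSTM idea concrete, here is a minimal sketch of a single LSTM step in NumPy, where a CNN image feature conditions the recurrent state that later emits words. All sizes, weight layouts, and names here are illustrative assumptions, not NeuralTalk's actual code:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step. W packs the input/forget/output/candidate weights."""
    # Concatenate input, previous hidden state, and a constant 1 for the bias.
    z = np.concatenate([x, h_prev, [1.0]])
    n = h_prev.shape[0]
    gates = W.dot(z)                       # (4n,) pre-activations
    i = 1 / (1 + np.exp(-gates[0:n]))      # input gate
    f = 1 / (1 + np.exp(-gates[n:2*n]))    # forget gate
    o = 1 / (1 + np.exp(-gates[2*n:3*n]))  # output gate
    g = np.tanh(gates[3*n:4*n])            # candidate cell update
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

rng = np.random.RandomState(0)
hidden, embed = 8, 8                       # toy sizes, chosen arbitrarily
W = rng.randn(4 * hidden, embed + hidden + 1) * 0.1
h = np.zeros(hidden)
c = np.zeros(hidden)
img_feat = rng.randn(embed)                # stand-in for a CNN image feature
h, c = lstm_step(img_feat, h, c, W)        # first step conditioned on the image
print(h.shape)                             # (8,)
```

In the full model this step is repeated, feeding each generated word's embedding back in as the next `x` until an end-of-sentence token is produced.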
The project contains Python + NumPy source code that generates natural language descriptions of images with a multimodal recurrent neural network.
Dependencies:
Python 2.7, NumPy, SciPy, NLTK, argparse
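A quick way to check that these dependencies are importable before running anything (note that argparse ships with the standard library from Python 2.7 onward):

```python
import importlib

# Try each dependency and report its version, or flag it as missing.
for name in ("numpy", "scipy", "nltk", "argparse"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "(version unknown)"))
    except ImportError:
        print(name, "MISSING - install it before running the code")
```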
Guide
Get the code: `git clone` the repository.
Get the data. It is not included in the repo; download it from the linked archive and place it in data/. Note that this download does not include the original image files, so if you want to visualize the annotated images, you must obtain the Flickr8k/Flickr30k/COCO images yourself and put them into the corresponding data/ folder. Original image download: http://nlp.cs.illinois.edu/HockenmaierGroup/ (to fill out)
Train the model: run `python driver.py`.
Monitor training: run a local web server (e.g. `python -m SimpleHTTPServer 8123`), then open http://localhost:8123/monitorcv.html.
Evaluate a model checkpoint: run `python evaluate_sentence_predictions.py` with the path to the checkpoint.
Visualize predictions: use the accompanying HTML file visualize_result_struct.html to visualize the JSON structure generated by the evaluation code; it shows the images alongside their text descriptions. Note that you must first download the original images and place them in the appropriate data/ folder.
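If you just want to inspect the generated captions without the HTML page, a small script can walk the evaluation output. The field names below (`imgblobs`, `img_path`, `candidate`, `text`) are assumptions about the JSON structure written by the evaluation step; adjust them to match what your checkpoint's output actually contains:

```python
import json

def load_captions(result_json_path):
    """Return (image path, generated caption) pairs from the result JSON."""
    with open(result_json_path) as f:
        result = json.load(f)
    pairs = []
    # Hypothetical schema: a list of per-image blobs, each holding the
    # image path and the model's candidate sentence.
    for blob in result.get("imgblobs", []):
        path = blob.get("img_path", "?")
        caption = blob.get("candidate", {}).get("text", "?")
        pairs.append((path, caption))
    return pairs

if __name__ == "__main__":
    for path, caption in load_captions("result_struct.json"):
        print("%s -> %s" % (path, caption))
```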
I downloaded the Flickr8k data set; the results were as follows:
For more details, see:
https://github.com/karpathy/neuraltalk