Semi-supervised Segmentation of Optic Cup in Retinal Fundus Images Using Variational Autoencoder (paper notes)

MICCAI 2017 paper

Overview:

With an accurate segmentation of the optic cup and disc, the cup-to-disc ratio can be computed, which is a key indicator of glaucoma. Previous methods mostly rely on supervised learning, which requires a large number of precise pixel-level annotations, and producing these annotations is time-consuming. To address this problem, the paper proposes a semi-supervised learning method: it first learns shared features from a large pool of unlabeled data, and then trains a segmentation model on a small number of labeled images. Specifically, a variational autoencoder is used to learn the parameters of a generative model from the unlabeled images. The trained generative model provides a good feature embedding; in this latent feature space, the observed images group into clusters. This feature embedding is then combined with a segmentation autoencoder, which is trained on the small labeled dataset and can segment the optic cup.

Innovation: generative learning is incorporated into a semi-supervised segmentation method.

Basic Flow:

(Image auto-encoder, generative variational autoencoder, GVAE) Generative model learning: a variational autoencoder is used to learn the parameters of the generative model. The autoencoder has two parts: an encoder network, which maps the image into the latent variable space and represents the image with a latent variable z, and a decoder network, which reconstructs the image from the latent variable. A minimal sketch is given below.
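The paper does not provide code; the following is a minimal sketch of such an image VAE, assuming a PyTorch implementation operating on 64×64 RGB fundus crops. The layer sizes, latent dimension, and loss come from the standard VAE formulation and are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal GVAE sketch (assumed PyTorch implementation, 64x64 RGB input).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GVAE(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        # Encoder: maps the image to the parameters of q(z|x)
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder: reconstructs the image from the latent variable z
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with the reparameterization trick
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 16, 16)
        return self.dec(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Image reconstruction term plus KL divergence to the standard normal prior
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Training this model only requires the unlabeled fundus images; the latent variable z then serves as the feature embedding passed on to the segmentation stage.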

(Image segmentation) The segmentation variational autoencoder (SVAE) also has two parts: a segmentation encoder, which learns a latent representation v for the segmentation model, and a segmentation decoder, which takes the latent representation of the image x as input, learns the segmentation parameters, and outputs the segmentation mask. To exploit the information that the image auto-encoder extracts from the unlabeled data, the SVAE must reconstruct not only the segmentation mask but also the latent representation of x learned by the GVAE. The loss function therefore combines these two reconstruction terms.
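A plausible form of the SVAE objective, reconstructed only from the description above, is sketched here; the weighting coefficient λ and the KL regularization term follow the standard VAE formulation and are assumptions rather than the paper's exact loss.

```latex
% Sketch of the SVAE loss (assumed form, not the paper's exact equation).
% y : ground-truth segmentation mask,  \hat{y} : predicted mask
% x : GVAE latent representation of the image,  \hat{x} : its reconstruction by the SVAE
% v : SVAE latent variable with approximate posterior q(v) and prior p(v)
\[
\mathcal{L}_{\mathrm{SVAE}}
  = \mathcal{L}_{\mathrm{rec}}\big(\hat{y},\, y\big)
  + \lambda\, \mathcal{L}_{\mathrm{rec}}\big(\hat{x},\, x\big)
  + D_{\mathrm{KL}}\big(q(v)\,\|\,p(v)\big)
\]
```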

Experiment:

Data: EyePACS, 12,000 fundus images, of which 600 are labeled. Of these 600 labeled images, 400 are used for training and 200 for testing. The experimental results are as follows:

First column: the number of labeled images used for training; compared with U-Net, the proposed method gains roughly 1%. Used on its own, the segmentation autoencoder needs less training data than U-Net yet achieves higher segmentation accuracy, indicating that the SVAE on its own generalizes better than U-Net.

Finally, read the paper Auto-Encoding Variational Bayes and its code.
