Translating graphical user interface screenshots created by designers into computer code is a typical task for developers building custom software, websites, and mobile applications. In this article, we present a deep learning approach that can train a model end-to-end to automatically generate code from a single input image, with over 77% accuracy across three different platforms (i.e. iOS, Android, and web-based technologies).
pix2code: Generating Code from a Graphical User Interface Screenshot
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android, and web-based technologies).
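To make the "model trained end-to-end from a screenshot to code" idea concrete, here is a minimal sketch in PyTorch of a pix2code-style architecture: a CNN encodes the GUI screenshot, an LSTM encodes the context of previously generated DSL tokens, and a decoder LSTM predicts the next token. All layer sizes, the vocabulary size, and the context length below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 20    # assumed number of DSL tokens (GUI DSLs are small)
CONTEXT_LEN = 48   # assumed length of the token context window

class Pix2CodeSketch(nn.Module):
    """Hypothetical sketch: image encoder + language encoder + decoder."""

    def __init__(self):
        super().__init__()
        # Vision encoder: stacked convolutions over a 256x256 RGB screenshot.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, 128), nn.ReLU(),
        )
        # Language encoder: LSTM over one-hot encoded DSL token context.
        self.lang = nn.LSTM(VOCAB_SIZE, 128, batch_first=True)
        # Decoder: LSTM over concatenated image and language features.
        self.decoder = nn.LSTM(256, 256, batch_first=True)
        self.out = nn.Linear(256, VOCAB_SIZE)

    def forward(self, image, tokens):
        img_feat = self.cnn(image)                        # (B, 128)
        lang_feat, _ = self.lang(tokens)                  # (B, T, 128)
        # Tile the image feature along the time axis and concatenate.
        img_seq = img_feat.unsqueeze(1).expand(-1, tokens.size(1), -1)
        dec_in = torch.cat([img_seq, lang_feat], dim=-1)  # (B, T, 256)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out[:, -1])                   # next-token logits

model = Pix2CodeSketch()
image = torch.randn(1, 3, 256, 256)               # dummy screenshot
tokens = torch.zeros(1, CONTEXT_LEN, VOCAB_SIZE)  # dummy token context
logits = model(image, tokens)
print(logits.shape)  # torch.Size([1, 20])
```

At inference time such a model would be run in a loop: the predicted token is appended to the context and fed back in until an end token is produced, after which the DSL sequence is compiled to platform-specific code.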
Project address: https://github.com/tonybeltramelli/pix2code
Video address: https://news.developer.nvidia.com/ai-turns-ui-designs-into-code/
More machine learning tutorials: http://www.tensorflownews.com/