Abstract: Microsoft's Kinect is widely known as a motion-sensing gaming device, but its sensor technology is not limited to gaming. Recently, Microsoft and the Chinese Academy of Sciences jointly developed a Kinect-based sign language recognition system that can translate the gestures of deaf people into text.
Microsoft's Kinect is best known as a motion-sensing gaming device, but the sensor technology it packages can be applied well beyond the gaming world. Recently, Microsoft and the Chinese Academy of Sciences jointly developed a Kinect-based sign language recognition system capable of translating the gestures of deaf and mute people into text.
In daily life, deaf and mute people often cannot communicate effectively with others, because most people do not understand sign language. The scientific community has proposed at least two solutions to this problem: the first is data gloves, similar to the ones seen in Minority Report, which track hand gestures and compile them into natural language; the second is camera tracking, which analyzes video footage to recognize the signs.
Both techniques are theoretically feasible. However, the former is too expensive to popularize, while the latter suffers from low recognition rates (for example, when the image background is too complex).
"In our view," says Chen Xilin of the Institute of Computing Technology, Chinese Academy of Sciences, "the biggest contribution of this project is that it validates the possibility of sign language recognition with existing, inexpensive 3D and 2D sensors."
The project was born out of Microsoft Research Connections (MRC), a Microsoft program that partners with top researchers around the world to tackle challenges facing humanity. (A summary of the program is available at the end of this article.)
The core technology behind sign language translation with the Kinect is a system for calibrating and matching 3D motion trajectories. Its algorithm supports two modes of operation:
The first is translation mode, which translates sign language into text or speech.
The second is communication mode, which mainly helps deaf people converse with hearing people: the other party's natural language is translated into sign language and performed by a virtual avatar on the device screen.
As shown in the following illustration:
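The article does not publish the matching algorithm itself, but 3D trajectory matching of this kind is commonly done with dynamic time warping (DTW), which aligns two gesture trajectories of different speeds before comparing them. A minimal sketch of that idea follows; the function names and template data are illustrative assumptions, not the actual Microsoft/CAS implementation:

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two sequences of 3D points.

    Each trajectory is a list of (x, y, z) joint positions, e.g. a hand
    joint tracked by the Kinect skeleton stream over time.
    """
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning traj_a[:i] with traj_b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in traj_a
                                 cost[i][j - 1],      # skip a frame in traj_b
                                 cost[i - 1][j - 1])  # match both frames
    return cost[n][m]

def recognize(query, templates):
    """Return the sign label whose stored template best matches the query."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

In practice the trajectories would first be calibrated (translated and scaled relative to the signer's body) so that signers of different sizes and positions map to comparable coordinates; that normalization step is omitted here for brevity.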
Beyond this project, the British company Technabling and the Spanish engineer Daniel Martinez Capilla have also used the Kinect to build similar gesture recognition systems. The CAS and Microsoft system achieves mutual translation between Chinese Sign Language and written Chinese.