This idea came from a question: What would you do if you started a business now?
What I would build is a ring version of the MYO: gesture recognition that does not rely on a camera. The gesture recognition and hand-tracking approaches currently represented by Apple (PrimeSense), Intel (RealSense), Google (PrimeSense + OpenCV), and Leap Motion all depend on a device's built-in or peripheral cameras. This constrains the relative position between the operator and the device, and since each standard imposes different requirements on user distance, it is difficult to build a unified, cross-device interaction model on top of them.
Technologies like MYO do not fix the relative position between user and device, because recognition happens through direct contact with the user's arm. But for most users to get a good recognition rate, a sleeve or armband form factor inevitably faces comfort problems: like the keyboard, mouse, and touch screen we use every day, a basic human-computer interaction device ends up being worn for a large share of the day. As with an ordinary ring, however, the finger tolerates prolonged tight binding and restricted skin respiration relatively well.
Among the fingers, the index finger is the ideal target for motion capture and recognition: it is more controllable and flexible than the others, and one can easily use it to perform small or relatively precise movements. Against today's mainstream application content and interaction design, a few basic gestures can be defined. When content flows from top to bottom, waving the index finger vertically is defined as "scroll", and waving it horizontally as "switch content category" (generally a tab switch); when content flows from left to right, the two commands are swapped (the mapping is customized per application through the SDK). Opening the finger is defined as "enter the next level of the currently focused content" (tap or enter); a single clench is "return to the previous level of the currently focused content" (back or cancel); and finally, double-tapping the thumb tip with the index fingertip is defined as "extended menu or settings" (LC or menu).
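The gesture-to-command mapping above, including the per-application swap of the two swipe commands depending on content flow direction, can be sketched as a small dispatch function. This is only an illustration of the proposed SDK behavior; the `Gesture` names, command strings, and `content_flow` parameter are hypothetical, not part of any real SDK.

```python
from enum import Enum, auto

class Gesture(Enum):
    SWIPE_VERTICAL = auto()    # index finger waved up/down
    SWIPE_HORIZONTAL = auto()  # index finger waved left/right
    FINGER_OPEN = auto()       # index finger opened out
    CLENCH = auto()            # single grip / fist
    THUMB_DOUBLE_TAP = auto()  # index fingertip double-taps thumb tip

def map_gesture(gesture: Gesture, content_flow: str = "vertical") -> str:
    """Map a recognized gesture to a UI command.

    content_flow is "vertical" (content flows top to bottom) or
    "horizontal" (left to right); the two swipe commands swap
    accordingly, as the per-application SDK customization proposes.
    """
    if gesture is Gesture.SWIPE_VERTICAL:
        return "scroll" if content_flow == "vertical" else "switch_tab"
    if gesture is Gesture.SWIPE_HORIZONTAL:
        return "switch_tab" if content_flow == "vertical" else "scroll"
    if gesture is Gesture.FINGER_OPEN:
        return "enter"  # enter next level of focused content (tap/enter)
    if gesture is Gesture.CLENCH:
        return "back"   # return to previous level (back/cancel)
    if gesture is Gesture.THUMB_DOUBLE_TAP:
        return "menu"   # extended menu or settings
    raise ValueError(f"unrecognized gesture: {gesture}")

print(map_gesture(Gesture.SWIPE_VERTICAL))                # → scroll
print(map_gesture(Gesture.SWIPE_VERTICAL, "horizontal"))  # → switch_tab
```

Keeping the mapping in one configurable function is the point of the proposal: applications override only the flow direction (or the whole table) rather than reinterpreting raw sensor data themselves.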
However, with current gyroscope and acoustic-feedback technology, the scheme must still meet its targets for device volume and for recognition accuracy in complex environments. If the technology advances far enough, text input could also be achieved by integrating Minuum's single-row keyboard technology. In addition, as natural language recognition matures and its corpora accumulate, the user gesture libraries built up by mainstream products such as PrimeSense and RealSense could lower the accuracy bar that future gesture recognition needs to clear when judging user behavior.
Gesture recognition and contactless recognition are the directions most companies are exploring for the next generation of human-computer interaction. But unlike the keyboard and mouse (wrist rotation + fingertip presses) or the touch screen (finger swipes + taps), whose traditional interaction commands are simple and mechanical, gesture commands are more complex and costlier for users to perform (and accuracy is also limited). Whether this form of interaction innovation can be accepted by users still needs to be tested in concrete devices and usage scenarios.