Introduction to the Kinect for Windows SDK Development (10) Gesture Recognition: Basic concepts


Just as the click is at the core of the GUI platform and the tap is at the core of the touch platform, the gesture is at the core of a Kinect application. Unlike the digital interactions of a graphical user interface, gestures are part of real life: without a computer we have no need for a mouse, but without the Kinect, gestures still exist. Gestures are part of everyday interaction between people. They reinforce the persuasive power of speech and can be used to emphasize and convey emotion. Gestures such as waving or pointing are a kind of silent speech.

The task of Kinect application designers and developers is to map these real-life gestures onto computer interactions so that they convey people's intentions. Transplanting a gesture-based natural user interface from mouse- or touch-style GUIs takes a great deal of work. Drawing on more than 30 years of research on the concept, and borrowing design ideas from Kinect for Xbox body games, computer engineers and interaction designers have created a new set of gesture vocabularies for the Kinect.

This article covers some user-experience background and discusses how to apply gestures in a Kinect application. We will show how Kinect fits into the human-computer interaction model of the Natural User Interface (NUI), discuss specific examples of using Kinect for gesture recognition and interaction, and, most importantly, present some gestures that have already become part of the common Kinect gesture vocabulary.

1. What is a gesture

In many different disciplines, the word gesture has its own specific meaning, and those meanings overlap in some ways and differ in others. In the arts, gestures convey the most expressive parts of a dance; in Asian dance in particular, gestures serve as religious or cultural symbols. In interaction design, gestures and direct manipulation are treated as quite different things in touch-based natural user interfaces.

Each of these fields gives gestures its own meaning, and academia has tried to define the concept abstractly. The definition most widely used in user-experience design comes from Eric Hulteen and Gord Kurtenbach's 1990 work "Gestures in Human-Computer Communication": "A gesture is a motion of the body that contains information. Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger on its way to hitting a key is neither observed nor significant. All that matters is which key was pressed."

This definition explains both what a gesture is and what it is not. A formal definition like this one faces two difficulties: it must avoid being too specific and also avoid being too abstract. If a definition is too specific, for example tied to a particular technology, it risks becoming obsolete as UI technology changes. As an academic definition rather than a description of common usage, it must also be general enough to stay consistent with the large body of prior HCI research and with work in semiotics. On the other hand, if the definition is too broad it risks becoming meaningless: if everything is a gesture, then nothing is.

The central point of Eric Hulteen and Gord Kurtenbach's definition is that gestures are used to communicate: their significance lies in what they tell, not in what they execute.

It is interesting that introducing the language of communication into the human-computer interface is such a radical change. Our interaction with computers is a kind of silent language: we communicate with computing devices by pointing and gesturing rather than by speaking. When interacting with a computer we press keys or touch the screen, and we seem to prefer this silent form of communication even where current technology could support simpler voice commands. Because we interact with virtual objects rather than real ones, there is no actual manipulation and nothing persists; the movement becomes pure gesture.

Even with Eric Hulteen and Gord Kurtenbach's definition in hand, and even knowing which UI operations are not gestures, it is still difficult to pin down what gestures are and what makes a movement "significant" as behavior or symbol. What meaning does a movement carry? What is the essential difference between gestural communication and verbal communication? The symbolic meaning of our gestures is usually abstract and simple.

In human-computer interaction, gestures are usually used to convey simple commands rather than to communicate facts, describe problems, or state ideas. Using gestures to control a computer is usually imperative, which is not how people normally use gestures. A wave, for example, is usually a greeting in the real world, but greeting is not a common interaction with a computer. The first program we write typically prints "Hello", yet we have little interest in actually greeting the machine.

In a busy restaurant, however, the same wave may mean something different. When you wave at a waiter, it is to catch the waiter's attention because you need service. With a computer, drawing attention sometimes carries special meaning: when the computer is asleep, we usually tap the keyboard or move the mouse to "remind" it to notice us. With Kinect you can use a more intuitive approach, as in the film Minority Report: raise your hands, or simply wave at the computer, and it wakes from sleep.
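To make the idea concrete, here is a minimal sketch of how a wave might be recognized from tracked joint positions. It is deliberately independent of any SDK: it assumes only that we receive, frame by frame, the right hand and right elbow positions (in meters, as a skeleton-tracking stream would report them). The class name, thresholds, and frame counts are illustrative assumptions, not part of the Kinect for Windows SDK.

```python
from collections import deque

class WaveDetector:
    """Detects a wave: hand raised above the elbow, swinging left and right."""

    def __init__(self, required_swings=3, window_frames=50):
        self.required_swings = required_swings      # side-to-side crossings needed
        self.window_frames = window_frames          # roughly 1.5 s at 30 fps
        self.sides = deque(maxlen=window_frames)    # which side of the elbow the hand is on

    def update(self, hand, elbow):
        """hand and elbow are (x, y, z) tuples; returns True when a wave is seen."""
        # A wave only counts while the hand is raised above the elbow.
        if hand[1] <= elbow[1]:
            self.sides.clear()
            return False

        side = 1 if hand[0] > elbow[0] else -1
        self.sides.append(side)

        # Count how many times the hand crossed from one side of the elbow to the other.
        swings = sum(1 for a, b in zip(self.sides, list(self.sides)[1:]) if a != b)
        return swings >= self.required_swings


if __name__ == "__main__":
    # Fake frames: a hand oscillating left and right above a fixed elbow.
    detector = WaveDetector()
    elbow = (0.0, 0.5, 2.0)
    for i in range(40):
        x = 0.2 if (i // 5) % 2 == 0 else -0.2      # swap sides every 5 frames
        if detector.update((x, 0.8, 2.0), elbow):
            print(f"Wave detected at frame {i}")
            break
```

In a real application the same logic would be fed from the skeleton frames the SDK delivers each tick; the point of the sketch is that even a "simple" wave is defined by an agreed-upon pattern of motion over time, not by a single pose.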

In human-computer interaction, a gesture usually carries meaning: it signals the intention to make something happen. A gesture is a command. When we click a button with a mouse or trackpad, we expect the button to trigger the action behind it, and a button typically has a label describing that action: Start, Cancel, Open, Close. Gesture operations are meant to trigger those same actions.

Another feature of gestures that follows from the definition above is that they are arbitrary. A gesture has meaning only within a limited domain and means nothing outside it. Surprisingly, apart from pointing and shrugging, anthropologists have found nothing that could be called a universal gesture. In a computer UI, however, pointing is usually treated as direct manipulation because it involves tracking, while the meaning of a shrug is too subtle to detect reliably. Any gesture we want to use with Kinect must therefore rest on an agreement between the application's designers, its users, and its developers about what the gesture means. The sketch below illustrates the tracking side of this distinction.
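The following sketch shows why pointing behaves like direct manipulation rather than a symbolic gesture: a continuously tracked hand position can be mapped straight onto the screen, like a cursor. The reference frame (hand relative to the shoulder) and the reach values are illustrative assumptions, not part of the Kinect for Windows SDK.

```python
# Assumed screen size and comfortable arm range, in meters; both are illustrative.
SCREEN_W, SCREEN_H = 1920, 1080
REACH_X, REACH_Y = 0.45, 0.35


def hand_to_cursor(hand, shoulder):
    """Map a (x, y, z) hand position, taken relative to the shoulder, to pixel coordinates."""
    # Normalize the offset into [0, 1] over the assumed reach, then clamp.
    nx = (hand[0] - shoulder[0] + REACH_X) / (2 * REACH_X)
    ny = (shoulder[1] - hand[1] + REACH_Y) / (2 * REACH_Y)   # skeleton Y points up, screen Y points down
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return int(nx * (SCREEN_W - 1)), int(ny * (SCREEN_H - 1))


if __name__ == "__main__":
    shoulder = (0.0, 0.4, 2.0)
    # Hand to the right of and slightly above the shoulder lands right of center, upper half.
    print(hand_to_cursor((0.2, 0.5, 1.6), shoulder))
```

Because every frame of the tracked hand feeds the cursor directly, no convention needs to be learned; a symbolic gesture like a shrug, by contrast, only works if both sides have agreed in advance on what it means.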

Because gestures are arbitrary, they are also conventional. The application's designer must either teach the user what a gesture means or use a gesture whose convention is already widely known. Moreover, these conventions are based not on language and culture but on established technical practice. We know how to use a mouse (a learned behavior) not because it comes from our culture, but because it is a cross-cultural convention of a particular graphical user interface. Likewise, we know how to tap or swipe a smartphone not because these are cultural conventions, but because they are cross-cultural conventions of natural user interfaces. Interestingly, we know to some extent how to tap on a tablet because we previously learned to click with a mouse. Technical conventions can translate into one another, just as language and gestures can be translated between different languages and cultures.

However, the arbitrary and conventional nature of gestures also brings a risk that must be considered when designing any user interface, especially one like Kinect that has no well-established interaction conventions. Just as in some countries nodding indicates "no" while shaking the head indicates "yes", gestures, or any physical movement, can be misread.
