The natural user interface (NUI) is considered the next-generation user interface, following the CLI (command-line interface) and the GUI (graphical user interface). Its defining promise is that it is simple and easy to learn, turning new users from "beginners" into "experts" in a very short time, and letting them experience continued success rather than frustration. This is also the meaning of the "N" in NUI: by better tapping the user's existing mental models and subconscious knowledge, and by creating appropriate metaphors, the interaction becomes more natural, and the time users spend thinking and being confused is reduced. NUIs often combine several modes of interaction, such as touch, voice, gesture, and body movement.
Original Author: Donald A. Norman
"I believe the people of the future will look back on 2010-we have crossed the mouse and the keyboard, and we have taken some more natural interactions, such as touch, voice, gesture, writing, into the era that computer scientists call Nui." ”
-- Steve Ballmer, CEO, Microsoft
Interaction technology based on gestures has become a new favorite of the business world. Smaller, cheaper, yet more powerful microprocessors, memory, and cameras have made it possible for users to manipulate devices by tapping, swiping, and performing other, more complex gestures. The term "interaction" is being redefined and its rules rewritten. The market has even given this future style of interaction a name: the "natural user interface."
In fact, none of this is surprising; marketing always runs a little ahead of reality. However the form of interaction changes, some of the core elements that shape the user experience have not changed. The power of the GUI is that "graphics" is not the whole of it: the graphics serve to tell the user, in a very intuitive way, which actions are possible and how to perform them. The graphical "icons" and "menus" may look mechanical, and the GUI copes poorly with complex systems, but it at least gives users the opportunity to explore and learn. The most important design principle of the GUI is "visibility": through menus, all possible actions are visible to the user and easy to discover, making the system easy to explore and naturally mastered.
Gesture-based user interfaces are not new; gestures were already part of human-computer interaction in its early days. Brad Myers, in an article published in 1998, briefly reviewed this area and traced gesture-recognition research from the early 1960s to the early 1990s (for example, Apple's Newton in 1992). By the 1980s, Myron Krueger's work on "artificial reality," built on projection technology, had begun to be familiar to the public. Multi-touch technology has likewise existed since the 1980s: Nimish Mehta described the first multi-touch interactive system for HCI in 1982. If we take the idea even more broadly, most musical instruments are themselves combinations of "gesture" and "multi-touch," and later electronic instruments (electronic drums, electric guitars) brought these two interaction techniques into the electronic world. This form of interaction has existed for a long time: the theremin, patented in 1928 by the Russian inventor Léon Theremin, was one of the world's first electronic devices controlled purely by hand gestures.
Most gestures are far from "natural" or easy for users to learn; only a handful can be used without deliberate learning. Even the simplest movement, a shake of the head, has different meanings in different cultures. Western travellers to India have surely had this experience: a nod meaning "no" and a head shake meaning "yes." Likewise, even the simplest gestures for "hello," "goodbye," and "come here" are performed differently across cultures. You can search Wikipedia for "list of gestures" to see how common gestures are used around the world.
Another reason for caution is that a gesture is usually a brief action that leaves no "record" or "trail." Because this important information is missing, when a user's gesture gets no response (or the user makes a mistake), there is usually little to help them figure out what went wrong. In addition, a purely gesture-based interactive system poses a huge challenge for users: they must discover all the possible gestures and memorize them on their own.
Of course, none of this means that gesture-based interaction is useless. It remains a powerful way of interacting, and I believe it will find its place in the interactive world of the future. Technological progress is largely the process of making new technology cheaper, so that products built on it can be made and promoted at scale. I expect gesture technology to become unified and standardized, like some of today's "standard" gestures: raising a hand for "more" (increase volume, size, and so on), shaking the device for "give me another option," swiping horizontally for "go to the next page." New standards and details will be worked out. For example, everyone knows to press and drag a picture to change its position, but most people are less familiar with the "momentum" factor that lets the picture keep sliding a short distance after the gesture stops (many mobile user interfaces still ignore momentum in their design). Momentum involves another parameter, "friction": when the user drags a picture to a new position and releases it, the picture keeps moving under momentum and, under friction, gradually comes to rest a short distance later (a sketch of this behavior appears in the code below). Today the momentum effect is familiar to most people, but I first saw it more than 20 years ago, in work by Apple's HCI group under Joy Mountford. The length and dynamics of gestures will no doubt be the focus of many papers and seminars. Even today, different research groups have their own "rules," especially for design details. What happens when you drag a picture with your finger and it reaches the edge of the screen? What if there is more than one screen? What if there are multiple users, each with their own screen? Can pictures be "dragged" from one computer to another? The details of these interactions differ according to each group's rules.
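To make the momentum-and-friction idea concrete, here is a minimal sketch (my own illustration, not from the original article) of animating a drag release: the picture keeps the velocity it had when the finger lifted, and a per-frame friction factor decays that velocity until it is negligible. The names and constants are assumptions chosen for readability.

```typescript
// Minimal sketch of "momentum" and "friction" after a drag ends.
// All names and constants are illustrative assumptions, not a real API.

interface Vec2 { x: number; y: number; }

const FRICTION = 0.95;   // fraction of velocity kept each frame (assumed)
const MIN_SPEED = 0.5;   // px per frame; below this, motion stops

function glide(position: Vec2, velocity: Vec2, draw: (p: Vec2) => void): void {
  const step = () => {
    // Momentum: keep moving with the velocity the finger had at release.
    position.x += velocity.x;
    position.y += velocity.y;
    // Friction: bleed off a fixed fraction of the velocity every frame.
    velocity.x *= FRICTION;
    velocity.y *= FRICTION;
    draw(position);
    if (Math.hypot(velocity.x, velocity.y) > MIN_SPEED) {
      requestAnimationFrame(step); // keep gliding
    }                              // otherwise: come to rest
  };
  requestAnimationFrame(step);
}
```

With this model the stopping distance is a geometric series: a release speed of v pixels per frame travels roughly v / (1 - FRICTION) pixels before coming to rest, so tuning the friction constant directly tunes how far the picture glides.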
Some of the difficulties gesture developers face today remind me of similar problems GUI developers encountered in the early days. When the GUI was being developed at Xerox PARC, it seemed natural that when the user dragged a file icon onto a folder icon, the file icon should disappear into the folder; the same visual effect made sense when a file was dragged onto the trash icon. But this principle ran into trouble with the printer icon: after the drag, the file was printed, but the file icon also disappeared. Was that reasonable? The interaction designers of the time spent great effort on such problems, pushing the details toward something reasonable, complete, and unconfusing. In a gesture-based interactive system, such problems are even thornier, and designers must take the time to think them through.
At present, some researchers are trying to develop a new "gesture language," assigning different command meanings according to the number of touch points: a one-finger, two-finger, three-finger, or four-finger touch each represents a different command or input mode. This reminds me of a similar issue that once arose with mouse clicks: one click to select a character, two clicks to select a word, three clicks to select more. It seems reasonable, but there is an inherent logical problem: if each click goes "up one level," why doesn't a triple click select a sentence? It is worth noting that when developing the Xerox Star GUI, researchers spent considerable effort building a "complete and standard" click command system, just as researchers today are trying to build a gesture command system; that standard did not last long, and only a few of its rules remain in use.
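For illustration only, here is what such a touch-count "language" might look like in code; the particular mapping below is invented, not a proposed or existing standard.

```typescript
// Illustrative dispatch of commands by number of touch points.
// The specific mapping is an invented example, not a standard.

type GestureCommand = "select" | "scroll" | "switch-app" | "show-desktop";

function commandForTouchCount(touchCount: number): GestureCommand | null {
  switch (touchCount) {
    case 1: return "select";       // one finger: direct manipulation
    case 2: return "scroll";       // two fingers: scroll or zoom
    case 3: return "switch-app";   // three fingers: application switching
    case 4: return "show-desktop"; // four fingers: global command
    default: return null;          // unrecognized: needs feedback, not silence
  }
}
```

The `default` branch hints at the feedback problem raised earlier: an unrecognized touch count should produce guidance for the user, not silence.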
Looked at from an experiential perspective, gestures also cut both ways. Gestures (and, indirectly, body movements) give users more participation and pleasure during interaction, but long sessions of gesturing tire users easily, and improper gestures can even risk injury. The popular bowling game on the Wii is an example: many users, swinging the controller in hand like a bowling ball, have let it fly. Gestures that are too "natural" can have consequences we never intended.
Some may say: our system is based entirely on gestures and body movement, with no controller at all, so the flying-controller scene cannot occur. But this brings a series of other, more complex questions. Without a controller, virtual objects have no physical token in the real world; and whether the system can cope with the full complexity of human motion (its range, continuity, persistence, and transience) remains an open problem. This is why most current gesture-based systems still give users some kind of control device (switches, hand-held controllers, gloves, and so on) to sidestep some of the harder technical problems.
The design of a gesture interaction system is not fundamentally different from any other form of interaction and must follow the basic principles of interaction design: consider the user, the user's behavior, the usage scenarios, the various interactions that may occur between user and system, and the effects and results they may have. Because gesture operation has such a high degree of freedom, the user's gestures are often ambiguous and unclear, so feedback from the system is all the more essential: prompts that guide the user, show what effect their actions had, where an error occurred, and what to do to get out of trouble. In addition, because gesturing is natural human behavior, the system must also be able to differentiate the user's gestures: which are subconscious movements, and which are command input. Solving this raises still more problems; classifying command input versus natural gestures is genuinely difficult, and these problems simply do not arise with keyboards, mice, and styluses.
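As a toy illustration of that intent problem (entirely my own sketch, with invented thresholds), a system might treat motion as a command only when it is long and fast enough to be unlikely to be incidental:

```typescript
// Toy intent filter: treat motion as a command only when it exceeds
// distance and speed thresholds. Thresholds are invented for illustration.

interface MotionSample { x: number; y: number; t: number; } // t in ms

const MIN_DISTANCE = 80;  // px: shorter movements are likely incidental
const MIN_SPEED = 0.3;    // px/ms: slower movements are likely incidental

function isDeliberateSwipe(samples: MotionSample[]): boolean {
  if (samples.length < 2) return false;
  const first = samples[0];
  const last = samples[samples.length - 1];
  const distance = Math.hypot(last.x - first.x, last.y - first.y);
  const elapsed = last.t - first.t;
  if (elapsed <= 0) return false;
  return distance >= MIN_DISTANCE && distance / elapsed >= MIN_SPEED;
}
```

Real systems need far more than two thresholds, of course; the point is that every such cutoff is a rule someone must design, and a boundary users must learn.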
Finally, to sum up: gesture technology will be a very important part of future interactive technology, but it needs time to truly mature, for researchers to find its best usage scenarios, and for a set of standard specifications so that the same gesture means the same thing across different systems. We also need better theory for user guidance, feedback, and error correction in gesture systems.
Touch-screen gesture interaction has already been accepted by many users, and I often see people gesturing at interfaces without knowing whether the system can understand them: tapping with a finger on a screen that does not support touch, dragging a picture only to find it motionless, waving at a sink only to find the faucet must be turned by hand. There is no doubt that gesturing is an interaction technology that gives users a richer experience and a stronger sense of control. But like other new technologies, these pleasurable experiences come at a cost. Different systems will have different rules, and the learning curve can drive users mad; users with physical impairments will run into trouble (and comedians will have material); users will never know which gestures the system supports that they have not yet discovered, and memorizing a large set of gesture commands is a headache. While gesture interaction brings great potential and energy, it also brings more problems and challenges, and more ways to make systems messy and confusing to users.
In any case, new technologies always find their place in the end; their value has to be discovered gradually and applied where it fits. But in essence, these interaction technologies cannot call themselves "natural." They are no more natural than the alternatives, because the rules are made by people, and new users must expend considerable effort to build new mental models and learn how to use a NUI system.
Finally, can NUIs really deliver a "natural" interactive experience? I don't think so, but they will still prove useful in the future.
Article Source: nickinteractiondesign.com