Envious of the cool gestures in the Iron Man movies? Interaction design is already moving beyond what you might imagine: future interaction will be a disruptive change, and mid-air gestures are almost entirely unfamiliar territory. This article from Exipple, a studio specializing in gesture interaction, not only explains how gestures differ from touch operation but also shares seven lessons from hands-on design experience. The trend is here; UI and interaction designers should spend ten minutes on it.
Future interactions will be multimodal. Combining touch with mid-air gestures (and possibly voice input), however, is not a typical UI design task.
At Exipple, our designers collaborate with engineers to create interfaces for a range of environments, interfaces that respond to physical attributes such as gestures and user movement. We have benefited greatly from an iterative design, development, and evaluation process, and I would like to share what we have learned about gesture interaction.
(Photo: an interactive video wall at the FC Barcelona museum)
Design gestures that are easy to discover
Gestures are often considered a natural way of interacting with screens and objects: we pinch to zoom a map on a phone screen, or wave to skip to the next movie in front of the TV. But are these gestures really that natural?
Gestures are unfamiliar territory for users who have never experienced a given kind of interaction. We instinctively know how to inspect the details of a map on a touchscreen, but now picture a large screen some distance away. If someone told you that, without touching the screen, you could move your hand to zoom the map in a natural, intuitive way, which gesture would you try first? Faced with that question, each of us has our own definition of a "natural" gesture.
"Future interactions will be multimodal."
Designing for discoverability is essential. Make sure you provide the right signifiers to help users discover how gestures are performed. These can be visual cues that indicate which gesture triggers which action. After repeated use these exploratory hints no longer need to stay on screen, because the user has learned the gesture.
You can also design animations that gradually reveal a different way of interacting. For example, to make sure users understood that they could operate at a distance without touching the screen, we created a menu that reveals more information as the user points at it from afar. At first the pictures float in a casual arrangement (figure A); closing your hand and pointing at them reveals that each picture is actually a category containing more content (figure B).
Why can't we just carry touch interactions over?
Last year we did some informal research. We invited people to the studio and showed them some familiar TV-style screens: menus and icons, maps, grids, and carousels. We asked them to imagine how they would operate these interfaces from a distance using mid-air gestures.
These interfaces were actually a series of small gesture-interaction prototypes. We collected participants' expectations, let them explore, and gathered their feedback. A clear pattern emerged: their expectations were largely rooted in familiar mobile-device gestures. Every participant applied their mental model of the phone to mid-air gestures; sometimes we could even tell iOS users from Android users by how they expected to manipulate the interface.
"The most intuitive is not necessarily the most efficient or the easiest to use."
But we soon hit a challenge: the most intuitive is not necessarily the most efficient or the easiest to use. The mouse, for example, is a high-precision device that affords exact control. Human limbs moving freely in three-dimensional space are nowhere near as precise: we feel our hand travelling along the x-axis, but it actually drifts slightly along the other two axes as well.
We cannot expect to achieve the same accuracy, and concentrating on precise, meticulous movement inevitably produces a certain tension; holding your hand rigidly steady is certainly not a natural way of interacting.
When you touch a screen, the contact point is the starting point, a reference point. Now imagine the difference between a typical two-finger pinch zoom and a similar operation performed by moving both hands apart in mid-air: the reference point for the zoom level becomes ambiguous, and because you cannot "release" the screen to stop the operation, the start and end points are ambiguous too.
An example to avoid: a mid-air equivalent of the two-finger pinch zoom.
Do not try to translate touch gestures directly into mid-air gestures, however familiar and easy that seems. Gesture interaction demands a new approach, one that is unfamiliar at first but ultimately gives users control and pushes user experience design further.
Remove the "randomly jumping" pointer
If your project uses computer-vision technology (capturing motion with a depth-sensing camera such as a Kinect, Asus, or Orbbec device), you know that hands and fingers cannot be tracked with 100% reliability.
Other technologies can offer higher accuracy, but they usually require users to wear special equipment. As our hands move, the computer does not continuously "see" them, which produces what we call hand jitter: the pointer or arrow on screen seems to tremble nervously.
"Remove the pointer as a form of feedback and provide an alternative."
Designing another kind of pointer or arrow does not help, since it still has to track hand movement on screen. You can ask developers to filter out these subtle hand motions to avoid the effect. But this solution comes at a high price: some responsiveness and precision are lost, and the pointer lags slightly behind the hand, reducing the user's sense of control over the interface. We cannot afford to lose that sense of control.
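The trade-off described above can be sketched as a simple exponential moving-average filter. This is a minimal illustration, not Exipple's actual implementation; the class name and smoothing factor are assumptions for the example.

```python
class HandSmoother:
    """Low-pass filter for noisy (x, y) hand positions from a depth sensor."""

    def __init__(self, alpha=0.3):
        # alpha near 1.0 -> responsive but jittery;
        # alpha near 0.0 -> smooth but laggy, which is exactly the
        # loss of "sense of control" the text warns about.
        self.alpha = alpha
        self._x = None
        self._y = None

    def update(self, raw_x, raw_y):
        """Feed one raw sample; return the smoothed position."""
        if self._x is None:
            # First sample: no history yet, start from the raw value.
            self._x, self._y = raw_x, raw_y
        else:
            # Move a fraction of the way toward the new sample.
            self._x += self.alpha * (raw_x - self._x)
            self._y += self.alpha * (raw_y - self._y)
        return self._x, self._y
```

Tuning `alpha` is the whole design problem in miniature: any value that hides the jitter also delays the pointer behind the hand, which is why the article recommends removing the pointer instead.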
So what should we do?
Remove the pointer. A touch screen needs no pointer, and neither does this. Remove the pointer as a form of feedback and provide an alternative: let pictures and objects "pop out" and respond immediately to the user's hand movements, with no pointer at all.
This fundamentally changes the way you think about the user interface. It is not a web page, and it is not a mobile touch-screen experience.
Open up new approaches
Try to free your mind from the familiar standard web and mobile UI patterns. Forget buttons; think actions. Imagine there is no screen at all and you have to control the devices around you with gestures. How would you turn the TV volume down? How would you switch on a light?
Symbolic and figurative gestures are direct and expressive: a "hush" gesture with the index finger, for instance, can lower the TV volume. Such gestures may depend on a particular context and require the user to learn them, but once learned they are easy to remember and use.
We have created some successful gestures for controlling media playback.
Establish a connection between a gesture and the action it triggers, grounded in an easily remembered meaning or a visual reference. This is not easy, because you must account for factors such as cultural context: a gesture that is universally accepted in one country or culture may be offensive in another, and a symbol that is striking in some situations may be useless in others.
Relying on figurative gestures for every type of interaction can produce more gestures than anyone can remember. Use them as fast, powerful triggers, reserved for actions users repeat frequently.
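The idea of reserving a small vocabulary of figurative gestures for frequent actions can be sketched as a registry that maps a recognizer's gesture labels to callbacks. The gesture names here ("hush", "open_palm") are hypothetical examples, not Exipple's actual vocabulary.

```python
class GestureRegistry:
    """Map recognized gesture labels to actions; anything unbound is ignored."""

    def __init__(self):
        self._actions = {}

    def bind(self, gesture, action):
        # Keep this dictionary deliberately small: every entry is one
        # more gesture the user has to remember.
        self._actions[gesture] = action

    def handle(self, gesture):
        """Run the bound action, or do nothing for unknown gestures."""
        action = self._actions.get(gesture)
        if action is None:
            return False  # unrecognized or unbound: no effect
        action()
        return True


# Hypothetical bindings for frequently repeated media actions.
registry = GestureRegistry()
registry.bind("hush", lambda: print("volume down"))   # index-finger "hush"
registry.bind("open_palm", lambda: print("pause"))
```

Ignoring unbound labels by default also supports the next section's point: most hand movements should trigger nothing at all.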
Reduce false recognition
The computer's biggest challenge is distinguishing real intent from the incidental movements people naturally make, such as waving their hands around while talking.
It is easy to accidentally trigger an action that changes the interface in unintended ways, which makes the experience feel erratic. As a UI designer you have to work closely with developers to determine which movements are meaningful and which are not, and then avoid recognizing the wrong ones.
A good starting point: tie each gesture to a specific scene and the situation in which it should apply. Is music playing? Then the gesture can be triggered. If not, do nothing.
"Forget buttons; think actions."
Time is an important factor in distinguishing gestures from accidental hand movements. For example, if I point at an object for more than one second, it means I really intend to manipulate it.
Distance is another factor. If you are designing an interactive device for a museum or visitor center, you probably want it to recognize the gestures of participants standing close enough while ignoring bystanders farther away.
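The three filters described above (scene context, dwell time, and user distance) can be combined into one small gating function. This is an illustrative sketch; the threshold values are assumptions, not measured recommendations.

```python
DWELL_SECONDS = 1.0     # pointing at least this long signals real intent
MAX_DISTANCE_M = 2.5    # ignore bystanders farther away than this


def should_trigger(context_active, pointing_duration, user_distance):
    """Return True only when a pointing gesture looks deliberate.

    context_active:    is the relevant scene active (e.g. music playing)?
    pointing_duration: how long the user has held the point, in seconds.
    user_distance:     distance from the sensor to the user, in metres.
    """
    if not context_active:
        return False                 # e.g. no music playing: do nothing
    if user_distance > MAX_DISTANCE_M:
        return False                 # a passing bystander, not a participant
    return pointing_duration >= DWELL_SECONDS
```

Each check maps to one paragraph above: context gating first, then distance, then dwell time, so casual hand movements fall through without triggering anything.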
Avoid fatigue
True to its name, fatigue is hard to feel from your desk. You have to watch users again and again to understand how the experience you have created really feels.
A few simple things to remember:
1. Unless you are designing a fitness game or exercise program, make sure people do not have to raise their arms or hold their hands up too often or for too long.
2. Keep the ratio between the hand's range of motion and the distances between UI elements comfortable, especially on large screens. This means users can easily point to any part of the screen.
For example, mapping a small range of motion to a larger on-screen area makes every region easier to reach.
3. Interacting with both hands is less tiring than doing everything with one.
You can use the dominant hand to trigger an action (such as displaying a slider) and the other hand to assist (adjusting the slider's value). Remember that one hand does not have to do all the work; explore more combinations.
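Point 2 in the list above, mapping a small, comfortable hand range onto the full screen, amounts to a simple linear mapping with clamping. The hand range in metres and the screen width below are assumed values for illustration.

```python
def map_hand_to_screen(hand_x, hand_min=-0.2, hand_max=0.2,
                       screen_width=1920):
    """Map a small hand range (metres, sensor space) to screen pixels.

    A 40 cm sweep of the hand covers the whole screen width, so users
    reach every region without stretching their arm.
    """
    # Clamp so the mapped position never leaves the screen.
    hand_x = max(hand_min, min(hand_max, hand_x))
    # Normalize to [0, 1], then scale to pixel coordinates.
    t = (hand_x - hand_min) / (hand_max - hand_min)
    return round(t * (screen_width - 1))
```

Widening `hand_min`/`hand_max` trades comfort for precision: a larger physical range is more tiring but gives finer on-screen control, which is exactly the balance the list asks you to tune.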
Stay consistent: both hands should trigger the same actions.
Finally, any action the user can trigger with the right hand should also be triggerable with the left. This is not just a convenience for right- and left-handed users; the consistency also helps people learn and accept the system. Once you learn a gesture, you can trigger it with either hand, without having to remember which one to use.
Consistency should run through your entire concept, just as in any other UX project. Once you have successfully created a gesture-plus-action pairing, consider whether the same action should be available in other user scenarios; once users are familiar with a gesture, they will expect it to work everywhere.
Create a unified gesture language that is easy to discover and remember.
With these gesture-interaction principles, you can start exploring this relatively uncharted creative territory. Once you understand the differences, you can combine mid-air and touch gestures to create unique, fluid user interactions.