Not many years ago, multi-touch was one of the most important techniques sci-fi movies used to depict the future; now it has become mainstream user-interface technology. Multi-touch screens are standard on new smartphones and tablet computers. The technology may also become popular on computers in public places, such as kiosks, and on table-top computers of the kind pioneered by Microsoft Surface.
The only real uncertainty is whether multi-touch will catch on for conventional desktop computers. Perhaps the biggest obstacle is the fatigue that comes from moving fingers across a vertical screen for long periods (the so-called "gorilla arm"). I personally hope the power of multi-touch will actually push a redesign of the desktop display. We can imagine a desktop screen resembling a drafting table, and perhaps nearly as large.
But that may lie in the distant future. For now, developers need to master new APIs. Multi-touch support in Windows 7 has filtered through various parts of the Microsoft .NET Framework via both low-level and high-level interfaces.
Learn more about touch support
If you consider the expressive complexity that multiple fingers on a display make possible, you can understand why nobody yet knows the "correct" programming interface for multi-touch. That will take time. Meanwhile, you have several options.
Windows Presentation Foundation (WPF) 4.0 provides two multi-touch interfaces for programs running under Windows 7. Programmers who want to get closer to the raw touch input will explore the low-level interface, which consists of several routed events defined by UIElement (named TouchDown, TouchMove, TouchUp, TouchEnter and TouchLeave), plus preview versions of the down, move and up events. These events are obviously modeled on the mouse events, except that an integer Id property is needed to track multiple fingers on the display. Microsoft Surface is built on WPF 3.5, but it supports a more extensive low-level touch interface that distinguishes among the types and shapes of touch input.
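As a minimal sketch of this low-level interface, the following hypothetical WPF 4.0 class (the class and marker names are my own, not from this column) uses the TouchDevice.Id value to keep a separate on-screen marker under each finger:

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Shapes;

public class TouchTrackingCanvas : System.Windows.Controls.Canvas
{
    // Map each finger's TouchDevice.Id to the ellipse that follows it.
    readonly Dictionary<int, Ellipse> markers = new Dictionary<int, Ellipse>();

    protected override void OnTouchDown(TouchEventArgs e)
    {
        base.OnTouchDown(e);
        var marker = new Ellipse { Width = 48, Height = 48, Fill = Brushes.SteelBlue };
        markers[e.TouchDevice.Id] = marker;
        Children.Add(marker);
        MoveMarker(marker, e.GetTouchPoint(this).Position);
        e.TouchDevice.Capture(this);   // keep receiving events for this finger
        e.Handled = true;
    }

    protected override void OnTouchMove(TouchEventArgs e)
    {
        base.OnTouchMove(e);
        Ellipse marker;
        if (markers.TryGetValue(e.TouchDevice.Id, out marker))
            MoveMarker(marker, e.GetTouchPoint(this).Position);
    }

    protected override void OnTouchUp(TouchEventArgs e)
    {
        base.OnTouchUp(e);
        Ellipse marker;
        if (markers.TryGetValue(e.TouchDevice.Id, out marker))
        {
            Children.Remove(marker);
            markers.Remove(e.TouchDevice.Id);
        }
    }

    static void MoveMarker(Ellipse marker, Point p)
    {
        SetLeft(marker, p.X - marker.Width / 2);
        SetTop(marker, p.Y - marker.Height / 2);
    }
}
```

Without the dictionary keyed by TouchDevice.Id, two fingers would be indistinguishable, which is precisely the bookkeeping the low-level interface leaves to you.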
The subject of this column is the high-level multi-touch support in WPF 4.0, which consists of a collection of events whose names begin with the word Manipulation. These manipulation events perform several crucial multi-touch jobs:
Consolidating the interaction of two fingers into a single operation
Resolving the movement of one or two fingers into transforms
Implementing inertia when the fingers leave the screen
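The jobs above can be sketched with a single ManipulationDelta handler. This is a hypothetical example (the handler and element names are mine); it assumes a FrameworkElement with IsManipulationEnabled set to true and a MatrixTransform as its RenderTransform:

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

public partial class ManipulablePage
{
    void OnElementManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        var element = (FrameworkElement)sender;
        var transform = (MatrixTransform)element.RenderTransform;
        Matrix matrix = transform.Matrix;

        // The event has already resolved the finger movement into
        // scale, rotation and translation components.
        ManipulationDelta delta = e.DeltaManipulation;
        Point center = e.ManipulationOrigin;

        matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, center.X, center.Y);
        matrix.RotateAt(delta.Rotation, center.X, center.Y);
        matrix.Translate(delta.Translation.X, delta.Translation.Y);

        transform.Matrix = matrix;
        e.Handled = true;
    }
}
```

Notice that the handler never looks at individual fingers: one finger or two, the event delivers a ready-made transform delta, which is the whole point of the high-level interface.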
Some of the manipulation events are listed in the Silverlight 4 documentation, which might confuse readers: Silverlight itself does not support these events, but Silverlight applications written for Windows Phone 7 do. The manipulation events are listed in Figure 1.
Figure 1 The Manipulation Events in Windows Presentation Foundation 4.0

| Event | Supported by Silverlight for Windows Phone 7 |
|---|---|
| ManipulationStarting | No |
| ManipulationStarted | Yes |
| ManipulationDelta | Yes |
| ManipulationInertiaStarting | No |
| ManipulationBoundaryFeedback | No |
| ManipulationCompleted | Yes |
Web-based Silverlight 4 applications will continue to use the Touch.FrameReported event, which I discussed in the article "Finger Style: Exploring Multi-Touch Support in Silverlight" in the March 2010 issue of MSDN Magazine.
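For comparison, here is a minimal hypothetical sketch of that Silverlight approach (handler name mine): a single static Touch.FrameReported event reports every active touch point once per frame, and the application sorts them out itself.

```csharp
using System.Windows.Input;

public class FrameReportedDemo
{
    public FrameReportedDemo()
    {
        Touch.FrameReported += OnTouchFrameReported;
    }

    void OnTouchFrameReported(object sender, TouchFrameEventArgs e)
    {
        // Passing null gets coordinates relative to the application root.
        foreach (TouchPoint point in e.GetTouchPoints(null))
        {
            // point.TouchDevice.Id distinguishes one finger from another;
            // point.Action is Down, Move or Up.
        }
    }
}
```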
In addition to the manipulation events themselves, the UIElement class in WPF also supports corresponding overridable methods, such as OnManipulationStarting. In Silverlight for Windows Phone 7, these overrides are defined by the Control class instead.
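A short hypothetical WPF 4.0 example (the derived class is my own invention) shows the override approach; ManipulationStarting is where you typically choose the manipulation container and restrict the allowed manipulation modes:

```csharp
using System.Windows;
using System.Windows.Input;

public class ManipulableBorder : System.Windows.Controls.Border
{
    public ManipulableBorder()
    {
        // Opt this element into the manipulation events.
        IsManipulationEnabled = true;
    }

    protected override void OnManipulationStarting(ManipulationStartingEventArgs e)
    {
        base.OnManipulationStarting(e);
        e.ManipulationContainer = this;   // report deltas relative to this element
        e.Mode = ManipulationModes.All;   // allow translate, scale and rotate
        e.Handled = true;
    }
}
```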