Every time I go to the American Museum of Natural History in New York, I make a point of visiting the primate hall. The hall has assembled a large collection of skeletons and stuffed specimens that presents a panorama of primate evolution, ranging from tiny tree shrews, lemurs and monkeys to chimpanzees, gorillas and humans.
The most striking thing about the exhibit is an astonishing commonality among all primates: the hands share the same skeletal structure, including an opposable thumb. The same arrangement of joints and bones that let our ancestors and distant cousins grip branches and climb trees now lets our species manipulate the surrounding world and build things in it. Our hands may descend from the claws of a small primate that lived millions of years ago, but they are an important part of what makes us truly human.
Is it any wonder, then, that we instinctively point with our fingers, and even want to touch what we see on the computer screen?
Our input devices have been evolving to satisfy this human desire to connect fingers more directly with the computer. The mouse is fine for selecting and dragging, but hopeless at freeform sketching and handwriting. A Tablet PC stylus can handle writing, but it feels awkward for stretching and moving objects. We know the touchscreen from ATMs and museum kiosks, but it is usually limited to simple pointing and pressing.
I think the technology known as multi-touch represents a huge leap forward. As the name suggests, multi-touch can detect multiple fingers at once, and this goes beyond the touchscreens of the past: it makes an enormous difference in the kinds of movements and gestures that can be conveyed through the screen. Multi-touch has evolved from earlier touch-input devices, yet at the same time it is essentially a different input paradigm.
You have probably seen multi-touch most conspicuously on TV news shows, where weather forecasters or commentators manipulate maps on a large screen. Microsoft has been researching multi-touch in several forms (from the coffee-table-sized Microsoft Surface computer down to small devices like the Zune HD), and the technology has become standard on smartphones.
While Microsoft Surface can respond to many simultaneous fingers (and even incorporates internal cameras to view objects placed on the glass), most other multi-touch devices are limited to a discrete number of touch points. Many respond to only two fingers, that is, two touch points. (I'll use the terms finger and touch point synonymously.) But the fingers work cooperatively: on the screen, two fingers can accomplish far more than twice what one finger can.
This two-touch limit is characteristic of the multi-touch monitors recently available for desktop PCs and laptops, including the custom Acer Aspire 1420P portable computer distributed to attendees of the Microsoft Professional Developers Conference (PDC) last November (commonly known as the PDC laptop). The distribution of the PDC laptop gave thousands of developers a unique opportunity to write multi-touch-aware applications.
I used the PDC laptop to explore multi-touch support in Silverlight 3.
Silverlight Events and Classes
Multi-touch support is becoming standard across the various Windows APIs and frameworks. The support is built into Windows 7 and the forthcoming Windows Presentation Foundation (WPF) 4. (The Microsoft Surface computer is also built on WPF, but includes custom extensions for its very special capabilities.)
In this article I'm going to focus on the multi-touch support in Silverlight 3. That support is fairly minimal, but it is certainly adequate, and it is useful for exploring basic multi-touch concepts.
If you publish a multi-touch Silverlight application on your Web site, who will be able to use it? The user will need a multi-touch monitor, of course, and will also need to run the Silverlight application under an operating system and browser that support multi-touch. Currently, Internet Explorer 8 running under Windows 7 provides that support; more operating systems and browsers will likely support multi-touch in the future.
The multi-touch support in Silverlight 3 consists of five classes, one delegate, one enumeration and a single event. You can never be certain whether your Silverlight program is running on a multi-touch device or, if it is, how many touch points the device supports.
A Silverlight application that wants to respond to multi-touch must attach a handler to the static Touch.FrameReported event:
Touch.FrameReported += OnTouchFrameReported;
You can attach this event handler on a machine that is not equipped with a multi-touch monitor without any problem. The FrameReported event is the only public member of the static Touch class. The handler looks like this:
void OnTouchFrameReported(
object sender, TouchFrameEventArgs args) {
...
}
You can install multiple Touch.FrameReported event handlers in your application, and all of them will report all touch events occurring anywhere in the application.
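Because the event is static, a natural place to attach the handler is the constructor of the application's main control. Here is a minimal sketch; the class name MainPage is simply the conventional name Visual Studio generates for a Silverlight project, not something required by the API:

```csharp
public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();

        // Touch.FrameReported is static, so this handler receives
        // touch reports for the entire application, not just for
        // this particular control.
        Touch.FrameReported += OnTouchFrameReported;
    }

    void OnTouchFrameReported(object sender, TouchFrameEventArgs args)
    {
        // Process touch points here.
    }
}
```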
TouchFrameEventArgs has one public property named TimeStamp (which I haven't had occasion to use) and three important public methods:
TouchPoint GetPrimaryTouchPoint(UIElement relativeTo)
TouchPointCollection GetTouchPoints(UIElement relativeTo)
void SuspendMousePromotionUntilTouchUp()
The argument to GetPrimaryTouchPoint or GetTouchPoints is used only to report position information in the TouchPoint objects. You can pass null for this argument, in which case positions are reported relative to the upper-left corner of the entire Silverlight application.
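Putting the three methods together, a handler body might look something like the following sketch. It assumes the Silverlight TouchPoint class's Position and Action members and the TouchAction enumeration (Down, Move, Up); the comments mark what each call is doing:

```csharp
void OnTouchFrameReported(object sender, TouchFrameEventArgs args)
{
    // Passing null reports positions relative to the upper-left
    // corner of the entire Silverlight application.
    TouchPoint primaryPoint = args.GetPrimaryTouchPoint(null);

    if (primaryPoint != null && primaryPoint.Action == TouchAction.Down)
    {
        // Prevent this touch sequence from also being promoted
        // to the equivalent mouse events.
        args.SuspendMousePromotionUntilTouchUp();
    }

    // GetTouchPoints returns all current touch points, so this
    // loop sees every finger on the screen, not just the primary one.
    foreach (TouchPoint touchPoint in args.GetTouchPoints(null))
    {
        Point position = touchPoint.Position;
        // touchPoint.Action indicates Down, Move or Up.
    }
}
```

Whether to suspend mouse promotion is a design decision: suspending it keeps touch input from generating duplicate mouse events, but a program that also wants ordinary mouse behavior may choose to leave promotion alone.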