Since the birth of the personal computer, the mouse has been the standard way of interacting with it. Computers have grown ever more capable and more mobile, yet the way we interact with them has barely changed.
Recently, though, revolutionary products and inventions have begun to offer new human-computer interaction experiences. Driven by the demand for greater productivity, we may one day discard the mouse and keyboard altogether.
The following emerging technologies may change the way we interact with computers.
Multi-Touch
With a mouse or a notebook trackpad, clicking is the whole vocabulary: double-click an icon, drag a window. Multi-touch lets a simple finger gesture carry out a complex command. The iPhone screen is the best-known use of the technique: pinch two fingers to zoom a picture in or out, or scroll a page with a sliding gesture.
Multi-touch is used widely across Apple's product line, from the iPhone and iPod touch to the MacBook and the latest iPad, and other manufacturers are starting to follow in Apple's footsteps. Even Apple's new Magic Mouse is a touch-control device with gesture recognition. The change multi-touch brings to the ordinary computer is more efficient command input: human fingers replace the lone mouse pointer on the screen. The screen itself, however, is a real constraint on multi-touch. R. Clayton Miller invented an interface called 10/GUI to address the problem: instead of reaching up to the display, users rest their hands on a large touch surface and operate the screen with all ten fingers.
On the screen, ten visible dots represent the user's ten fingers; pinching or moving the fingers opens and scrolls pages.
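To make the gesture arithmetic concrete, here is a minimal sketch, not from the original article, of how a pinch gesture can be turned into a zoom factor: the scale is simply the ratio of the current distance between two touch points to their distance when the gesture began. All names here are illustrative.

```python
import math

def distance(p1, p2):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

class PinchGesture:
    """Tracks two touch points and reports a zoom factor.

    scale > 1 means the fingers moved apart (zoom in);
    scale < 1 means they moved together (zoom out).
    """

    def __init__(self, touch_a, touch_b):
        self.initial = distance(touch_a, touch_b)

    def scale(self, touch_a, touch_b):
        return distance(touch_a, touch_b) / self.initial

# Fingers start 100 px apart and spread to 150 px: zoom to 150%.
gesture = PinchGesture((100, 300), (200, 300))
print(gesture.scale((75, 300), (225, 300)))  # 1.5
```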
Gesture Sensing
A mouse wheel or an iPhone can already sense motion, but only gesture sensing lets an action play out in three-dimensional space.
In recent years, Nintendo's Wii console has brought gesture control to the public, and a number of manufacturers have since launched hand-gesture sensors aimed at gamers.
Oblong Industries, a Los Angeles company targeting desktop users, has developed a product called g-speak. Wearing a special pair of gloves and standing before a wall-mounted screen, the user can move images and data from one screen to another with sweeping gestures, much like the police officers in Spielberg's 2002 film Minority Report, where the technique made its on-screen debut.
Oblong's chief executive, Christian Rishel, believes the interface can rescue people from data overload. "If you are drowning in an ocean of data, you have to find the right data at the right time," he said.
Early adopters of the expensive technology include the military and oil companies, but Rishel believes all computers will use some form of it within 5 to 10 years.
By escaping the two-dimensional limits of today's interfaces, Rishel says, the technology will make human-computer interaction more efficient and more rewarding. "We're going to pull the data out and put it on the wall," he quipped.
Speech Recognition
What happens when we talk directly to the computer? The concept of speech recognition has been around for decades, and a series of software products has grown out of it. Most of the software acts as a transcription machine, and typing, by comparison, usually runs at only a third of normal speaking speed. Nuance, the Massachusetts company behind Dragon NaturallySpeaking, says its products also let people with physical disabilities operate computers without a traditional keyboard or mouse. "We have a group of core customers ... they use our software 100 percent of the time they are on the computer," said Peter Mahoney, Nuance vice president and head of the Dragon project.
Mahoney gave some examples of how Dragon recognizes voice commands and executes them. In Microsoft Word, for instance, just say "underline" and the software underlines the text. Users can interact with the software by speaking formatting marks ("line feed") and menu commands ("redo changes"). Say "Internet" and the browser opens; speech recognition even lets users select hyperlinks by reading them aloud. Other applications, such as e-mail, can be driven with equally simple voice commands.
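The pattern Mahoney describes, recognized phrases mapped to application actions, can be sketched as a simple dispatch table. This is an illustrative toy, not Dragon's actual design; the command names and handler functions are hypothetical stand-ins.

```python
# Toy dispatch of recognized phrases to editor actions; the handlers
# here just print, standing in for real editor or OS calls.
def underline():     print("formatting: underline selection")
def line_feed():     print("editing: insert line break")
def redo_changes():  print("menu: redo last change")
def open_browser():  print("app: launch web browser")

COMMANDS = {
    "underline": underline,
    "line feed": line_feed,
    "redo changes": redo_changes,
    "internet": open_browser,
}

def handle_utterance(text):
    """Run the action for a recognized phrase, if any; anything else
    would fall through to plain dictation."""
    action = COMMANDS.get(text.strip().lower())
    if action:
        action()
    else:
        print(f"dictation: {text}")

handle_utterance("underline")      # formatting: underline selection
handle_utterance("Hello, world.")  # dictation: Hello, world.
```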
"The Voice interface is very flexible and its application is limitless," Mahoney said in a telephone interview. "It has more potential than physical devices. ”
Eye Tracking
Since we are already looking at what we want to click, why not let our eyes do the clicking?
Eye-tracking technology relies on high-definition cameras and invisible infrared light sources to detect where the eyes are pointed. It has proved very useful in scientific and advertising research, but so far it has been used mostly by people with disabilities, and it remains expensive. GUIDe (Gaze-enhanced User Interface Design), a research project devoted to bringing eye tracking to everyday users, developed software called EyePoint: the user rests a hand on a key, focuses on a point on the screen, and the area around that point is magnified; pressing and releasing the key then triggers the action.
Test users found that gaze-based interaction with EyePoint was "faster and simpler ... because they're already looking at the target," according to Manu Kumar, who led the project at Stanford University a few years ago.
Kumar says EyePoint also puts less strain on the wrist than a traditional mouse, though its "look and click" approach has a slightly higher error rate than conventional point-and-click. "I firmly believe that this technology will evolve to replace the mouse," he says. So far, though, high cost remains the biggest drag on the spread of eye tracking.
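As a rough illustration of the "look, magnify, confirm" loop described above, the sketch below assumes a hypothetical eye tracker that reports a gaze point. It is not the actual EyePoint code, just the shape of the interaction.

```python
from dataclasses import dataclass

@dataclass
class GazeClicker:
    """EyePoint-style interaction sketch: press a hotkey while looking
    at a target, refine the choice in a magnified view, release to
    click. The tracker is a hypothetical stand-in for real hardware."""
    tracker: object
    zoom: float = 4.0

    def on_hotkey_press(self):
        # Remember where the user was looking and magnify that region.
        self.anchor = self.tracker.gaze_point()
        print(f"magnifying {self.zoom}x around {self.anchor}")

    def on_hotkey_release(self):
        # Map the gaze point inside the magnified view back to
        # original screen coordinates and click there.
        gx, gy = self.tracker.gaze_point()
        ax, ay = self.anchor
        target = (ax + (gx - ax) / self.zoom, ay + (gy - ay) / self.zoom)
        print(f"click at {target}")

class FakeTracker:
    """Stub that returns canned gaze points for demonstration."""
    def __init__(self, points): self.points = iter(points)
    def gaze_point(self): return next(self.points)

clicker = GazeClicker(FakeTracker([(400, 300), (440, 320)]))
clicker.on_hotkey_press()    # magnifying 4.0x around (400, 300)
clicker.on_hotkey_release()  # click at (410.0, 305.0)
```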
Brain-Computer Interfaces
Just think it, and the computer does it for you. This ultimate melding of human and computer is coming faster than you might expect, but a few hurdles stand between it and the general consumer market.
A brain-computer interface (BCI) translates the firing of nerve cells directly into commands for a screen or a machine. Like speech recognition, BCI helps people with physical impairments, such as stroke patients and people with amyotrophic lateral sclerosis (ALS). Over the past decade it has enabled many patients who cannot move their bodies to use computers.
One problem that has long dogged the development of commercial BCI for healthy users is the need for implanted electrodes to obtain clean neural signals, and implants can lead to infection, rejection and scarring. Non-invasive alternatives such as EEG, which reads activity in the cerebral cortex through a cap of electrodes worn on the scalp, have advanced rapidly in recent years.
At the CeBIT fair in Germany earlier this month, Guger Technologies showed off its new device, Intendix, which it calls "the world's first personal BCI speller." A virtual keyboard of letters and numbers appears on the screen; when the user concentrates on a character, Intendix detects the telltale brain activity and selects it. The company claims that Intendix lets injured and ill people communicate easily, that learning to use it takes only a few minutes, and that it can spell 5 to 10 characters per minute. That is far too slow for the healthy, of course, and the device costs a steep $12,000.
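Spellers of this kind typically work by flashing rows and columns of the on-screen grid and looking for the brain's involuntary response when the flashed group contains the character the user is focused on. The sketch below simulates that selection logic with a stubbed classifier; it is a simplified assumption about the approach, not Guger's implementation.

```python
import random

GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "567890"]

def classifier_score(flashed_group, target):
    """Stand-in for an EEG classifier: scores high when the flashed
    row/column contains the character the user attends to, plus
    noise. A real system scores measured brain responses instead."""
    return (3.0 if target in flashed_group else 0.0) + random.gauss(0, 1)

def spell_one(target, repetitions=10):
    """Flash each row and column `repetitions` times, accumulate
    scores, and pick the cell at the best row/column intersection."""
    rows = GRID
    cols = ["".join(r[c] for r in GRID) for c in range(6)]
    row_scores = [0.0] * 6
    col_scores = [0.0] * 6
    for _ in range(repetitions):
        for i in range(6):
            row_scores[i] += classifier_score(rows[i], target)
            col_scores[i] += classifier_score(cols[i], target)
    r = row_scores.index(max(row_scores))
    c = col_scores.index(max(col_scores))
    return GRID[r][c]

print(spell_one("H"))  # usually prints 'H'
```

The repetitions parameter mirrors the real trade-off such systems face: more flashes average out noise and raise accuracy, but slow spelling, which is one reason rates stay in the range of a few characters per minute.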
Related research on neural prosthetics, devices that connect to the brain and operate through neural signals, may also eventually yield viable desktop applications.
Whatever the future of human-computer interaction looks like, all of us will be using the mouse for a long while yet.
Source: http://www.livescience.com/technology/beyond-the-mouse-100314.html
Translation: Laker