class= "Post_content" itemprop= "Articlebody" >
The December 20 print edition of The New York Times carried an article titled "Brainlike Computers, Learning From Experience," which argued that as information technology continues to advance, future computers will to some extent simulate the way the human brain thinks and will be able to learn on their own.
The following is the main content of the article:
Computers have entered an era in which they learn from their own experience, and that will bring about a radical change in the digital world.
A new generation of computer chips is expected to arrive in 2014. They will not only automate tasks that currently require laborious programming, such as moving a robot's arm smoothly and efficiently, but will also sidestep and tolerate errors, potentially making the term "computer crash" obsolete.
Some large technology companies are already experimenting with this new style of computing, which mimics the biological nervous system, specifically the way neurons respond to stimuli and connect with other neurons to interpret information. It lets a computer absorb new information while performing tasks and adjust how it operates as external signals change.
In the coming years, this approach is expected to produce a new generation of artificial intelligence systems that can handle tasks humans find simple: listening, speaking, seeing, navigating, manipulating and controlling. That would be a huge leap for applications such as facial and speech recognition, navigation and planning, which are still in their early stages and rely heavily on programs written by humans.
Designers say this style of computing will let robots move through the world more safely, but it will still be some time before computers can think for themselves in the way science fiction imagines.
"We are moving away from the engineering computer system to a time when there are many biological computing features." Said Larry Smarr, an astrophysicist at the California Institute of Telecommunications and Information technology (Larry Small). The agency is one of many research institutes that are developing this new computer circuit.
Traditional computers are limited by what they have been programmed to do. A computer vision system, for example, can only "recognize" an object by running algorithms the system's designers specified in advance. Such an algorithm, often built on statistical data, works like a cookbook: the computer must follow its fixed steps to carry out the calculation.
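To illustrate the "cookbook" style of recognition described here, consider the minimal, hypothetical sketch below. Every rule and threshold (the feature names and cutoff values are invented for illustration, not taken from any real vision system) has to be written out by a programmer in advance, and the system can only follow those fixed steps.

```python
# A hypothetical rule-based "recognizer": every step and threshold is
# hand-written in advance, like a recipe the computer must follow.

def looks_like_a_cat(image_features: dict) -> bool:
    """Decide whether an image contains a cat using fixed, human-authored rules.

    `image_features` is assumed to be a dict of precomputed measurements,
    e.g. {"has_fur_texture": True, "ear_triangularity": 0.8, "eye_count": 2}.
    The feature names and thresholds are illustrative only.
    """
    if not image_features.get("has_fur_texture", False):
        return False                      # rule 1: must detect a fur-like texture
    if image_features.get("ear_triangularity", 0.0) < 0.7:
        return False                      # rule 2: ears must look pointed enough
    if image_features.get("eye_count", 0) != 2:
        return False                      # rule 3: exactly two visible eyes
    return True                           # all hand-written rules passed


# Anything the rules' author did not anticipate (a sleeping cat with hidden
# ears, say) is simply misclassified; the program cannot adapt on its own.
print(looks_like_a_cat({"has_fur_texture": True,
                        "ear_triangularity": 0.9,
                        "eye_count": 2}))   # -> True
```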
But last year Google researchers built a machine learning system based on "neural networks" that can learn to recognize objects on its own. The program scanned a database of 10 million images and taught itself to identify cats.
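A neural network, by contrast, is not given recognition rules at all; it adjusts internal connection weights from labeled examples. The toy sketch below is a deliberately tiny, self-contained stand-in using synthetic "images" and a single artificial neuron, not Google's actual system, which trained a far larger network on millions of real images. It is only meant to show the basic idea: the same code learns whatever distinguishes the two classes in its training data.

```python
import numpy as np

# Toy stand-in for image data: 64-pixel "images" where class 1 ("cat")
# tends to have brighter pixels in the upper half. Purely synthetic.
rng = np.random.default_rng(0)

def make_images(n, is_cat):
    imgs = rng.random((n, 64))
    if is_cat:
        imgs[:, :32] += 0.5          # brighter top half for the "cat" class
    return imgs

X = np.vstack([make_images(200, True), make_images(200, False)])
y = np.concatenate([np.ones(200), np.zeros(200)])

# A single artificial "neuron": weighted sum of pixels passed through a sigmoid.
w = np.zeros(64)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learning means nudging the weights to reduce prediction error on examples,
# rather than a programmer writing recognition rules by hand.
for step in range(2000):
    p = sigmoid(X @ w + b)           # current predictions
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

test = make_images(10, True)
print((sigmoid(test @ w + b) > 0.5).mean())  # fraction classified as "cat"
```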
In June, the company said it had used those neural network techniques to build a new search service that finds specific photos more accurately.
The new approach is being applied in both software and hardware, and its development has benefited from rapid advances in scientists' understanding of the brain. But Kwabena Boahen, who leads Stanford's "Brains in Silicon" research program, cautioned that scientists are still far from fully understanding how the brain works.
"We have no clue," he said. "I'm an engineer, and I build things. There are a lot of grandiose theories out there, but give me a theory I can build something with."
Until now, computers have been designed according to principles laid out by the mathematician John von Neumann about 65 years ago. In a von Neumann machine, a microprocessor executes long strings of binary instructions at high speed. Information is held as memory, either in the processor itself, in nearby memory chips, or on higher-capacity disks.
Data, such as a weather model or a word processing document, are shuttled in and out of the processor's short-term memory while the computer executes its predefined instructions, and the final results are written back to main memory.
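As a rough illustration of this von Neumann pattern, the toy machine below (an invented, minimal instruction set, not any real processor) fetches instructions one at a time, works on data held in a processor register, and writes the result back to main memory.

```python
# A toy von Neumann-style machine: one processor register, a flat memory,
# and a fixed program executed strictly in sequence.
# The instruction set here is invented purely for illustration.

memory = {"a": 2, "b": 3, "result": 0}   # "main memory"
register = 0                              # processor's short-term storage

program = [
    ("LOAD", "a"),        # copy a value from memory into the register
    ("ADD", "b"),         # add a memory value to the register
    ("STORE", "result"),  # write the register back to main memory
]

for op, addr in program:                  # fetch, decode, execute, one by one
    if op == "LOAD":
        register = memory[addr]
    elif op == "ADD":
        register += memory[addr]
    elif op == "STORE":
        memory[addr] = register

print(memory["result"])   # -> 5; the result ends up back in main memory
```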
The new processors, by contrast, consist of electronic components that are wired together in ways that mimic biological synapses. Because they are built from large numbers of neuron-like elements, they are known as neuromorphic processors, a term coined by the Caltech physicist Carver Mead in the 1980s.
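To convey the flavor of those neuron-like elements, the sketch below simulates a leaky integrate-and-fire neuron, a common textbook model, in ordinary software. The parameter values are arbitrary and the code does not describe any specific chip; it only shows how such a component accumulates weighted input over time and "fires" when a threshold is crossed.

```python
# A simplified leaky integrate-and-fire neuron, a common textbook model
# for the neuron-like elements neuromorphic chips are built from.
# Parameter values are arbitrary, chosen only for illustration.

leak = 0.9          # membrane potential decays toward zero each time step
threshold = 1.0     # potential at which the neuron "fires" a spike
potential = 0.0

# Incoming signals, each scaled by a synapse-like connection weight.
inputs = [0.3, 0.4, 0.0, 0.5, 0.6, 0.1, 0.7]
weight = 0.8

for t, x in enumerate(inputs):
    potential = leak * potential + weight * x   # integrate weighted input
    if potential >= threshold:
        print(f"t={t}: spike")                  # neuron fires...
        potential = 0.0                         # ...and resets
    else:
        print(f"t={t}: potential={potential:.2f}")
```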