A few months ago, I made the trek to the leafy campus of IBM Research in Yorktown Heights, New York, to get an early glimpse of the eagerly anticipated future of artificial intelligence. This is the birthplace of the supercomputer Watson, which in 2011 beat the best human champions on the quiz show Jeopardy!. The original Watson is still here: a computer system roughly the size of a bedroom, ringed by ten upright, refrigerator-shaped machines. Technicians can reach the cables on the backs of the machines through small openings between them. The inside of the system is surprisingly warm, as if the cluster were alive.
Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that can run several hundred instances of the AI at once. Like everything in the cloud, Watson serves customers around the world simultaneously, and they connect to it from phones, desktops, and their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance is immediately transferred to the others. And it is not a single program but an aggregation of software engines: its logical-deduction engine and its language-parsing engine can run on different code, on different chips, in different locations, with all of these elements converging into a unified stream of intelligence.
Users can tap into this always-on intelligence directly, or through third-party applications built on this AI cloud service. Like many parents of a gifted child, IBM would like Watson to pursue a medical career, so it is no surprise that one application under development is a medical diagnostic tool. Most earlier attempts at AI diagnosis ended badly, but Watson really works. In plain terms: when I entered the symptoms of a disease I once contracted in India, it produced a list of suspected illnesses ranked from most to least likely. It judged that I most likely had giardiasis, and it was right. The technology is not yet open to patients directly; IBM provides access to Watson's intelligence to partners, helping them develop user-friendly interfaces for subscribing doctors and hospitals. "I believe something like Watson, whether a machine or a human, will soon be the world's best diagnostician," says Alan Greene, chief medical officer of Scanadu, a startup inspired by the Star Trek medical tricorder that is using cloud AI technology to build a diagnostic device. "At the rate at which AI is improving, a child born today will rarely need to see a doctor for a diagnosis by the time they grow up."
Medicine is only the beginning. All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to the quantitative-analysis firm Quid, AI has attracted more than $17 billion in investment since 2009. Last year alone, 322 companies with AI-like technology received more than $2 billion in investment. Facebook and Google have recruited researchers for their in-house AI research teams. Since last year, Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all acquired AI companies. Private investment in AI has grown an average of 62% a year over the past four years, a rate that is expected to continue.
Amid all this activity, a picture of our AI future is coming into view, and it is neither HAL 9000, the supercomputer of the novel and film 2001: A Space Odyssey, a stand-alone machine animated by a charismatic (yet potentially homicidal) humanlike consciousness, nor the superintelligence that has singularity enthusiasts in raptures. The AI on the horizon looks more like Amazon Web Services: cheap, reliable, industrial-grade digital smartness running behind everything, occasionally flickering into view, otherwise nearly invisible. This common utility will supply as much intelligence as you need and no more than you want. Like all utilities, AI will be supremely boring, even as it transforms the internet, the global economy, and civilization. As electricity did more than a century ago, it will enliven inert objects. Everything that we formerly electrified we will now cognitize. Practical AI will also augment individual humans (deepening our memory, speeding our recognition) and the life of the human community. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra intelligence. In fact, the business plans of the next 10,000 startups are easy to forecast: take something and add AI. This is a big deal, and it is now in sight.
Around 2002 I attended a small party at Google. This was before its IPO, when the company thought only about web search. I fell into conversation with Larry Page, Google's brilliant co-founder, who became its CEO in 2011. "Larry, I still don't get it. There are so many search companies already. Why do free web search? How did you come up with that?" My unimaginative blindness is solid evidence that predicting is hard, especially about the future. But in my defense, this was before Google had ramped up its ad-auction scheme into real income, and long before YouTube or any of its other major acquisitions. I was not the only one who thought a search engine would not carry Google very far. But Page's answer has always stuck with me: "Oh, we're really making an artificial intelligence."
I've thought about that conversation a lot over the past few years, as Google has bought 14 AI and robotics companies. At first glance, given that search contributes 80% of Google's revenue, you might think it is expanding its AI portfolio to improve search. But I believe the opposite is true. Google is using search to improve its AI, not using AI to improve its search. Every time you type a query, click on a search-generated link, or create a link on the web, you are training Google's AI. When you type "Easter Bunny" into the image search box and click on the image that looks most like the Easter Bunny, you are teaching the AI what an Easter Bunny looks like. Google's 1.2 billion daily searchers have generated 121 billion queries, each of which tutors its deep-learning AI over and over again. With another 10 years of steady improvement to its AI algorithms, plus a thousand times more data and a hundred times more computing resources, Google will have an unrivaled AI. My prediction: by 2024, Google's main product will no longer be search but AI.
This view naturally invites skepticism. For nearly 60 years, AI researchers have predicted that the era of artificial intelligence is coming, yet until a few years ago it still seemed out of reach. There was even a term coined for the era of meager results and even more meager research funding: the AI winter. So have things really changed?
Yes. Three recent breakthroughs have made the long-awaited arrival of artificial intelligence imminent:
1. Low-cost parallel computing
Thinking is an inherently parallel process: billions of neurons fire at the same time to create the synchronous waves of computation that ripple through the cortex. Building a neural network, the primary architecture of AI software, likewise requires many different processes to run simultaneously. Each node of a neural network loosely imitates a neuron in the brain, interacting with its neighbors to make sense of the signals it receives. To understand a spoken word, a program must hear all the phonemes in relation to one another; to recognize an image, it must see every pixel in the context of the pixels around it. Both are deeply parallel tasks. Until recently, however, the standard computer processor could handle only one task at a time.
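To make that parallelism concrete, here is a minimal sketch in Python (not from the article; the layer size and the tanh activation are arbitrary illustrative choices). A single neural-network layer reduces to one matrix-vector product, and every node's activation in that product can be computed independently of all the others:

```python
import numpy as np

# One layer of a neural network: each of the 4,096 nodes combines the same
# 1,024 incoming signals through its own row of weights. Expressed as a
# matrix-vector product, the 4,096 activations are independent of one
# another, so the work can run fully in parallel.
rng = np.random.default_rng(0)
inputs = rng.standard_normal(1024)           # signals arriving at the layer
weights = rng.standard_normal((4096, 1024))  # one weight row per node

activations = np.tanh(weights @ inputs)      # all nodes computed at once
print(activations.shape)                     # (4096,)
```

On a one-task-at-a-time processor, those 4,096 dot products are computed one after another; the point of what follows is that a parallel chip can do them all at once.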
Things began to change more than a decade ago with the arrival of a new kind of chip, the graphics processing unit (GPU), devised to meet the intensely visual, and intensely parallel, demands of video games, in which millions of pixels must be recalculated many times a second. That workload requires a specialized parallel computing chip, added to the PC motherboard as a supplement. The parallel graphics chips worked, and gaming soared. By 2005, GPUs were being produced in such quantities that their prices collapsed. In 2009, Andrew Ng and a research team at Stanford University realized that GPU chips could run neural networks in parallel.
The discovery opened up new possibilities for neural networks, which can now accommodate hundreds of millions of connections between nodes. Traditional processors required several weeks to calculate all the cascading possibilities in a neural net with 100 million nodes. Ng found that a cluster of GPUs could accomplish the same task in a day. Today, companies built on cloud computing routinely run neural networks on GPUs: Facebook uses the technology to identify friends in users' photos, and Netflix relies on it to make dependable recommendations for its 50 million subscribers.
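The article contains no code, but a hedged sketch shows how small the gap is in practice today. Using PyTorch (my choice of library, not anything the article names; the sizes are arbitrary), the same parallel matrix arithmetic runs on a CPU or a GPU with a one-line change:

```python
import torch
import torch.nn as nn

# A toy two-layer network. The math is the same parallel matrix arithmetic
# as in the earlier sketch; what changes is which device executes it.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.Tanh(),
    nn.Linear(4096, 10),
)

# Fall back to the CPU when no GPU is present, so the sketch stays runnable.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(256, 1024, device=device)  # 256 inputs processed together
scores = model(batch)
print(scores.shape, device)                    # torch.Size([256, 10]) cuda|cpu
```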
2. Big data
Every kind of intelligence must be trained. Even the human brain, naturally gifted at categorizing things, still needs to see a dozen examples before it can distinguish cats from dogs. That is all the more true of artificial minds. Even the best-programmed chess computer has to play at least a thousand games before it performs well. Part of the AI breakthrough lies in the massive amounts of data collected about our world, which give AI the schooling it needs. Giant databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe have become the teachers that make AI smart.
3. Better algorithms
Digital neural networks were invented in the 1950s, but it took computer scientists decades to learn how to tame the astronomically huge combinatorial relationships among a million, let alone a billion, neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing a face. When a group of bits in a neural net is found to form a pattern, say, the image of an eye, the result is passed up to another layer of the net for further parsing. The next layer might group two eyes together and pass that meaningful chunk of data up to a third layer, which can associate it with the pattern of a nose (for analysis). Recognizing a face can require millions of such nodes (each one producing a calculation used by the nodes around it), stacked up to 15 layers high. In 2006, Geoff Hinton, then at the University of Toronto, made a key improvement to this method, which he dubbed "deep learning." He was able to mathematically optimize the results from each layer so that learning accumulated faster as the layers were stacked. A few years later, when deep-learning algorithms were ported to GPU clusters, their speed improved dramatically. Deep-learning code alone is not enough to produce complex logical thinking, but it is an essential component of all current AI products, including IBM's Watson, Google's search engine, and Facebook's algorithms.
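The stacked-layer idea can itself be sketched in a few lines of Python (a hypothetical illustration, not IBM's or Hinton's code; the layer sizes and the "edges/eye/face" labels are assumptions made for the example). Each layer parses the previous layer's output and hands its result up:

```python
import torch
import torch.nn as nn

# Three stacked layers. The comments label the kind of pattern each level
# *might* come to detect after training; an untrained net detects nothing yet.
layers = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),  # level 1: local patterns ("edges")
    nn.Sequential(nn.Linear(256, 64), nn.ReLU()),   # level 2: groupings ("an eye")
    nn.Sequential(nn.Linear(64, 2)),                # top level: face vs. not-face
])

x = torch.randn(1, 784)   # a flattened 28x28 grayscale image
for layer in layers:
    x = layer(x)          # each result is passed up to the next level
print(x)                  # raw scores for the two classes
```

Real face recognizers stack many more levels and millions of nodes, but the pattern of passing results upward is the same.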
This perfect storm of parallel computation, bigger data, and deeper algorithms produced the 60-years-in-the-making overnight success of AI. And the convergence suggests that as long as these technological trends continue, and there is no reason to think they won't, AI will keep improving.
As it does, cloud-based AI will become an ever more ingrained part of our daily lives. But there will be no free lunch. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster than the network grows in size. The bigger the network, the more attractive it is to new users, which makes it bigger still, and thus more attractive, and so on. A cloud that serves AI obeys the same law. The more people who use an AI, the smarter it gets; the smarter it gets, the more people use it; the more people who use it, the smarter it gets, and so on. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that no new rival can catch up. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose, cloud-based commercial intelligence products.
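One common way to model the claim that value grows faster than size is Metcalfe's law (my illustration; the article gives no formula): if a network's value tracks the number of possible connections among its n users, n(n-1)/2, then value grows roughly with the square of the user count.

```python
# Increasing returns, illustrated with Metcalfe's law: if a network's value
# tracks its possible pairwise connections, 10x the users means ~100x the value.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {connections(users):>11,} possible connections")
```

An AI cloud compounds this further: each new user not only adds connections but also adds training data that makes the product smarter for everyone else.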
In 1997, Watson's predecessor, IBM's Deep Blue, beat the reigning chess grand master Garry Kasparov in a famous man-versus-machine match. After computers won a few more matches, people largely lost interest in such contests. You might think that was the end of the story, but Kasparov realized he could have performed better against Deep Blue if he'd had the same instant access to a massive database of all previous chess moves that Deep Blue had. If this database tool was fair for an AI, why not for a human? To explore the idea, Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competing against them.
These are now called freestyle chess matches, and they resemble mixed-martial-arts bouts: contestants may use any fighting technique they like. You can play unaided; you can accept help from a computer running supersmart chess software, simply moving the pieces as it suggests; or you can play as the human-machine hybrid, the "centaur," that Kasparov advocates. A centaur player heeds the move recommendations the AI whispers in his ear but will sometimes overrule them, much the way we use GPS navigation while driving. In the 2014 Freestyle Battle tournament, which was open to every mode of contestant, pure chess AI engines won 42 games, while centaurs won 53. The best chess player alive today is a centaur: Intagrand, a team of several humans and several different chess programs.
But the most surprising thing is that the advent of AI did not lower the level of purely human chess players. Quite the opposite: cheap, supersmart chess software has inspired more people than ever to play chess, more games are played than ever before, and players are better than ever. There are more than twice as many grand masters now as there were when Deep Blue beat Kasparov. Today's top-ranked human player, Magnus Carlsen, trained with AIs; he is considered the most computer-like of all human players, and he holds the highest rating of any human grand master in history.
If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work done by AI will be done by special-purpose software strictly limited to what smart software can do: translating one language into another, for instance, but not into a third; able to drive a car but not to converse with a passenger; able to recall every pixel of every video on YouTube but unable to anticipate your daily routine. In the next decade, 99% of the AI products you interact with, directly or indirectly, will be these narrowly focused, extremely smart specialists.
In fact, this is not real intelligence, at least not as we like to think of it. Indeed, intelligence may be a liability, especially if by intelligence we mean our peculiar self-awareness, all our frantic loops of introspection, and our messy currents of self-consciousness. We want a driverless car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. A synthetic Dr. Watson at the hospital should be single-minded in its work, never wondering whether it should have majored in English instead. As AI develops, we may have to engineer ways to keep it from becoming conscious; the most premium AI services will likely be advertised as consciousness-free.
What we want is not intelligence but artificial smartness. Unlike general intelligence, smartness is focused, measurable, and specific. It can also think in ways completely different from human cognition. A good example of non-human thinking is the impressive stunt Watson performed at the South by Southwest festival in Austin, Texas, this March: IBM researchers fed Watson a database of online recipes, the USDA's nutrition tables, and flavor research on what makes dishes taste good. From this data, Watson invented new dishes based on flavor profiles and existing models. One crowd favorite it created was a tasty version of fish and chips, made from ceviche and fried plantains. At the IBM lab in Yorktown Heights, I relished that dish, along with another Watson invention: Swiss-Thai asparagus quiche. Not bad at all!
Non-human intelligence is not a bug; it is a feature. The chief advantage of AI is its alien intelligence. An AI thinks about food differently from any chef, which lets us look at food differently, or think differently about manufacturing materials, clothing, financial derivatives, or any branch of science or art. The alienness of AI will become more valuable to us than its speed or power.
In fact, AI will help us better understand what we meant by intelligence in the first place. In the past, we might have said that only a superintelligent AI could drive a car, or beat a human on Jeopardy! or at chess. But once AI did each of those things, we dismissed the achievement as mechanical and rote, hardly worthy of the label "true intelligence." Every success in AI redefines it.
But we are not just redefining what we mean by AI; it is redefining what we mean by human. Over the past 60 years, as mechanical processes have replicated actions and talents we once thought uniquely human, we have had to change our minds about what sets us apart. As we invent more and more kinds of AI, we will be forced to surrender still more of what is supposedly unique about humans. Over the next decade, indeed over the next century, we will live in a protracted identity crisis, continually asking ourselves what it means to be human. Ironically, the greatest benefit of everyday, practical AI will not be increased productivity, an expanded economy, or a new way of doing research, though all of those will happen. The greatest benefit of AI is that it will help us define humanity. We need AI to tell us who we really are.
The original article appeared in Wired, written by Kevin Kelly, under the title "The Three Breakthroughs That Have Finally Unleashed AI on the World." Translated by Shen.