The top AI can be fooled by a simple pattern? That's because humans can no longer understand how it works.


Look at the black and yellow stripes below and tell me what you see. Nothing in particular, right? Ask a top AI the same question, however, and it will tell you that the pattern shows a school bus, and that it is more than 99% confident in that assessment. The answer is 100% wrong.

Computers have become remarkably good at identifying objects. A new paper, however, directs our attention to areas where these hyper-intelligent algorithms are completely useless. It details how researchers used randomly generated, simple images to fool the most advanced deep neural networks, which again and again mistook meaningless abstract patterns for parrots, ping-pong paddles, bagels, and butterflies.

These findings force us to confront a simple but extremely important fact: computer vision and human vision are fundamentally different. Yet as computers rely ever more on neural networks to learn to see, we are not even sure how computer vision differs from human vision. As Jeff Clune, one of the study's researchers, put it, in artificial intelligence "we can get results without knowing how we are getting them."

Evolving pictures to fool the AI

One way to find out why these self-trained algorithms are so clever is to find where they are stupid. In this case, Clune and PhD students Anh Nguyen and Jason Yosinski set out to see whether top image-recognition neural networks were susceptible to false positives. We know that a computer can recognize a koala. But could it be made to see something else entirely as a koala?

To find the answer, the team generated random images with an evolutionary algorithm; in essence, they bred highly effective visual decoys. In an evolutionary algorithm, the program generates a picture and then makes a slightly altered copy of it (a mutation). Both the original and the mutated picture are shown to a neural network trained on ImageNet, a collection of 1.3 million images that has become an essential resource for training computer-vision AI. If the network is more confident about the mutated picture, the researchers keep it and mutate it again; otherwise, they step back and try a different mutation. Clune said: "This is not survival of the fittest, but survival of the prettiest." Or, more precisely, the picture the computer recognizes with the highest confidence survives.
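A minimal Python sketch of that loop might look like the following. Everything here is illustrative: classifier_confidence is a hypothetical stand-in for an ImageNet-trained network, and the actual paper used richer evolutionary algorithms (with populations and indirect image encodings) rather than this single-image hill climb.

```python
import numpy as np

def classifier_confidence(img, target_class):
    """Hypothetical stand-in: should return the network's confidence
    (0.0 to 1.0) that `img` depicts `target_class`."""
    raise NotImplementedError("plug in a real ImageNet-trained model here")

def evolve_fooling_image(target_class, steps=10_000, mutation_scale=0.1, seed=0):
    """Hill-climbing sketch of the evolutionary idea: mutate an image
    and keep the mutant only if the classifier grows *more* confident
    that it shows the target class."""
    rng = np.random.default_rng(seed)
    img = rng.random((224, 224, 3))                 # start from random noise
    best = classifier_confidence(img, target_class)
    for _ in range(steps):
        noise = mutation_scale * rng.standard_normal(img.shape)
        mutant = np.clip(img + noise, 0.0, 1.0)     # slightly altered copy
        score = classifier_confidence(mutant, target_class)
        if score > best:                            # "survival of the prettiest"
            img, best = mutant, score
    return img, best
```

Pointed at a real model, a loop like this keeps climbing the classifier's confidence for the chosen label until the image, however meaningless to a human, scores above 99%.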

Eventually, the technique generated dozens of images that the neural network classified with more than 99% confidence. To your eye, these pictures look like nothing in particular: a series of blue and orange wavy lines, a cluster of ellipses, yellow and black stripes. But to the artificial intelligence, they are obvious matches: a goldfish, a remote control, and a school bus, respectively.

A glimpse inside the black box

In some cases, you can begin to understand how the artificial intelligence is being fooled. Squint, and a school bus does look like alternating yellow and black stripes. Similarly, you can see how the randomly generated image the AI labels "monarch butterfly" does resemble butterfly wings, and how the "ski mask" picture does look like an exaggerated human face.

But things get much stranger. Using a slightly different evolutionary technique, the researchers produced another set of images that fooled the AI just as reliably: pure static. These pictures look nearly identical to one another, like the snow on a broken television. Yet the top neural networks declared, with 99% confidence, that they showed a centipede, a cheetah, and a peacock.

For Clune, these findings suggest that neural networks identify objects by accumulating many visual cues. Some of those cues may resemble the ones humans use (as with the school bus); others may not. The static results show that, at least some of the time, these cues can be extremely fine-grained. Perhaps during training the network noticed that a run of "green pixel, green pixel, purple pixel, green pixel" is common in photographs of peacocks. When Clune and his team produced images that happened to contain the same runs, they triggered the "peacock" label. The researchers could likewise trigger the "lizard" label with abstract images that looked nothing alike, suggesting that the network relies on several such cues to identify each object, and that any one of them can fire the classification on its own.
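That cue-firing behavior is easy to illustrate with a toy example. In the sketch below (illustrative only; the pixel encoding and filter values are invented, not taken from the paper), a single convolution-style "cue detector" responds just as strongly to its pattern buried in static as to the same pattern inside a structured image.

```python
import numpy as np

# Toy "cue detector": a filter tuned to the pixel run
# green, green, purple, green (encoded here as 1, 1, -1, 1).
CUE = np.array([1.0, 1.0, -1.0, 1.0])

def max_activation(row):
    """Slide the filter across a 1-D row of pixels and return its
    strongest response, like a max-pooled convolutional feature."""
    return np.convolve(row, CUE[::-1], mode="valid").max()

rng = np.random.default_rng(42)

# A row from a structured "peacock" image containing the cue...
peacock_row = np.concatenate([rng.normal(0, 0.1, 20), CUE, rng.normal(0, 0.1, 20)])

# ...and pure static that happens to contain the same run.
static_row = rng.normal(0, 0.3, 44)
static_row[10:14] = CUE

print(max_activation(peacock_row))  # strong response (about 4.0)
print(max_activation(static_row))   # just as strong
```

A detector like this has no notion of the rest of the image; if the cue appears anywhere, it fires, which is how a field of noise can score as a peacock.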

The fact that carefully crafted images can fool these algorithms also points to a larger truth about today's AI: even when the algorithms work, we don't always know why they work. "These models have become very large and complex, and they are learning for themselves," says Clune, who heads the Evolving Artificial Intelligence Laboratory at the University of Wyoming. "There are millions of neurons in these networks, and they are all doing their own thing. We don't really understand how they accomplish such amazing feats."

Related research is trying to reverse-engineer these models, to sketch at least the rough outlines of how the AI works. "In the past year or two, we've learned a lot about what goes on inside the black box of a neural network," Clune explains. "It's still blurry, but we're starting to see it."

Why do computer misclassifications matter, anyway?

Earlier this month, Clune discussed the findings with fellow researchers at the Neural Information Processing Systems conference in Montreal, which gathers some of the smartest thinkers in artificial intelligence. The responses fell into two camps. The first, generally older and more experienced in the field, said the study made sense. They might not have predicted these exact results, but they found them perfectly reasonable.

The second camp, made up of people who had not spent as much time thinking about what makes today's computer brains tick, was shocked. At least at first, they were surprised that such powerful algorithms could be so plainly wrong. Keep in mind, these were people who had published neural-network papers and presented at this year's top AI conferences.

For Clune, the polarized reaction signals a generational shift in artificial intelligence. A few years ago, the people working with artificial intelligence were the people building artificial intelligence. Today, neural networks are good enough that researchers can simply take what exists and use it. "In many cases, you can take these algorithms off the shelf and use them to solve your problem," Clune says. "People are flocking in and using artificial intelligence, just like a gold rush."

This is not necessarily a bad thing. But as more and more things are built on artificial intelligence, it becomes ever more important to probe its flaws. If an algorithm needs only a single line of pixels to decide that a picture shows an animal, think how easy it would be to slip pornographic images past a safe-search filter. In the short term, Clune hopes the study will spur other researchers to develop algorithms that consider images globally; in other words, algorithms that make computer vision more like human vision.

The study also prompts us to look for other places these flaws might surface. Facial recognition, for instance, is built on the same technology; does it share the weakness? "Yes," Clune said, "facial recognition algorithms are very much affected by the same problem."

You can also imagine all sorts of mischievous applications of this discovery. Perhaps a 3D-printed nose would be enough to make a computer think you are someone else, or a dress printed with the right geometric pattern would make a surveillance system ignore you entirely. The finding confirms that as computer vision is used more widely, the opportunities to subvert it multiply.

In the bigger picture, as we move toward self-learning systems, the discovery also reminds us of a rapidly approaching reality. For now, we still control what we create. But as these systems increasingly build themselves, we will soon find them too complicated to see through. "It's no longer code that human beings can read," says Clune. "It's like an economy made up of interacting parts, with intelligence emerging out of the middle of it."

We will certainly put that intelligence to use right away. Whether we will fully understand it as we do is far less clear.

Via Wired
