On Explainability of Deep Neural Networks


During a discussion yesterday with software architect extraordinaire David Lazar about how everything old is new again, the topic of deep neural networks and their amazing success came up. Unless one has been living under a rock for the past five years, the advancements in artificial neural networks (ANNs) have been quite significant and noteworthy. Since the thaw of the AI winter, the once frowned-upon wave has come a long way to become a successful and relied-upon technique in multiple problem spaces. From an interesting apocryphal story that sums up the state of ANNs back in the day, to the current state of ConvNets, with Google Translate squeezing deep learning onto a phone, significant progress has been made. We have all seen the dreamy images of Inceptionism: Going Deeper into Neural Networks, with great results in image classification and speech recognition while fine-tuning network parameters. Beyond the classical feats of Reading Digits in Natural Images with Unsupervised Feature Learning, deep neural networks (DNNs) have shown outstanding performance on image classification tasks. We now have excellent results on MNIST, on ImageNet Classification with Deep Convolutional Neural Networks, and in the effective use of deep neural networks for object detection.

Otavio Good of Google puts it quite well:

Five years ago, if you gave a computer an image of a cat or a dog, it had trouble telling which was which. Thanks to convolutional neural networks, not only can computers tell the difference between cats and dogs, they can even recognize different breeds of dogs.

Geoffrey Hinton et al. noted that

The best system in the competition got 47% error for its first choice and 25% error for its top 5 choices. A very deep neural net (Krizhevsky et al.) gets less than 40% error for its first choice and less than 20% for its top 5 choices.

Image courtesy: XKCD and http://pekalicious.com/blog/training/

Amid all this fanfare, what could possibly go wrong?

In deep learning systems, where both the classifiers and the features are learned automatically, neural networks possess a grey side: the explainability problem.

Explainability and determinism in ML systems are a larger discussion, but to limit the scope and stay within the context of neural nets: when you see the unreasonable effectiveness of recurrent neural networks, it is important to pause and ponder why it works. Is it good enough that I can peek into this black box by getting strategic heuristics out of the network, or infer the concept of a cat from a trained neural network by building high-level features using large-scale unsupervised learning? Does it make it a 'grey box' if we can extract word embeddings from the network in high-dimensional space, and thereby exploit similarities among languages for machine translation? The very non-deterministic nature of the training is problematic, for instance in how you choose the initial parameters, since the starting point for gradient descent is of key importance when training with back-propagation. How about retrainability? The imperviousness makes troubleshooting harder, to say the least.
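
To make the initialization point concrete, below is a minimal sketch (my own toy construction, not from the original post) that trains the same tiny network on the same data several times, changing nothing but the random seed; the runs typically settle into different weights and different final losses.

```python
import numpy as np

def train_mlp(seed, X, y, hidden=8, lr=0.1, steps=5000):
    """Train a tiny one-hidden-layer MLP with full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)            # forward pass
        out = h @ W2 + b2
        g_out = 2 * (out - y) / len(X)      # gradient of mean squared error
        g_h = g_out @ W2.T * (1 - h ** 2)   # back-propagate through tanh
        W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
        W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)

# Same data, same architecture: only the random starting point differs.
X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X)
for seed in (0, 1, 2):
    print(f"seed={seed}: final training loss = {train_mlp(seed, X, y):.6f}")
```

The point is not that any single run is wrong; it is that the trained artifact is a product of its starting point, which complicates reproducing, auditing, and explaining it.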

If you haven't noticed, I am trying hard not to make this a pop-science alarmist post, but here is the leap I am going to take: the relative lack of explainability and transparency inherent in neural networks (and the community's relative complacency towards the approach 'because it just works'), this idea of black-boxed intelligence, is probably what may lead to the larger issues identified by Gates, Hawking, and Musk. I would be the first one to admit that this argument might be a stretch, an over-generalization of the shortcomings of a specific technique into a doomsday scenario, and we might be able to 'decrypt' the sigmoid, at which point all these fears would go away. However, my fundamental argument stands: if the technique isn't quite explainable, then with the ML proliferation we have today, the unintended consequences might be too real to ignore.

As strong AI emerges from weak AI, the concern about explainability grows. There is no denying that it can be challenging to understand what a neural network is really doing under all those layers of approximating functions. For the happy-path scenario, when a network is trained well, we have seen repeatedly that it does achieve high-quality results. However, it is still perplexing to comprehend the underpinnings of how it does so. Even more alarmingly, when the network fails, it is hard to understand what went wrong. Can we really shrug off the skeptics fearful about the dangers that seemingly sentient artificial intelligence (AI) poses? As Bill Gates articulately said (practically refuting Eric Horvitz's position):


I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

The non-probabilistic nature of a technique like a neural network also poses a larger concern in terms of understanding the confidence of the classifier. The convergence of a neural network isn't really clear, whereas for an SVM it is fairly trivial to validate. Depicting the approximation of an 'undocumented' function as a black box is most probably a fundamentally flawed idea in itself. If we equate this with the biological thought process, the signals and the corresponding trained behavior, we have an expected output based on the training set as an observer. However, with a non-identifiable model, the approximation provided by the neural network is fairly impenetrable for all intents and purposes.
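
As a hedged illustration of the confidence point (my example, assuming scikit-learn is available; none of this is from the original post), one can contrast an RBF SVM's decision value with a small neural network's reported class probability on a probe point far outside the training data:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Two well-separated 2-D clusters as toy training data.
X, y = make_blobs(n_samples=200, centers=[(-2, 0), (2, 0)],
                  cluster_std=0.5, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)

# A probe point far from anything seen during training.
far_away = np.array([[40.0, 40.0]])

# RBF kernel values vanish far from the support vectors, so the SVM's
# decision value collapses toward a small constant: a visible "I don't know".
print("SVM decision value:", svm.decision_function(far_away))

# The MLP's probability output can saturate near 0 or 1 arbitrarily far
# from the data: maximal reported confidence with nothing to support it.
print("MLP class probabilities:", mlp.predict_proba(far_away))
```

The SVM's geometry (support vectors, kernel, margin) is inspectable, which is the sense in which its behavior is fairly trivial to validate compared to a trained stack of weights.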

I don't think anyone with a deep understanding of AI and machine learning is really worried about Skynet at this point. As Andrew Ng said:

"Fearing a rise of killer robots is like worrying about overpopulation on Mars."

The concern is more about adhering to the "but it works!" (aka if-it-fits-I-sits) approach (the mandatory cat meme goes here).

The sociological challenges associated with self-driving trucks and taxis, automated delivery, and their effect on employment are real, but those are regulatory issues. The key issue lies at the heart of the technology and our understanding of its internals. Stanford's Katie Malone put it quite well in the Linear Digressions episode on neural nets.

Even though it sounds like common sense that we would like to have controls in place so that automation is not allowed to engage targets without human intervention, and luminaries such as Hawking, Musk, and Wozniak would like to ban autonomous weapons, urging AI experts onward, our default reliance on black-box approaches could make this nothing more than wishful thinking. As Stephen Hawking said:

"The primitive forms of artificial intelligence we already have the proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at a ever-increasing rate. Humans, who is limited by slow biological evolution, couldn ' t compete and would is superseded. "

It might be fair to say that since we don't completely understand a new technique, it makes us afraid (of change), and that this will be adapted to as the field moves forward. As great as the results are, for non-black-box models, or interpretable models such as regression (a closed-form approximation) and decision trees/belief nets (graphical representations of deterministic and probabilistic beliefs), there is the comfort of determinism and understanding. We know today that small changes to a neural network's input can lead to significant changes in its output; this is one of the "intriguing" properties of neural networks. In their paper, the authors demonstrated that small changes can cause large issues:

We find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network's prediction error ...

We demonstrated that deep neural networks have counter-intuitive properties both with respect to the semantic meaning of individual units and with respect to their discontinuities.

The existence of the adversarial negatives appears to be in contradiction with the network's ability to achieve high generalization performance. Indeed, if the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? A possible explanation is that the set of adversarial negatives is of extremely low probability ... However, we don't have a deep understanding of how often adversarial negatives appear ...
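
To make the quoted finding tangible, here is a minimal sketch of an adversarial perturbation. It uses the fast-gradient-sign idea from Goodfellow et al.'s follow-up work rather than the L-BFGS procedure of the paper quoted above, applied to a one-layer sigmoid "network" on synthetic data; the dataset and the epsilon are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem: two overlapping Gaussian clusters in 20 dimensions.
d = 20
X = np.vstack([rng.normal(-0.3, 1.0, (200, d)), rng.normal(0.3, 1.0, (200, d))])
y = np.array([0] * 200 + [1] * 200)

# Train a one-layer sigmoid "network" (logistic regression) by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid activations
    g = p - y                             # dLoss/dlogit for cross-entropy
    w -= 0.5 * X.T @ g / len(X)
    b -= 0.5 * g.mean()

x = np.full(d, -0.3)                      # the class-0 mean: an unambiguous input
p_clean = 1 / (1 + np.exp(-(x @ w + b)))

# Fast-gradient-sign step: for true label 0, dLoss/dx = p * w, whose sign
# is simply sign(w); nudge every coordinate by a small epsilon in that direction.
eps = 0.5
x_adv = x + eps * np.sign(w)
p_adv = 1 / (1 + np.exp(-(x_adv @ w + b)))

print(f"P(class=1) clean: {p_clean:.3f}   adversarial: {p_adv:.3f}")
```

A nudge of half a noise standard deviation per feature is typically enough to swing the model from a confident "class 0" to a confident "class 1", which is exactly the discontinuity the authors describe.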

Let's be clear: when we discuss the black-box nature of ANNs, we are not talking about the single-unit perceptron being capable of learning only linearly separable patterns (Minsky et al., '69). It is well established that the inability of single-layer networks to learn the XOR function does not extend to the multi-layer perceptron (MLP), as the sketch below illustrates; convolutional neural networks (CNNs) are a working proof to the contrary, being biologically-inspired variants of MLPs built on the explicit assumption that the input comprises images, so that certain properties can be embedded into the architecture. The point is against the rapid adoption of a technique which is black-box in nature, with greater computational burden, inherent non-determinism, and proneness to over-fitting, over its 'better' counterparts. To paraphrase Jitendra Malik, without being an NN skeptic, there is no reason that multi-layer random forests or SVMs cannot achieve the same results. During the AI winter we made ANNs a pariah; aren't we repeating the same mistake with other techniques now?
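
As a hedged aside (my construction, not the author's), the classic contrast in code: the perceptron learning rule never separates XOR, while one hidden layer fits it exactly.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# 1) Single-layer perceptron: no linear threshold can separate XOR.
w, b = np.zeros(2), 0.0
for _ in range(100):                          # perceptron learning rule
    for xi, yi in zip(X, y):
        pred = float(xi @ w + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)
print("perceptron accuracy on XOR:", np.mean((X @ w + b > 0) == y))  # <= 0.75

# 2) One hidden layer is enough.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
t, lr = y.reshape(-1, 1), 1.0
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    g = (p - t) / len(X)                      # cross-entropy gradient at the logit
    gh = g @ W2.T * (1 - h ** 2)              # back-propagate through tanh
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
print("MLP predictions on XOR:", (p > 0.5).astype(int).ravel())  # typically [0 1 1 0]
```

The asymmetry is the article's point in miniature: the perceptron's failure is provable from its geometry, while the MLP's success is only observable from its outputs.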

Elon Musk recently tweeted:

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

And even though things might not be so bad right now, let's conclude with the following quote from Michael Jordan in IEEE Spectrum.

Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it's largely a rebranding of neural networks, which go back to the 1980s. ... In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems ... is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

Now this also leaves the other fundamental question: is the pseudo-mimicry of biological neural nets actually a good approach to emulating intelligence? Or is Noam Chomsky right in Where Artificial Intelligence Went Wrong?

That, we'll talk about some other time.

References

    • Neural Networks, Manifolds, and Topology
    • The Future of AI: A Non-Alarmist Viewpoint
    • Stephen Hawking Warns Artificial Intelligence Could End Mankind
    • A Shallow Introduction to Deep Machine Learning
    • Computer Science: The Learning Machines
    • DARPA SyNAPSE Program (artificialbrains.com)
    • On Explainability in Machine Learning
    • Killed by AI Much? A Rise of Non-Deterministic Security!
