Why are so few schools involved in deep learning? Why are they still hooked on Bayesian methods?
First, this question assumes that every university should have a "deep learning" person. Deep learning is mostly used in vision (and to a lesser extent NLP), and many universities don't have researchers in those areas, so they wouldn't have a deep learning researcher either.
One thing that people often forget is that academics have long careers (thanks to tenure, this is by design). So if you hire a bunch of researchers now who do deep learning, they're going to be around for decades. Academia tends to be conservative, so it's not going to stock up on deep learning researchers just because it's cool today. If that were the norm, CS departments would be full of the fuzzy logic researchers hired in the 90s.
There's nothing magical about deep learning. It's one tool of many (including Bayesian methods, discriminative methods, etc.) that you should have in your toolbox. Departments try to hire bright people, not those who slavishly follow every fad. Obviously, there will be more people on faculties who do deep learning in the near future. (If Facebook, Google, and Baidu don't hire them all first, that is.)
That said, there are lots of folks working in this area. Of the schools mentioned in the question, there are Noah Smith at UW and Katrin Erk at Texas. Other places (off the top of my head) that work in this area: UMass, JHU, Maryland, NYU, Montreal, Michigan, and TTI. I'm more upset that Princeton and Caltech (where I did my PhD and undergrad) don't have professors in CS who do language research. That's the bigger crime in my opinion, and it's correlated with their lack of deep learning folks.
Blatant self-promotion... Colorado has three folks working in this area: me, Mike Mozer, and Jim Martin.
Cui Caihao, PhD candidate in CS & IT:
There is no conflict between these methods; deep learning and Bayesian methods are both useful machine learning tools for solving real problems. Deep learning allows computational models composed of multiple layers to learn representations of data with multiple levels of abstraction. This acts as an automatic feature extractor, which can save a lot of engineering effort and domain expertise.
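To make the "multiple levels of abstraction" point concrete, here is a minimal sketch in plain NumPy (untrained random weights; all shapes and names are illustrative, not taken from any answer here) of how each layer re-represents the output of the layer below it:

```python
import numpy as np

def relu(x):
    # elementwise nonlinearity; without it, stacked layers collapse into one linear map
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# a batch of 4 raw inputs with 8 measured features each
x = rng.normal(size=(4, 8))

# in practice these weights are learned by gradient descent, not sampled
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

h1 = relu(x @ W1)   # first level of abstraction, learned rather than hand-engineered
h2 = relu(h1 @ W2)  # second level, built on top of the first
y = h2 @ W3         # task-specific output, e.g. 4 class scores
```

Here h1 and h2 play the role a feature engineer would otherwise fill by hand, which is the "saves engineering effort" point above.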
Bayesian methods are also used in some parts of deep learning, like Bayesian nets, etc. Some schools may look like they aren't involved in deep learning, but they actually share the same knowledge base and philosophy in this area. If someone is good at machine learning or statistical learning, they'll feel no pressure doing work in deep learning.
Here's a paper about Deep Learning published last month in Nature: Page on nature.com. The authors are very famous in this field, and my friend, if you meet a guy doing AI or ML and he tells you he has never heard of any of them, you have an obligation to wake him up, lol~
Here is a reply from Yann LeCun | Facebook.

Jane Lee, Data Mining for businesses and manage...:
I just want to quote Yann LeCun's answer on Facebook: https://www.facebook.com/yann.le ...
The key ideas are: first, there's no opposition between "deep" and "Bayesian". Second, it takes time to acquire the skills and talent to be professional in deep learning.
Piero Savastano:
There was big hype in the 80s around "shallow" neural networks. I don't know why, but bio-inspired models in artificial intelligence seem to follow a cycle of popularity and discontent, whereas purely statistical methods seem to be less hyped but more constant in popularity.
Anyway, they are not so distant. The basic component of Hinton's deep belief network is the restricted Boltzmann machine, which is a flavour of the Boltzmann machine, which is a probabilistic model.
You can always see the state of a neuron as being conditioned on the state of its inputs, statistically speaking. The whole network state can be described in a probabilistic fashion.
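As a concrete illustration of that probabilistic view, here is a minimal sketch (toy sizes, binary units, plain NumPy; every name here is hypothetical, not from Hinton's code) of one Gibbs step in a restricted Boltzmann machine, where each unit is sampled conditioned on the layer it connects to:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3

# small random weights and zero biases; training would adjust these
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

# a binary visible state, standing in for one tiny data vector
v = rng.integers(0, 2, size=n_visible).astype(float)

# P(h_j = 1 | v): each hidden neuron's state is conditioned on its inputs
p_h = sigmoid(v @ W + b_h)
h = (rng.random(n_hidden) < p_h).astype(float)

# P(v_i = 1 | h): the visible units are conditioned back on the hidden layer
p_v = sigmoid(h @ W.T + b_v)
v_reconstructed = (rng.random(n_visible) < p_v).astype(float)
```

The joint state (v, h) is exactly the "whole network state described in a probabilistic fashion": the RBM assigns it a probability through an energy function, and sampling alternates between these two conditionals.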
What's universally important for artificial intelligence is linear algebra (vector spaces), calculus (gradient descent), and probability theory (Bayes). Be worried if these topics are neglected... :)
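For what it's worth, here is a tiny worked sketch of two of those three ingredients (the numbers are made up for illustration): a few gradient descent steps on a least-squares loss, and a direct application of Bayes' rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# linear algebra + calculus: gradient descent on f(w) = ||X w - y||^2
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
w = np.zeros(3)
for _ in range(200):
    grad = 2.0 * X.T @ (X @ w - y)  # gradient of the squared error
    w -= 0.01 * grad                # small step downhill

# probability: Bayes' rule, P(A|B) = P(B|A) P(A) / P(B)
p_a = 0.01                 # prior
p_b_given_a = 0.9          # likelihood
p_b_given_not_a = 0.1
p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b  # posterior, roughly 0.083
```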
Also, I really see graph theory as a common feature of all advanced models in AI.
Piero,
PhD quitter who still loves neural models

Roger Gay:
I'm actually quite disturbed by the current use of the term. It reminds me of all the "high" stuff in the 1980s, which wasn't really high-level in any particular absolute sense, just relatively high compared to what preceded it. Now we have something being called "deep" just because it's a bit heavier than something else, and "learning" just because it's a fashionable word to use. Why is everybody working toward a job in marketing these days?