Reflections After Hearing Andrew Ng's Report at Tsinghua


Andrew Ng gave a report at Tsinghua today. A few points struck me as important; here is my understanding and thinking on them.

1) Granularity of feature representation

At what granularity does a feature representation let a learning algorithm do its job? Take an image: pixel-level features are of little value, because you cannot tell positive from negative examples of motorcycles pixel by pixel. But if the feature is structural (or semantic), such as whether the object has handlebars or wheels, positive and negative examples become easy to separate, and the learning algorithm can do its work.

2) Primary (shallow) feature representation

Since pixel-level feature representations don't work, what granularity does?

Ng's report covered sparse coding: complex images are usually built from a few basic structures, much as any color can be mixed from the three primary colors in certain proportions. Ng showed an image represented as a linear combination of 64 orthogonal edges (which can be understood as orthogonal basic structures). For example, an image X might be expressed using just three of the 64 edges, with weights 0.8, 0.3, and 0.5, while the remaining basis edges contribute nothing and get weight 0.
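Here is a minimal numpy sketch of that idea, assuming a toy orthonormal basis and arbitrarily chosen active edges; the real basis in the talk was learned from image patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 64 learned "orthogonal edges": QR factorization of a
# random matrix gives 64 orthonormal columns in a 64-dim patch space
# (think flattened 8x8 patches).
basis, _ = np.linalg.qr(rng.standard_normal((64, 64)))

# A sparse code: only 3 of the 64 coefficients are non-zero, with the
# weights 0.8, 0.3, 0.5 from the example above (indices are arbitrary).
code = np.zeros(64)
code[[5, 17, 42]] = [0.8, 0.3, 0.5]

# The image patch X is the weighted sum of the three active edges.
X = basis @ code

# Because the basis is orthonormal, projecting X back onto it recovers
# the sparse code exactly.
recovered = basis.T @ X
print(np.flatnonzero(np.abs(recovered) > 1e-9))  # -> [ 5 17 42]
```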

How to find those 64 orthogonal structures was not clarified further. Ng also mentioned a paper that, from entirely unlabeled sound, found about 20 basic sound structures, from which the remaining sounds could be synthesized.

3) Structural feature representation

Small image patches can be composed from basic edges. How, then, do we represent more structured, more complex, conceptual graphics? This calls for higher levels of feature representation, such as V2 and V3. V1 sees the pixel level; to V2, V1's output is its pixel level. The hierarchy is progressive, the way a high school student finds a junior high student naive, and a college student finds a high school student naive.

4) How many features are required?

We know we need to build a hierarchy of features, from shallow to deep, but how many features should each layer have?

Ng said that for any method, as long as there are enough features, the effect can always be improved. But many features mean heavier computation and a larger space to explore, and the training data available for each feature becomes sparse, which brings all sorts of problems. More features are not necessarily better.

Consider text. What is the right representation for the concept carried by a piece of text, or by a doc? A single character won't do; characters are the pixel level. At the least we should use terms, so that each doc is composed of terms. But is that enough expressive power for concepts? Probably not; we need to go one level higher, to topics. A doc expressed through topics is reasonable. The number of items at each level is large, roughly: doc concept -> topic (thousands to tens of thousands) -> term (on the order of 100,000) -> word (on the order of a million).

When a person reads a doc, the eyes see words; the brain automatically segments those words into terms, organizes the terms according to previously learned concepts into topics, and then carries out higher-level learning.
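As a toy sketch of that word -> term -> topic pipeline (the vocabularies and mapping tables below are invented for illustration, not from the talk):

```python
doc = "machine learning models learn features from data"

# Level 1: words, the "pixel level" of text.
words = doc.split()

# Level 2: terms. Merge adjacent words into known multi-word terms.
known_terms = {("machine", "learning"): "machine_learning"}
terms, i = [], 0
while i < len(words):
    pair = tuple(words[i:i + 2])
    if pair in known_terms:
        terms.append(known_terms[pair])
        i += 2
    else:
        terms.append(words[i])
        i += 1

# Level 3: topics. Map terms onto previously learned topics.
term_to_topic = {
    "machine_learning": "AI",
    "models": "AI",
    "features": "representation",
    "data": "data",
}
topics = {term_to_topic.get(t, "unknown") for t in terms}

print(terms)   # ['machine_learning', 'models', 'learn', 'features', 'from', 'data']
print(topics)  # e.g. {'AI', 'representation', 'data', 'unknown'}
```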

As the report made clear, Google is investing heavily in images and sound, and machine learning and deep learning should find huge opportunities there.

In addition, "deep" means a multi-layer neural network in which each layer represents one level of concepts. The lower the layer, the more orthogonal the concepts; the higher the layer, the more similar the concepts are to one another, because high-level concepts may share the same underlying basic structures.
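A minimal numpy sketch of that layered view, with arbitrary sizes and random weights standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One level of the hierarchy: linear map plus ReLU non-linearity."""
    return np.maximum(0.0, w @ x)

x = rng.standard_normal(64)          # "pixel level" input
w1 = rng.standard_normal((32, 64))   # pixels -> edges   (V1-like)
w2 = rng.standard_normal((16, 32))   # edges  -> parts   (V2-like)
w3 = rng.standard_normal((8, 16))    # parts  -> objects (V3-like)

h1 = layer(x, w1)
h2 = layer(h1, w2)   # V2 only ever sees V1's output, never raw pixels
h3 = layer(h2, w3)
print(h1.shape, h2.shape, h3.shape)  # (32,) (16,) (8,)
```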

The method for breaking high-level concepts apart into basic structures is also easy to understand: break up the clusters. A doc, for example, can be decomposed into topics with LDA, so that a finite set of topics describes the doc; the topics themselves could then be broken apart by a similar method into a shallower layer of topics. This could be tested. Ng did not elaborate; this is just my feeling.
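For the doc-to-topic step, scikit-learn's LatentDirichletAllocation is one concrete way to try this (the text names LDA but no implementation; the corpus below is a toy):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "neural networks learn image features",
    "edges combine into object parts in images",
    "topic models describe documents with topics",
    "documents are mixtures of latent topics",
]

# doc -> term counts (the term level of the hierarchy)
term_counts = CountVectorizer().fit_transform(docs)

# term counts -> a small finite set of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(term_counts)

# Each doc is now described by a short topic-weight vector.
print(doc_topics.round(2))  # shape: (4 docs, 2 topics)
```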

Ng's slides from the report have not circulated, but I found a fairly close deck for everyone to study systematically: http://www.ipam.ucla.edu/publications/gss2012/gss2012_10595.pdf

In July 2014 Andrew Ng gave another report, at the Institute of Automation; I wrote up my reflections on that one as well, for your reference: http://blog.sina.com.cn/s/blog_593af2a70102uwhl.html
