Chapter 2: Natural Language Processing - From Rules to Statistics


Any language can be viewed as an encoding scheme, with its syntax rules serving as the encoding and decoding algorithms. We encode the meaning we want to express as a sentence; the person who hears the sentence receives that encoded information and decodes it to understand what we intended. This is an interesting and vivid process.

Natural Language Processing (NLP) began around 1950 and has a history of more than 60 years. In its first 20 years, however, scientists fell into a misconception: they assumed that for a machine to perform translation or speech recognition, tasks that only humans could do, it had to understand natural language, and that to achieve this, computers had to be equipped with intelligence like ours. Today, anyone with a little expertise knows that natural language processing relies on mathematics, or more precisely, on statistics.

As we all know, to learn a foreign language we must study its grammar rules, parts of speech, and word formation. These are, in effect, rule-based descriptions of natural language.

At that time, there were syntactic analysis tools, parsers (not the current Stanford parser), that could construct a parse tree for a sentence, marking the subject, the object, and the modification relationships between words. In the early days, however, they handled long sentences poorly. First, to cover even 20% of real sentences with grammar rules, the number of rules would have to reach at least the tens of thousands. Second, even if one could write a set of grammar rules covering all natural language phenomena, it would still be quite difficult for a computer to parse with them, because natural languages are context-sensitive, unlike programming languages.
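To make the rule-based approach concrete, here is a minimal sketch of a recursive-descent parser over a toy context-free grammar. The grammar, lexicon, and example sentence are invented for illustration, not drawn from any historical system; a real rule set would need orders of magnitude more rules, which is exactly the coverage problem described above.

```python
# Toy rule-based parsing: a hand-written CFG and a recursive-descent parser.
# GRAMMAR and LEXICON are illustrative assumptions, not a real rule set.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat", "ball"},
    "V":   {"chased", "saw"},
}

def parse(symbol, words, i):
    """Try to expand `symbol` starting at position i.
    Returns (tree, next_position) or None on failure."""
    # Terminal category: match one word against the lexicon.
    if symbol in LEXICON:
        if i < len(words) and words[i] in LEXICON[symbol]:
            return (symbol, words[i]), i + 1
        return None
    # Non-terminal: try each production in order.
    for production in GRAMMAR[symbol]:
        children, j = [], i
        for part in production:
            result = parse(part, words, j)
            if result is None:
                break
            subtree, j = result
            children.append(subtree)
        else:
            return (symbol, children), j
    return None

def parse_sentence(sentence):
    """Return a parse tree if the whole sentence is covered, else None."""
    words = sentence.split()
    result = parse("S", words, 0)
    if result and result[1] == len(words):
        return result[0]
    return None
```

For example, `parse_sentence("the dog chased a cat")` returns a tree rooted at `S`, while a scrambled sentence returns `None`. Every new construction the grammar must cover means more hand-written rules, which is why rule counts explode.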

It can be seen that rule-based syntactic analysis is not feasible. Because of context relevance, we must consult the surrounding context to determine the meaning of a word. Before 1970, efforts in natural language processing were largely unsuccessful. Around 1970, the emergence of statistical linguistics broke this deadlock; the key figure driving the transformation was Frederick Jelinek and his group at IBM's Watson Laboratory. In 2005, Google's statistics-based translation system decisively surpassed the rule-based SYSTRAN translation system, and the last bastion of the rule-based approach fell.

Today, few scientists still claim to be defenders of the traditional rule-based approach. Statistics-based natural language methods built on mathematical models have become the mainstream.
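The statistical approach can be sketched with its simplest instance: a bigram language model estimated by counting, in the spirit of the IBM work mentioned above. The tiny corpus below is an invented illustration; real systems train on millions of sentences and add smoothing for unseen word pairs.

```python
# A minimal bigram language model: estimate P(word | previous word)
# by counting adjacent pairs in a corpus. The corpus is an invented example.
from collections import Counter

corpus = [
    "the dog chased the cat",
    "the cat saw the dog",
    "a dog saw a ball",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split()   # <s> marks the sentence start
    for prev, word in zip(words, words[1:]):
        unigrams[prev] += 1
        bigrams[(prev, word)] += 1

def p(word, prev):
    """Maximum-likelihood estimate of P(word | prev)."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

def sentence_prob(sentence):
    """Probability of a sentence as a product of bigram probabilities."""
    words = ["<s>"] + sentence.split()
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= p(word, prev)
    return prob
```

Instead of asking whether a sentence obeys hand-written rules, the model asks how probable the sentence is given the data: a fluent word order scores higher than a scrambled one, with no grammar rules written at all.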
