3.2.5.9 Write a lexical analyzer

A lexical analyzer (also called a scanner) takes the text of a program as a string, splits it into words, and recognizes the type of each one. Lexical analysis is the first step in writing a compiler or interpreter. There are many ways to search through strings; here, regular expressions are used to accomplish the task.

Example:

Print ("Lexical analyzer") import collectionsimport Retoken = collections.namedtuple (' Token ', [' Typ ', ' value ', ' line ', ' column '])        def tokenize (code): keywords = {' IF ', ' Then ', ' ENDIF ', ' for ', ' NEXT ', ' GOSUB ', ' RETURN '} token_specification = [        (' number ', R ' \d+ (\.\d*) '), # Integer or decimal number (' ASSIGN ', r ': = '), # assignment operator      (' END ', r '; '), # Statement Terminator (' ID ', R ' [a-za-z]+ '), # Identifiers (' OP ', R ' [+\-*/] '), # Arithmetic operators (' NEWLINE ', R ' \ n '), # line endings (' SKIP ', R ' [\t]+ ') ), # Skip over Spaces and tabs (' mismatch ', R '. '), # All other character] Tok_regex = ' | '. Join (' (? p<%s>%s) '% pair for pair in token_specification] Line_num = 1 Line_start = 0 for mo in Re.finditer (tok_reg Ex, code): kind = mo.lastgroup value = Mo.group (kind) if kind = = ' NEWLINE ': Line_start = m O.end () Line_nuM + = 1 elif Kind = = ' SKIP ': pass elif kind = = ' mismatch ': Raise RuntimeError ('%r unexp ected on line%d '% (value, line_num)) else:if kind = = ' ID ' and value in Keywords:kind    = Value column = Mo.start ()-Line_start yield Token (kind, value, line_num, column) statements = "        IF quantity Then total: = all + price * Quantity;    Tax: = Price * 0.05; ENDIF, "" For tokens in tokenize (statements): print (token)

The resulting output is as follows:

Lexical analyzer
Token(typ='IF', value='IF', line=2, column=4)
Token(typ='ID', value='quantity', line=2, column=7)
Token(typ='THEN', value='THEN', line=2, column=16)
Token(typ='ID', value='total', line=3, column=8)
Token(typ='ASSIGN', value=':=', line=3, column=14)
Token(typ='ID', value='total', line=3, column=17)
Token(typ='OP', value='+', line=3, column=23)
Token(typ='ID', value='price', line=3, column=25)
Token(typ='OP', value='*', line=3, column=31)
Token(typ='ID', value='quantity', line=3, column=33)
Token(typ='END', value=';', line=3, column=41)
Token(typ='ID', value='tax', line=4, column=8)
Token(typ='ASSIGN', value=':=', line=4, column=12)
Token(typ='ID', value='price', line=4, column=15)
Token(typ='OP', value='*', line=4, column=21)
Token(typ='NUMBER', value='0.05', line=4, column=23)
Token(typ='END', value=';', line=4, column=27)
Token(typ='ENDIF', value='ENDIF', line=5, column=4)
Token(typ='END', value=';', line=5, column=9)

In this example, namedtuple is first imported from the collections library so that each word can be recorded as a named tuple. The Token tuple has four fields: typ, value, line, and column.
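As a minimal sketch of that idea (reusing the Token definition from the code above), a token can be built and its fields read by name:

import collections

Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])

tok = Token('NUMBER', '0.05', 4, 23)
print(tok.typ, tok.value)  # NUMBER 0.05
print(tok)                 # Token(typ='NUMBER', value='0.05', line=4, column=23)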

Then a function tokenize() is defined. Inside it, the keyword set keywords is defined, along with token_specification, a list of (name, regex) pairs that recognize the different kinds of words. A single regular expression is then generated from these pairs with the string join() function:

(?P<NUMBER>\d+(\.\d*)?)|(?P<ASSIGN>:=)|(?P<END>;)|(?P<ID>[A-Za-z]+)|(?P<OP>[+\-*/])|(?P<NEWLINE>\n)|(?P<SKIP>[ \t]+)|(?P<MISMATCH>.)
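A short sketch (using just the first two pairs, to keep the output readable) shows how the join builds that pattern:

token_specification = [
    ('NUMBER', r'\d+(\.\d*)?'),  # Integer or decimal number
    ('ASSIGN', r':='),           # Assignment operator
]
tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
print(tok_regex)  # (?P<NUMBER>\d+(\.\d*)?)|(?P<ASSIGN>:=)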

This combined regular expression can match every rule: each alternative is a named group of the form (?P<NAME>...), and whichever alternative succeeds is recorded as the last matched group, so its name can be retrieved with lastgroup. To analyze one alternative in detail: in (?P<NUMBER>\d+(\.\d*)?), \d+ matches one or more digit characters, and the optional suffix (\.\d*)? matches a decimal point followed by zero or more digits, so the group recognizes both integers and decimal numbers.
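A quick way to see lastgroup in action is a toy two-alternative pattern (not the full one above):

import re

pat = r'(?P<NUMBER>\d+(\.\d*)?)|(?P<ID>[A-Za-z]+)'
for mo in re.finditer(pat, 'price 0.05'):
    # lastgroup names whichever alternative matched this word
    print(mo.lastgroup, mo.group(mo.lastgroup))
# ID price
# NUMBER 0.05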

The statement re.finditer(tok_regex, code) matches all the words in the input text one after another. The for loop then examines each match: it determines the type of the word, increments the line number whenever a newline is matched, and computes the column number as the match's start offset minus the offset at which the current line starts. The type, value, line number, and column number are saved together in a Token, completing the lexical analysis of that word.
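To make the line and column bookkeeping concrete, here is a reduced sketch of just that part of the loop (same variable names as above, with a tiny three-group pattern):

import re

code = 'a\nbb cc\n'
line_num, line_start = 1, 0
for mo in re.finditer(r'(?P<NEWLINE>\n)|(?P<SKIP>[ \t]+)|(?P<ID>[A-Za-z]+)', code):
    if mo.lastgroup == 'NEWLINE':
        line_start = mo.end()  # offset where the next line begins
        line_num += 1
    elif mo.lastgroup == 'SKIP':
        pass                   # ignore spaces and tabs
    else:
        print(mo.group(), 'line', line_num, 'column', mo.start() - line_start)
# a line 1 column 0
# bb line 2 column 0
# cc line 2 column 3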

It is important to note that the text being analyzed can be long, possibly one or several megabytes, so it is not practical to collect all the words into a list and return them at once. Instead they are returned iteratively, which is why the keyword yield is used: tokenize() is a generator that produces one token at a time. As the example shows, with the power of regular expressions it is easy to write a lexical analyzer, which is a good tool for "lazy people".
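As a small illustration of that laziness (reusing tokenize() and statements from the code above), the generator can be consumed one token at a time without ever building a complete list:

tokens = tokenize(statements)  # nothing is scanned yet
first = next(tokens)           # scanning happens on demand
print(first)                   # Token(typ='IF', value='IF', line=2, column=4)
for token in tokens:           # continues from the second token
    print(token)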


