3.2.5.9 Writing a Lexical Analyzer


A lexical analyzer, or scanner, reads the text of a program and splits it into words (tokens), classifying each one by type. This is the first step in writing a compiler or interpreter. In the past this was often done with hand-written string-searching code; here we use regular expressions instead.

Example:

Print ("lexical analyzer") import collectionsimpsimport reToken = collections. namedtuple ('Token', ['typ', 'value', 'line', 'column ']) def tokenize (code): keywords = {'if ', 'then', 'endif ', 'for', 'Next', 'gosub', 'Return '} token_specification = [('number', R' \ d + (\. \ d *)? '), # Integer or decimal number ('assign', R': ='), # Assignment operator ('end', R ';'), # Statement terminator ('id', R' [A-Za-z] + '), # Identifiers ('op', R' [+ \-*/]'), # Arithmetic operators ('newline', R' \ n'), # Line endings ('skip', R' [\ t] + '), # Skip over spaces and tabs ('mismatch', R '. '), # Any other character] tok_regex =' | '. join ('(? P <% s> % s) '% pair for pair in token_specification) line_num = 1 line_start = 0 for mo in re. finditer (tok_regex, code): kind = mo. lastgroup value = mo. group (kind) if kind = 'newline': line_start = mo. end () line_num + = 1 elif kind = 'skip': pass elif kind = 'mismatch': raise RuntimeError ('% r unexpected on line % d' % (value, line_num) else: if kind = 'id' and value in keywords: kind = value column = mo. start ()-line_start yield Token (kind, value, line_num, column) statements = ''' IF quantity THEN total: = total + price * quantity; tax: = price * 0.05; ENDIF; ''' for token in tokenize (statements): print (token)

The output is as follows:

lexical analyzer
Token(typ='IF', value='IF', line=2, column=4)
Token(typ='ID', value='quantity', line=2, column=7)
Token(typ='THEN', value='THEN', line=2, column=16)
Token(typ='ID', value='total', line=3, column=8)
Token(typ='ASSIGN', value=':=', line=3, column=14)
Token(typ='ID', value='total', line=3, column=17)
Token(typ='OP', value='+', line=3, column=23)
Token(typ='ID', value='price', line=3, column=25)
Token(typ='OP', value='*', line=3, column=31)
Token(typ='ID', value='quantity', line=3, column=33)
Token(typ='END', value=';', line=3, column=41)
Token(typ='ID', value='tax', line=4, column=8)
Token(typ='ASSIGN', value=':=', line=4, column=12)
Token(typ='ID', value='price', line=4, column=15)
Token(typ='OP', value='*', line=4, column=21)
Token(typ='NUMBER', value='0.05', line=4, column=23)
Token(typ='END', value=';', line=4, column=27)
Token(typ='ENDIF', value='ENDIF', line=5, column=4)
Token(typ='END', value=';', line=5, column=9)

In this example, we first import namedtuple from the collections library to record the attributes of each word (token). There are four attributes: type, value, line number, and column number. The namedtuple call creates a data structure named Token that provides a named field for each attribute.
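As a minimal sketch of how such a namedtuple behaves (the values here are made up for illustration):

import collections

Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])

t = Token('NUMBER', '0.05', 4, 23)  # build a token record
print(t.typ, t.value)  # fields are accessible by name: NUMBER 0.05
print(t)  # Token(typ='NUMBER', value='0.05', line=4, column=23)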

Next, we define the function tokenize(). Inside it, we first define the set keywords, then the list token_specification, which pairs each token type with the regular expression that recognizes it. Joining these pairs with the string join() function produces a single combined regular expression:

(?P<NUMBER>\d+(\.\d*)?)|(?P<ASSIGN>:=)|(?P<END>;)|(?P<ID>[A-Za-z]+)|(?P<OP>[+\-*/])|(?P<NEWLINE>\n)|(?P<SKIP>[ \t]+)|(?P<MISMATCH>.)

This combined expression can match any of the rules. When a match succeeds, the name of the alternative that matched is available through the match object's lastgroup attribute. Take (?P<NUMBER>\d+(\.\d*)?) as an example: the outer parentheses form a group, ?P<NUMBER> names that group NUMBER, \d+ matches one or more digit characters, and (\.\d*)? makes a decimal point followed by further digits optional.
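A quick sketch of lastgroup in action, using a shortened version of the combined pattern above (only three of the alternatives, enough to show which group name is reported):

import re

tok_regex = r'(?P<NUMBER>\d+(\.\d*)?)|(?P<ASSIGN>:=)|(?P<ID>[A-Za-z]+)'

mo = re.match(tok_regex, '0.05')
print(mo.lastgroup)            # NUMBER -- the named alternative that matched
print(mo.group(mo.lastgroup))  # 0.05

mo = re.match(tok_regex, 'price')
print(mo.lastgroup)            # ID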

The statement re.finditer(tok_regex, code) scans the entire input text and produces one match object per word. The for loop examines each match, determines the token type, increments the line number whenever a newline is seen, and computes the column number as the match's start offset minus the offset at which the current line begins. The type, value, line number, and column number are then packed into a Token, completing the lexical analysis of that word.
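The column arithmetic can be sketched in isolation; line_start always holds the offset just past the most recent newline (the input below is hypothetical):

code = 'a\nbb ccc'

line_start = code.index('\n') + 1        # line 2 begins at absolute offset 2
column = code.index('ccc') - line_start  # absolute offset 5 minus 2
print(column)  # 3: 'ccc' starts at column 3 of its line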

It is worth noting that, because the input text can be arbitrarily long (easily a megabyte or more), it would be wasteful to collect all the tokens in memory and return them at once. The function is therefore written as a generator: the keyword yield returns tokens one at a time, on demand. As the example shows, with regular expressions it is easy to write a lexical analyzer; they are a fine tool for the "lazy" programmer.
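A short usage sketch, assuming the tokenize() defined above is in scope, shows that tokens are produced lazily, one call at a time:

tokens = tokenize('x := 1;\n')  # nothing is scanned yet
print(next(tokens))  # Token(typ='ID', value='x', line=1, column=0)
print(next(tokens))  # Token(typ='ASSIGN', value=':=', line=1, column=2)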

 


Cai Junsheng, Shenzhen
