Constructing a Configurable Lexical and Syntax Analyzer Generator (Part 1): The Lexical Analyzer

Source: http://blog.csdn.net/xinghongduo
Tags: expression engine

Preface

 

When a source program is compiled into a target program, it goes through six stages: lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and target code generation. Lexical analysis and syntax analysis are the initial stages of compilation and an important part of any compiler. In the early days, when the relevant theory and tools were lacking, writing a lexical or syntax analyzer by hand was very tedious. In the 1970s, M. E. Lesk, E. Schmidt, and Stephen C. Johnson wrote the lexical analyzer generator Lex and the parser generator Yacc for Unix. Lex accepts lexical rules defined by regular expressions, and Yacc accepts grammar rules described in BNF; from these they automatically generate C source programs that analyze the corresponding lexical and grammar rules. Their power and flexibility greatly reduce the difficulty of building lexical and syntax analyzers. Today, Lex and Yacc are well-known standard Unix tools (Flex and Bison on Linux), widely used in compiler front-end construction; they have helped people implement front ends for hundreds of languages. Well-known applications include the SQL parser in MySQL, the interpretation engines of scripting languages such as PHP, Ruby, and Python, the WebKit browser engine, early GCC, and so on. This article introduces the internal principles of configurable lexical analyzer and syntax analyzer generators. In the end, the author implemented a configurable lexical analyzer generator and syntax analyzer generator similar to Lex & Yacc and used it to successfully build a lexer and parser for C99 standard C.

 

Lexical analyzer

 

The function of the lexical analyzer (tokenizer) is to break the input stream into a token sequence according to the lexical rules, recording for each token the matched string and the position where it appears, and to provide these to the syntax analyzer. Writing a lexical analyzer for a particular language by hand is not hard; in fact, many compilers use hand-written lexical analyzers. The advantage of this approach is that it is intuitive and easy to understand; the disadvantages are low development efficiency and proneness to error. After decades of development, compilation technology has formed a mature body of theory, and applying it lets us construct lexical analyzers automatically. In Lex, each lexical rule is defined by a regular expression: we only need to define a regular expression for each token, and Lex automatically generates the corresponding analysis program. The core of a lexical analyzer generator is therefore a regular expression engine. The following sections describe the basic theory and how to implement a basic regular expression engine.
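As a minimal sketch of the kind of record a tokenizer hands to the syntax analyzer (the field names here are illustrative assumptions, not taken from any particular generator), in Python:

from dataclasses import dataclass

@dataclass
class Token:
    kind: str   # which lexical rule matched, e.g. an assumed 'IDENT' or 'INT'
    text: str   # the exact string the rule matched
    line: int   # position where the token appears, for the parser
    col: int    # and for error reporting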

 

Finite State Automaton

 

To implement a basic regular expression engine, we first introduce a basic computational model, the finite state machine (FSM), which is widely used. A finite state automaton is formally defined as a quintuple M = (K, Σ, f, S, Z), where:

K is a finite set; each element of K is called a state;

Σ is a finite alphabet; each of its elements is an input symbol, so it is also called the input symbol table;

f is a transition function; f(k, a) = e means that state k transitions to state e on input symbol a;

S is the initial state, which is unique;

Z is the set of accepting states.

 

For example, a finite state automaton recognizing the regular expression ab(a|b)*ab can be defined as M = ({1, 2, 3, 4, 5}, {a, b}, f, 1, {5}), where f is defined as: f(1, a) = {2}, f(2, b) = {3}, f(3, a) = {3, 4}, f(3, b) = {3}, f(4, b) = {5}. The state transition diagram is as follows:

 

 

State transition diagram of the finite automaton M

Finite state automata come in two kinds: deterministic (DFA) and nondeterministic (NFA). Nondeterminism here means that in some state there may be multiple transitions on one input symbol; state 3 above has two transitions on symbol a, so M is an NFA. During matching, a DFA is in exactly one state at any time, while an NFA may be in multiple states at once; a DFA is therefore a special case of an NFA. For every basic regular expression there exist both a DFA and an NFA that recognize it.

 

 

A DFA equivalent to M; each of its state transitions is unique

 

One nondeterministic run of matching ababaab with the NFA M:

Current state set | Remaining input | Transition
{1}    | ababaab | f(1, a)
{2}    | babaab  | f(2, b)
{3}    | abaab   | f(3, a)
{3, 4} | baab    | f(3, b) and f(4, b)
{3, 5} | aab     | f(3, a) and f(5, a)
{3, 4} | ab      | f(3, a) and f(4, a)
{3, 4} | b       | f(3, b) and f(4, b)
{3, 5} | (empty) | accept (state 5 is an accepting state)

 

Modern NFA-based regular expression engines generally use backtracking matching to support some advanced regular expression features, which is relatively slow. A lexical analyzer, however, does not need advanced regular expressions and does not need backtracking. Similar to breadth-first search, each matching step computes all next reachable states from all currently reachable states; at the end, if the state set contains an accepting state, a match is reported, otherwise matching fails. A DFA's state transitions are unique, its matching process is simpler than an NFA's, and its final state is determined. Because DFA transitions are deterministic, DFA matching is faster than NFA matching. However, NFAs support some advanced regular expression features that DFAs do not, and many engines are NFA-based: the regular expression engines in Java and PHP, for example, are based on NFAs.
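The breadth-first, non-backtracking matching just described can be sketched in a few lines of Python; the (state, symbol) -> state-set dictionary layout is an assumption made for illustration:

# Non-backtracking NFA simulation: track all reachable states at once.
def nfa_match(trans, start, accept, text):
    current = {start}
    for ch in text:
        # breadth-first step: every state reachable on 'ch' from any current state
        current = set().union(*(trans.get((s, ch), set()) for s in current))
        if not current:               # no transition possible: fail early
            return False
    return bool(current & accept)

# The NFA M recognizing ab(a|b)*ab from the earlier example:
trans = {(1, 'a'): {2}, (2, 'b'): {3}, (3, 'a'): {3, 4},
         (3, 'b'): {3}, (4, 'b'): {5}}
print(nfa_match(trans, 1, {5}, 'ababaab'))   # True, as in the trace above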

 

If an NFA contains edges labeled with the empty input ε, it is called an ε-NFA. An ε edge means that a state transition can be made without consuming any input symbol. Therefore, after each state transition an ε-NFA must also apply the ε-closure operation (ε-Closure) to the resulting state set s; the purpose of the ε-closure is to obtain the union of s with all states reachable from s through any sequence of ε edges, that is, all currently reachable states.
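A sketch of the ε-closure operation, assuming ε edges are stored separately as a dictionary from each state to the states one ε edge away:

def eps_closure(states, eps):
    # returns 'states' plus everything reachable through any chain of ε edges
    closure, stack = set(states), list(states)
    while stack:
        for t in eps.get(stack.pop(), ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

In an ε-NFA simulation this operation is applied to the start set and again after every symbol transition.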

 

This section has described the definition, classification, and matching process of finite state automata. The following describes how to construct an automaton that recognizes a given regular expression. First, let us review the basic syntax of regular expressions. The definitions below are adapted from MSDN and trimmed; the author keeps only the syntax rules necessary for implementing the lexical analyzer generator. You can extend the feature set yourself if needed.

 

Basic regular expression rules

 

1. x|y matches x or y.
2. [xyz] character set. Matches any one of the enclosed characters.
3. [^xyz] negated character set. Matches any character not enclosed.
4. [a-z] character range. Matches any character in the specified range.
5. [^a-z] negated character range. Matches any character not in the specified range.
6. {n} where n is a non-negative integer. Matches exactly n times.
7. {n,} where n is a non-negative integer. Matches at least n times.
8. {n,m} where m and n are non-negative integers and n <= m. Matches at least n and at most m times.
9. * matches the preceding character or subexpression zero or more times. Equivalent to {0,}.
10. + matches the preceding character or subexpression one or more times. Equivalent to {1,}.
11. ? matches the preceding character or subexpression zero or one time. Equivalent to {0,1}.
12. \ marks the next character as special or as an escaped literal. For example, "n" matches the character "n", while "\n" matches a line break.
13. (pattern) matches and groups the subexpression.
14. . matches any single character except "\n".
15. \d matches a digit character. Equivalent to [0-9].
16. \D matches a non-digit character. Equivalent to [^0-9].
17. \n matches a line break.
18. \r matches a carriage return.
19. \s matches any whitespace character, including spaces, tabs, and form feeds.
20. \S matches any non-whitespace character.
21. \w matches any word character, including the underscore. Equivalent to [A-Za-z0-9_].
22. \W matches any non-word character. Equivalent to [^A-Za-z0-9_].

These rules can be combined and nested to form regular expressions that recognize complex patterns. Although many rules seem to be needed, only a few simple construction rules are actually required to build an ε-NFA that recognizes an arbitrarily complex regular expression.

 

Thompson's Construction

 

Thompson's construction, proposed by Ken Thompson, one of the fathers of Unix, builds an ε-NFA that recognizes a given regular expression. Its principle is very simple: first construct the ε-NFA recognizing each subexpression, then combine these ε-NFAs using a few simple rules to finally obtain an ε-NFA recognizing the complete regular expression. The advantages of Thompson's construction are that it is fast and that the resulting ε-NFA has few states.

 

1. For the empty input ε, the constructed ε-NFA is:

2. For r = a, where a is an input symbol, the constructed N(r) is:

3. For r = st, assuming the NFAs of s and t have already been constructed as N(s) and N(t), N(r) is:

4. For r = s|t, N(r) is:

5. For r = s*:

6. For r = s+:

7. For r = s?:

8. For r = s{n}:

9. For r = s{n,}:

10. For r = s{n,m}:

Given an arbitrarily complex regular expression, we build the ε-NFA of each symbol or subexpression from left to right and merge them using the rules above, finally obtaining an ε-NFA that recognizes the complete regular expression. Take the regular expression ab((aa)?|b+)*ba? as an example; the process of building its ε-NFA with Thompson's construction is:

 

First, construct the ε-NFA recognizing ab:

Construct the ε-NFA recognizing (aa):

The subexpression (aa) is followed by the optional operator ?, so apply rule 7 to construct the ε-NFA of (aa)?:

Construct the ε-NFA of b:

b is followed by a + operator, so apply rule 6 to construct the ε-NFA of b+:

Merge (aa)? and b+ with rule 4 to obtain the ε-NFA of (aa)?|b+:

((aa)?|b+) is followed by a * operator, so apply rule 5 to construct the ε-NFA of ((aa)?|b+)*:

Use rule 3 to merge ab and ((aa)?|b+)*, obtaining the ε-NFA of ab((aa)?|b+)*:

Construct the ε-NFA of the next symbol b and merge it with ab((aa)?|b+)* using rule 3, obtaining the ε-NFA of ab((aa)?|b+)*b:

Construct the ε-NFA of a?:

Finally, apply rule 3 to merge the ε-NFAs of ab((aa)?|b+)*b and a?, obtaining the final ε-NFA recognizing ab((aa)?|b+)*ba?:

 

 

For the lexical analyzer, we first use Thompson's construction to build the ε-NFA recognizing each token's regular expression, then run these ε-NFAs in parallel to obtain a single ε-NFA recognizing all tokens; this ε-NFA has multiple distinct accepting states. For a given input string, if the ε-NFA matches it, the match may end in several different accepting states. To make the lexical analyzer report a unique match, we introduce the concept of matching priority. For example, in most programming languages keywords are special cases of identifiers; for the input string int we expect the lexical analyzer to report the keyword rather than the identifier, so keywords have higher matching priority than identifiers. To implement matching priority, store the priority of the matched token in each accepting state of the ε-NFA; if the lexical analyzer ends up in several accepting states, it reports the token corresponding to the accepting state with the highest priority.

 

The ε-NFAs recognizing each token, run in parallel
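A tiny sketch of the priority mechanism (the data layout is an assumption): if each accepting state carries the index of its token rule, with earlier rules ranking higher, the reported token is simply the minimum index among the reachable accepting states.

# 'accept_priority' maps accepting state -> token rule index; a lower index
# means the rule was listed earlier, e.g. keyword rules before the identifier rule.
def best_token(reachable_states, accept_priority):
    hits = [accept_priority[s] for s in reachable_states if s in accept_priority]
    return min(hits) if hits else None   # None: no accepting state reached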

 

Symbol Set Compression

 

If an element in a regular expression allows transitions on multiple symbols, for example [a-zA-Z], then when constructing the two adjacent states s1 and s2 that recognize it in the ε-NFA, there is no need to add an edge from s1 to s2 for every symbol. From s1's point of view, every symbol in the set is equivalent, so a symbol set can label the transition, reducing the number of ε-NFA edges. Each symbol set is represented by a unique id, and a mapping table maps each individual symbol to its symbol set. Note that after extracting the symbol set of each element in every token's regular expression, these symbol sets must be partitioned so that each symbol belongs to exactly one symbol set.

 

Symbol set compression should be performed before constructing the ε-NFA. While reading the token definition regular expressions, scan the symbol set s of each element in turn and compare s with the already-partitioned symbol sets; the following cases can arise:

 

1. If s is equal to some previously partitioned symbol set k, stop comparing.

2. If s strictly contains a previously partitioned symbol set k, split s into k and s - k, and continue comparing s - k with the other symbol sets.

3. If s is strictly contained in a previously partitioned symbol set k, split k: assign s a new id and map the characters in s to that id in the mapping table.

4. If s partially intersects a previously partitioned symbol set k, make s ∩ k a new symbol set with a new id, map the characters in s ∩ k to that id in the mapping table, and continue comparing s - (s ∩ k) with the other symbol sets.

5. If s is disjoint from all previously partitioned symbol sets, assign s a new id and map the characters in s to that id in the mapping table.

 

The rules above never over-split a symbol set, so they guarantee the minimum number of symbol sets and the highest compression ratio; in most cases the edges of the ε-NFA compress well. In the worst case, every symbol is its own symbol set. All characters that do not appear in any regular expression map to -1 in the mapping table, meaning no edge exists for that symbol.
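A sketch of the splitting rules above in Python (a simplification: the partition is kept as a plain list of sets, and ids would be assigned to the final sets afterwards, when the character-to-id mapping table is built):

def add_symbol_set(partition, s):
    # refine 'partition' (disjoint symbol sets) against a new symbol set s
    s, result = set(s), []
    for k in partition:
        common = s & k
        if not common:                  # case 5 with respect to this k: disjoint
            result.append(k)
        elif common == k:               # cases 1 and 2: k lies entirely inside s
            result.append(k)
            s -= k                      # keep matching the remainder of s
        else:                           # cases 3 and 4: split k along s
            result.append(frozenset(common))
            result.append(frozenset(k - common))
            s -= common
    if s:                               # whatever is left of s becomes a new set
        result.append(frozenset(s))
    return [x for x in result if x]

partition = []
for elem in ({'a'}, {'1'},
             set('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'),
             set('0123456789'), set('xyz012345')):
    partition = add_symbol_set(partition, elem)
# partition now holds {a}, {1}, [b-wA-Z], [6-9], [x-z], {0,2-5}, matching the table below.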

 

Symbol set | a | 1 | [b-wA-Z] | [6-9] | [x-z] | 0 | [2-5] | others
Id         | 0 | 1 | 2        | 3     | 4     | 5 | 5     | -1

Mapping table of the regular expression a1[a-zA-Z]+\d[x-z0-5] after symbol set compression

The ε-NFA with state transitions labeled by symbol sets, which reduces the number of edges

 

Eliminating ε Edges from the ε-NFA

 

An ε-NFA has many states, and after every state set transition the ε-closure operation must be performed to find all reachable states, which hurts matching performance. To avoid the ε-closure after each transition, the ε-NFA can be further converted into an NFA. Consider what an ε edge means: a state transition that consumes no input symbol. If state S1 can reach state S2 through an ε edge, and S2 can reach state S3 on an input symbol a, then S1 can also reach S3 on symbol a; S2 is merely an intermediate state on the way to S3, and this intermediate step can be removed by adding an edge labeled a from S1 to S3. From this idea the following algorithm follows easily:

 

1. If all the inbound edges of a state are ε edges, mark it as an intermediate state. The reason is that all of its outbound edges can be moved to its predecessors, so the state becomes unreachable after optimization.

2. For each non-intermediate state s, compute ε-Closure(s) and copy the non-ε outbound edges of every state in the closure to s, taking care to merge acceptance status.

3. Delete all edges involving intermediate states.

4. Delete all ε edges.
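A sketch of steps 2 through 4 in Python (edges are assumed to be (source, label, target) triples with label None for ε; step 1 is implicit here, since states whose inbound edges were all ε simply end up unreachable):

def remove_eps(states, edges, accept):
    eps = {}
    for src, lbl, dst in edges:
        if lbl is None:                      # collect the ε edges
            eps.setdefault(src, set()).add(dst)
    def closure(s):                          # states reachable from s via ε edges only
        seen, stack = {s}, [s]
        while stack:
            for t in eps.get(stack.pop(), ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    new_edges, new_accept = set(), set(accept)
    for s in states:
        for u in closure(s):
            if u in accept:
                new_accept.add(s)            # merge acceptance into s
            for src, lbl, dst in edges:
                if src == u and lbl is not None:
                    new_edges.add((s, lbl, dst))   # copy non-ε out-edges to s
    return list(new_edges), new_accept       # all ε edges are dropped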

The ε-NFA recognizing ab(a|b)*(ab)?

 

Take the ε-NFA recognizing ab(a|b)*(ab)? as an example of the optimization process. First, mark the intermediate states: the scan finds that every state other than the start state has at least one non-ε inbound edge, so there are no intermediate states in this figure, and all states remain reachable after optimization. The ε-closures of states 1 and 2 contain only themselves, so no edges need to be copied. ε-Closure(3) = {3, 4, 6}: copy all non-ε outbound edges of 4 and 6 to 3; state 6 is an accepting state, so the merged state 3 also becomes accepting. ε-Closure(4) = {3, 4, 6}: copy the non-ε outbound edges of 3 and 6 to 4, obtaining the NFA.

NFA obtained by eliminating the ε edges

 

In general, NFA matching speed differs only slightly from DFA matching speed, but for many regular expressions sharing a common prefix, or for extreme input strings, the NFA may transition into many or even all of its states at every step. For example, aaaaaaaaaa, aaaaaaaaab, ..., aaaaaaaaaz are regular expressions differing only in their last character; on the input string aaaaaaaaaz, the automaton steps through the states of all 26 branches at every position before matching the last character. If N is the number of NFA states and L is the length of the input string, the worst-case time complexity of NFA matching is O(NL), which is poor. A DFA's transitions are deterministic: one scan of the input string determines the match, so its best- and worst-case time complexity is O(L). It is therefore better suited to performance-critical scenarios such as lexical analysis.

 

Subset Construction Method

 

The subset construction builds a DFA equivalent to a given NFA. NFA matching is really a sequence of state-set transitions: each reachable state set describes the current matching state, so we can map each reachable state set to a single state. If s1 and s2 are two state sets encountered during matching and s1 transitions to s2 on symbol t, let states d1 and d2 correspond to s1 and s2 respectively and add an edge labeled t from d1 to d2. Note that if the state set s1 contains accepting states, then d1 is an accepting state as well, accepting the highest-priority token among them; the same applies to s2 and d2. Carrying out this construction for every symbol transition between all reachable state sets of the NFA yields another automaton. Because every reachable state set of the NFA has a corresponding state in this automaton, it is equivalent to the NFA; and because none of its states has two outbound edges with the same label, it satisfies the definition of a DFA. It is thus a DFA equivalent to the NFA.
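A sketch of the subset construction in Python, assuming the NFA is given as a (state, symbol) -> state-set dictionary as in the earlier sketches; each discovered frozenset becomes one DFA state:

def subset_construction(trans, start, accept, symbols):
    start_set = frozenset({start})
    dfa_trans, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        current = worklist.pop()
        for a in symbols:
            nxt = frozenset(set().union(
                *(trans.get((s, a), set()) for s in current)))
            if not nxt:                       # no state reachable on 'a'
                continue
            dfa_trans[(current, a)] = nxt
            if nxt not in seen:               # a new state set: one new DFA state
                seen.add(nxt)
                worklist.append(nxt)
    dfa_accept = {d for d in seen if d & accept}
    return dfa_trans, start_set, dfa_accept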

 

Continuing with the NFA recognizing ab(a|b)*(ab)? from the previous section, start from the initial state set {1} and examine the transition of every state set on every input symbol until no new state set is produced. The table in the figure lists the transitions of each state set on each symbol; each state set in the table corresponds to one DFA state, giving the DFA shown on the right.

 

The process of subset construction

 

For the vast majority of NFAs, the subset construction efficiently builds the equivalent DFA. As noted above, each reachable state set of the NFA corresponds to one DFA state, and an NFA with n states has 2^n - 1 non-empty subsets; that is, for some extreme NFAs the number of DFA states produced by the subset construction can be exponential. For example, for the regular expression (a|b)*a(a|b)...(a|b) with n-1 trailing (a|b) factors, the minimal DFA has no fewer than 2^n states. Although such extremes exist, such regular expressions almost never occur in practical lexical analysis; and even if such a scenario arises, simply avoid a DFA-based engine and match with the NFA directly.

 

Minimizing the DFA

 

Every DFA has a unique equivalent minimal DFA, and the DFA obtained from an NFA by the subset construction is not necessarily minimal. Although DFA matching speed depends only on the length of the input string, a minimal DFA has fewer states and consumes fewer resources, so minimizing the number of DFA states is worthwhile.

 

The common method for minimizing a DFA is the splitting method, which reduces the number of states by merging equivalent states and deleting dead states. A non-accepting state that transitions to itself on every input symbol is a dead state: once reached, the automaton stays there forever and can never reach an accepting state, so the state is useless and can be deleted. If two states s and t transition to the same state on every input symbol, and s and t are either both accepting or both non-accepting, then s and t are equivalent. In the DFA on the left, states 2 and 3 are both non-accepting and transition to the same states on all symbols, so they are equivalent and can be merged, giving the minimal DFA on the right.

 

State 2 on the left is equivalent to state 3; after merging, the minimal DFA on the right is obtained

 

Once several states are determined to be equivalent, they can be merged into one state. Consequently, s and t are also equivalent if, rather than transitioning to the same state on every symbol, they transition to equivalent states. A more rigorous definition of equivalent states follows.

 

In a finite state automaton, two states s and t are equivalent if they satisfy the following two conditions:

Consistency condition: s and t must either both be accepting states (accepting the same token) or both be non-accepting states.

Propagation condition: for every input symbol, s and t must transition to equivalent states.

 

The goal of the splitting method is to divide all states into several disjoint subsets such that the states within each subset are equivalent and states in different subsets are not, and finally to merge all equivalent states.

 

First make the initial split: all non-accepting states form one subset, and the accepting states that accept the same token form one subset each. This yields n + 1 subsets (n being the number of tokens). Because the states of different subsets differ in which token they accept, or in whether they accept at all, by the consistency condition no two states from different subsets are equivalent.

 

After the initial split we have some mutually non-equivalent subsets, but we still cannot be sure that the states within each subset are all equivalent, so splitting must continue. The method is to scan every state s of each subset and compare it with the first state t of that subset; using the equivalence definition, decide whether s and t are equivalent. If not, the subset must be split: all states not equivalent to t are split off into a new subset. Because the determination of state equivalence has dependencies, each subset must be repeatedly re-examined until no subset can be split further.
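A sketch of this splitting procedure in Python, assuming a complete DFA (a dead state fills in any missing transitions), with 'trans' mapping (state, symbol) -> state and 'token_of' mapping each accepting state to its token. Rather than comparing against the first state of each subset, this version splits a subset by where its states' transitions land, which is equivalent:

def minimize(states, symbols, trans, token_of):
    # initial split: non-accepting states together, accepting states by token
    groups = {}
    for s in states:
        groups.setdefault(token_of.get(s), set()).add(s)
    partition = list(groups.values())
    changed = True
    while changed:
        changed = False
        index = {s: i for i, g in enumerate(partition) for s in g}
        new_partition = []
        for g in partition:
            buckets = {}
            for s in g:
                # key: which subset each transition of s leads into
                key = tuple(index[trans[(s, a)]] for a in symbols)
                buckets.setdefault(key, set()).add(s)
            new_partition.extend(buckets.values())
            if len(buckets) > 1:
                changed = True                # a subset was split; iterate again
        partition = new_partition
    return partition   # each subset merges into one state of the minimal DFA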

 

After this process we obtain several mutually non-equivalent subsets, and the states within each subset are equivalent. For each subset containing multiple states, merge all of its states into one; after merging, the minimal DFA is obtained, and the number of subsets is the minimal number of DFA states. An example of DFA minimization follows.

 

 

The initial split by accepting and non-accepting states gives {1, 3, 5, 6} and {2, 4, 7}.

Scanning subset {1, 3, 5, 6}: states 1 and 3 transition to non-equivalent states on symbols a and b, and states 1 and 5 likewise, so this subset is split into {1, 6} and {3, 5}.

Scanning subset {2, 4, 7}: states 2 and 4 transition to non-equivalent states on symbol a, so this subset is split into {2, 7} and {4}.

After the second round of splitting, the subsets are {1, 6}, {3, 5}, {2, 7}, {4}.

Scanning subset {3, 5}: states 3 and 5 transition to non-equivalent states on symbol a, so this subset is split into {3} and {5}.

After the third round of splitting, the subsets {1, 6}, {2, 7}, {3}, {4}, {5} can no longer be split, so the states within each subset are merged, finally yielding the minimal DFA.

 

 

From Regular Expression Engine to Lexical Analyzer

 

Everything above is really the foundation for building a regular expression engine, and an engine implemented this way stops once it has decided which token the input string matches; a lexical analyzer, by contrast, must match token after token from the input stream. To achieve this, let the automaton keep reading characters after it has matched a token; if a longer match or a higher-priority match is found, update the recorded match. When a state transition fails, report the most recently matched token to the syntax analyzer, rewind the input stream pointer to the character position just after that match, and reset the automaton to start matching the next token. If no token has been matched by the time no further transition is possible, the scanner has hit input that fits no token definition; discard one character and reset the automaton to match again.
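A sketch of this driver loop in Python, assuming a DFA object with a step(state, symbol) method returning the next state or None, and a token_at(state) method returning the token accepted in that state or None (an interface invented here for illustration):

def tokenize(dfa, text):
    pos, tokens = 0, []
    while pos < len(text):
        state, i = dfa.start, pos
        last_token, last_end = None, pos
        while i < len(text):
            state = dfa.step(state, text[i])
            if state is None:                 # transition failed
                break
            i += 1
            tok = dfa.token_at(state)
            if tok is not None:               # remember the longest match so far
                last_token, last_end = tok, i
        if last_token is None:
            pos += 1                          # nothing matches here: discard a character
        else:
            tokens.append((last_token, text[pos:last_end]))
            pos = last_end                    # resume right after the matched token
    return tokens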

 

