Paper: https://einstein.ai/static/images/layouts/research/seq2sql/seq2sql.pdf
Dataset: https://github.com/salesforce/WikiSQL
Seq2SQL belongs to the field of natural language interfaces (NLI), which make it convenient for ordinary users to access and query the contents of a database: users do not need to understand SQL statements, and can retrieve the content they need through natural language alone. Seq2SQL draws on the seq2seq idea. Similar to the seq2seq models applied to machine translation and chatbots, Seq2SQL encodes the input sentence and then decodes it into a structured SQL query; reinforcement learning is applied in the last of Seq2SQL's modules. At the same time, the paper introduces a dataset, WikiSQL, which contains manually annotated questions and their corresponding SQL statements. Test results show that the accuracy of Seq2SQL is not particularly high, only 60.3%.

Seq2SQL structure: Seq2SQL consists of three parts.
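The augmented pointer network underlying Seq2SQL can only output tokens that appear in its input, so the input sequence is built by concatenating the table's column names, a small SQL vocabulary, and the question. A minimal sketch of that construction, where the separator tokens and the exact vocabulary list are illustrative assumptions, not taken from the paper's code:

```python
# Illustrative SQL vocabulary; the sentinel tokens below are assumptions.
SQL_VOCAB = ["SELECT", "WHERE", "COUNT", "MIN", "MAX", "AND", "=", ">", "<"]

def build_input_sequence(column_names, question_tokens):
    """Concatenate column names, SQL keywords, and the question so the
    decoder can only 'point' at tokens that may appear in the query."""
    seq = []
    for col in column_names:
        seq += col.split() + ["<col_sep>"]   # separator token is an assumption
    seq += SQL_VOCAB + ["<sql_sep>"]
    seq += question_tokens + ["<end>"]
    return seq

seq = build_input_sequence(["Player", "No. of games"],
                           "How many games did Smith play ?".split())
print(seq[:4])  # → ['Player', '<col_sep>', 'No.', 'of']
```

Restricting the output space this way is what lets the same pointer mechanism generate column names, SQL keywords, and condition values copied from the question.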
Part I: Aggregation classifier. This part is actually a classifier that maps the user's input question to an aggregation operation such as SELECT COUNT/MAX/MIN (or no aggregation). The augmented pointer network used here is, in general, an encoder-decoder structure: the encoder is a two-layer bi-LSTM and the decoder is a two-layer unidirectional LSTM. The encoder outputs hidden states h, where h_t corresponds to the t-th input token. At each decoding step s, the decoder takes the previous output y_{s-1} and produces a state g_s; it then computes an attention score for each input position t, and a softmax over these scores yields the final pointer distribution. For the aggregation classifier, Seq2SQL first computes attention scores over the encoder states, in the same way as the augmented pointer network, and normalizes them with a softmax; the resulting weights are used to form a representation vector κ^agg for the input (agg: aggregation classifier, inp: input, enc: encoder). The classification task is then completed by passing κ^agg through a multi-layer network and a softmax.
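The attention-then-classify step above can be sketched in a few lines. This is a minimal numpy illustration, assuming the encoder hidden states are already computed; the weight names, dimensions, and the use of a single linear layer (instead of a deeper network) are simplifications:

```python
import numpy as np

# Standard aggregation operations in WikiSQL ("" means no aggregation).
AGG_OPS = ["", "COUNT", "MIN", "MAX", "SUM", "AVG"]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregation_classifier(enc_states, w_att, W_agg):
    """enc_states: (T, d) encoder outputs for the T question tokens.
    1) scalar attention score per token, 2) attention-weighted sum
    gives kappa_agg, 3) linear layer + softmax over aggregation ops."""
    scores = enc_states @ w_att      # (T,) one score per input position
    alpha = softmax(scores)          # attention distribution over tokens
    kappa_agg = alpha @ enc_states   # (d,) input representation
    return softmax(W_agg @ kappa_agg)  # distribution over AGG_OPS

rng = np.random.default_rng(0)
T, d = 5, 8                          # toy sequence length and hidden size
probs = aggregation_classifier(rng.normal(size=(T, d)),
                               rng.normal(size=d),
                               rng.normal(size=(len(AGG_OPS), d)))
print(probs.shape)  # → (6,)
```

With random weights the output is of course meaningless; the point is the shape of the computation: attention collapses the variable-length question into one vector, which a classifier then maps to an aggregation operation.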
Part II: SELECT column. This part determines which column the user's question refers to. Each column name is first encoded with an LSTM, and the user's input is encoded into a representation similar to the one in the first part. Finally, a multi-layer network and a softmax determine which column is selected.
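The column-selection step can be sketched the same way. Here a dot-product score between each column encoding and the question representation stands in for the paper's multi-layer scoring network; all names and dimensions are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_column(col_encodings, kappa_sel):
    """col_encodings: (C, d), one vector per encoded column name.
    kappa_sel: (d,) attention-weighted question representation.
    Returns a softmax distribution over the C candidate columns.
    A dot-product score stands in for the paper's scoring MLP."""
    scores = col_encodings @ kappa_sel   # (C,) one score per column
    return softmax(scores)

rng = np.random.default_rng(1)
cols = rng.normal(size=(4, 8))   # 4 candidate columns, hidden size 8
kappa = rng.normal(size=8)
p = select_column(cols, kappa)
predicted = int(np.argmax(p))    # index of the selected column
```

Because each column name gets its own encoding, the same model handles tables with any number of columns: the softmax is simply taken over however many candidates the table provides.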
Part III: WHERE clause. This part determines the query's constraints. Because the generated SQL may not be written the same way as the annotated SQL yet still produce the same execution result, cross-entropy cannot be used as the training loss as in the first two parts. Instead, a reward function from reinforcement learning is used (G: ground truth), based on executing the generated query, and the loss is optimized with policy gradients.

WikiSQL: WikiSQL contains a collection of natural-language questions with their corresponding SQL queries and tables.
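The execution-based reward and the policy-gradient loss can be sketched as follows. The reward values (+1 for a matching execution result, -1 for a valid query with a wrong result, -2 for invalid SQL) follow the paper; the `execute` function is a hypothetical stand-in for running a query against the table:

```python
def reward(generated_sql, ground_truth_sql, execute):
    """+1 if the generated query executes to the same result as the
    ground truth, -1 if it executes to a different result, -2 if it
    is not valid SQL. `execute` is an assumed query-execution hook."""
    try:
        result = execute(generated_sql)
    except Exception:          # invalid query fails to execute
        return -2.0
    return 1.0 if result == execute(ground_truth_sql) else -1.0

def reinforce_loss(log_probs, r):
    """REINFORCE-style surrogate loss: negative reward times the sum
    of log-probabilities of the tokens emitted for the WHERE clause."""
    return -r * sum(log_probs)

# Toy example: a dict plays the role of the table, and lookup plays
# the role of SQL execution (both are illustrative, not real SQL).
table = {"a": 1, "b": 2}
execute = lambda q: table[q]
r = reward("a", "a", execute)           # correct result → 1.0
loss = reinforce_loss([-0.1, -0.2], r)
```

Scaling the log-likelihood by the scalar reward is what lets the model learn from queries that are textually different from the annotation but equivalent under execution.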
Seq2SQL: using reinforcement learning to generate SQL from natural language