Preface
I have recently been doing Elasticsearch-related development, and one requirement was to import the data of a specified MySQL table into an Elasticsearch index. The general idea is straightforward: traverse each row of the table, convert it to JSON, and submit it to ES through the ES API.
The requirement looks simple, but there is one wrinkle: the operation has to be generic. The program should not need to care which remote table it is operating on, which fields it contains, and so on.
First edition
A good solution turned up on StackOverflow. Only the go-sql-driver/mysql driver is used here:
```go
func getJSON(sqlString string) (string, error) {
	rows, err := db.Query(sqlString)
	if err != nil {
		return "", err
	}
	defer rows.Close()
	columns, err := rows.Columns()
	if err != nil {
		return "", err
	}
	count := len(columns)
	tableData := make([]map[string]interface{}, 0)
	values := make([]interface{}, count)
	valuePtrs := make([]interface{}, count)
	for rows.Next() {
		for i := 0; i < count; i++ {
			valuePtrs[i] = &values[i]
		}
		rows.Scan(valuePtrs...)
		entry := make(map[string]interface{})
		for i, col := range columns {
			var v interface{}
			val := values[i]
			b, ok := val.([]byte)
			if ok {
				v = string(b)
			} else {
				v = val
			}
			entry[col] = v
		}
		tableData = append(tableData, entry)
	}
	jsonData, err := json.Marshal(tableData)
	if err != nil {
		return "", err
	}
	fmt.Println(string(jsonData))
	return string(jsonData), nil
}
```
Code derived from dumping MySQL tables to JSON with Golang
Second Edition
This code basically satisfies the requirement, but there is a problem: the type of each column is not preserved; every value comes back as a string. A more workable approach follows.
The go-sql-driver/mysql repository on GitHub has an issue about exactly this, "returned values are always []byte", with the following explanation:
That's a specialty of MySQL: you have to use prepared statements to get the native types. MySQL has two protocols; one transmits everything as text, the other as the "real" type. That binary protocol is only used if you use prepared statements. The driver is pretty much powerless to enforce a protocol, and the text protocol takes fewer resources on the server.
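To illustrate what this means on the Go side: over the text protocol every scanned value lands as []byte, so the scan loop has to decide what to do with it. A minimal sketch of the conversion step the article's code performs (the sample values are made up for illustration, not real driver output):

```go
package main

import "fmt"

// normalize mimics the conversion step from the article: []byte values
// (what the text protocol returns) are turned into strings, while values
// that already carry a native type (what the binary protocol returns)
// pass through unchanged.
func normalize(values []interface{}) []interface{} {
	out := make([]interface{}, len(values))
	for i, v := range values {
		if b, ok := v.([]byte); ok {
			out[i] = string(b)
		} else {
			out[i] = v
		}
	}
	return out
}

func main() {
	// Text protocol: everything arrives as []byte, so type information is lost.
	fmt.Println(normalize([]interface{}{[]byte("42"), []byte("3.14")}))
	// Binary protocol (prepared statements): native types survive the scan.
	fmt.Println(normalize([]interface{}{int64(42), 3.14}))
}
```

Both calls print the same text, but only the second slice still distinguishes integers and floats from strings, which is exactly what matters when the JSON is later indexed into Elasticsearch.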
This could help you:
```go
stmt, err := db.Prepare(sqlString)
if err != nil {
	// ...
}
defer stmt.Close()
rows, err := stmt.Query()
```
The explanation is quite clear. Following that logic, I changed the code to the following form:
```go
query := fmt.Sprintf("%s LIMIT %d OFFSET %d", handler.querySQL, limit, offset)
stmt, err := db.Prepare(query)
if err != nil {
	panic(err.Error())
}
defer stmt.Close()
rows, err := stmt.Query()
if err != nil {
	panic(err.Error())
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
	panic(err.Error())
}
count := len(columns)
values := make([]interface{}, count)
scanArgs := make([]interface{}, count)
for i := range values {
	scanArgs[i] = &values[i]
}
for rows.Next() {
	err := rows.Scan(scanArgs...)
	if err != nil {
		log.Printf("scan data error: %s", err.Error())
		continue
	}
	entry := make(map[string]interface{})
	for i, col := range columns {
		v := values[i]
		b, ok := v.([]byte)
		if ok {
			entry[col] = string(b)
		} else {
			entry[col] = v
		}
	}
	// serialize the row
	b, err := json.Marshal(entry)
	if err != nil {
		log.Print(err.Error())
		continue
	}
	handler.docChan <- &document{
		// the first column of the table is used to compute the document ID
		ID:   handler.formatDocumentID(handler.dbName, handler.table, entry[columns[0]]),
		Data: b,
	}
}
```
In testing, this only partially solved the type problem: integer columns now keep their type, but floating-point columns still come back as strings. The exploration continues.
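One possible workaround for the remaining floating-point case (my own sketch, not from the original post): database/sql exposes rows.ColumnTypes(), and each sql.ColumnType reports a DatabaseTypeName that can drive an explicit conversion of the raw []byte. The type names matched below are an assumption based on what the MySQL driver commonly reports; verify them against your driver version.

```go
package main

import (
	"fmt"
	"strconv"
)

// convertValue parses a raw []byte value according to the column's
// database type name (as reported by sql.ColumnType.DatabaseTypeName).
// Unknown types fall back to plain strings.
// NOTE: the type-name lists are assumptions about the MySQL driver's naming.
func convertValue(dbType string, raw []byte) interface{} {
	s := string(raw)
	switch dbType {
	case "FLOAT", "DOUBLE", "DECIMAL":
		if f, err := strconv.ParseFloat(s, 64); err == nil {
			return f
		}
	case "TINYINT", "SMALLINT", "INT", "BIGINT":
		if n, err := strconv.ParseInt(s, 10, 64); err == nil {
			return n
		}
	}
	return s
}

func main() {
	fmt.Println(convertValue("DECIMAL", []byte("3.14")))  // float64
	fmt.Println(convertValue("INT", []byte("42")))        // int64
	fmt.Println(convertValue("VARCHAR", []byte("hello"))) // string
}
```

In the scan loop, the idea would be to fetch rows.ColumnTypes() once alongside rows.Columns() and call a function like this for each column instead of the blanket []byte-to-string conversion.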
To be continued...