Before
It turns out that Go's advantages are not limited to lightweight threads (goroutines) and the convenient, flexible concurrency patterns they enable; its I/O machinery is also designed to be very flexible.
Previously, when I needed to send JSON data to another server, I would declare a byte buffer, marshal the struct into it with the encoding/json package, and then send it with the Post function.
The code is as follows:
package main

import (
    "bytes"
    "encoding/json"
    "io/ioutil"
    "log"
    "net/http"
)

func init() {
    log.SetFlags(log.Lshortfile)
}

func main() {
    cli := http.Client{}
    msg := struct {
        Name, Addr string
        Price      float64
    }{
        Name:  "hello",
        Addr:  "beijing",
        Price: 123.56,
    }
    buf := bytes.NewBuffer(nil)
    json.NewEncoder(buf).Encode(msg)
    resp, err := cli.Post("http://localhost:9999/json", "application/json", buf)
    if err != nil {
        log.Fatalln(err)
    }
    body := resp.Body
    defer body.Close()
    if body_bytes, err := ioutil.ReadAll(body); err == nil {
        log.Println("response:", string(body_bytes))
    } else {
        log.Fatalln(err)
    }
}
This approach always requires pre-allocating a buffer in memory. In most cases that is fine, but when a large amount of data has to be sent, the entire encoded body sits in memory before the request even starts, which seriously hurts memory usage and performance.
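To make that cost concrete, here is a minimal sketch of the buffered approach. The slice of one million integers is a hypothetical payload, not part of the examples in this article; the point is that the whole JSON body exists in memory before Post would even be called.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
)

func main() {
    // Hypothetical large payload for illustration only.
    items := make([]int, 1000000)

    buf := bytes.NewBuffer(nil)
    if err := json.NewEncoder(buf).Encode(items); err != nil {
        panic(err)
    }

    // The entire encoded body is held in the buffer before any network I/O happens.
    fmt.Println("buffered bytes before sending:", buf.Len())
}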
After
Go provides io.Pipe to solve exactly this problem, so the code can be changed to:
package main

import (
    "encoding/json"
    "io"
    "io/ioutil"
    "log"
    "net/http"
    "time"
)

func init() {
    log.SetFlags(log.Lshortfile)
}

func main() {
    cli := http.Client{}
    msg := struct {
        Name, Addr string
        Price      float64
    }{
        Name:  "hello",
        Addr:  "beijing",
        Price: 123.56,
    }
    r, w := io.Pipe() // pay attention to the logic here!
    go func() {
        defer func() {
            time.Sleep(time.Second * 2)
            log.Println("encode complete")
            // only after the write end is closed will the Post method return
            w.Close()
        }()
        log.Println("pipe ready to output")
        // encoding starts and data is transmitted only once Post begins to read
        err := json.NewEncoder(w).Encode(msg)
        log.Println("pipe output data complete")
        if err != nil {
            log.Fatalln("encode json failed:", err)
        }
    }()
    time.Sleep(time.Second * 1)
    log.Println("start reading data from pipe")
    resp, err := cli.Post("http://localhost:9999/json", "application/json", r)
    if err != nil {
        log.Fatalln(err)
    }
    log.Println("POST transfer complete")
    body := resp.Body
    defer body.Close()
    if body_bytes, err := ioutil.ReadAll(body); err == nil {
        log.Println("response:", string(body_bytes))
    } else {
        log.Fatalln(err)
    }
}
The output is as follows:
main.go:35: pipe ready to output
main.go:44: start reading data from pipe
main.go:38: pipe output data complete
main.go:31: encode complete
main.go:50: POST transfer complete
main.go:56: response: {"Name":"hello","Addr":"beijing","Price":123.56}
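This ordering follows directly from how io.Pipe behaves: the pipe has no internal buffer, so each write blocks until the other end reads it, and the reader only sees EOF once the writer is closed. A minimal standalone sketch, independent of the HTTP client above, shows the same rendezvous:

package main

import (
    "io"
    "io/ioutil"
    "log"
)

func main() {
    r, w := io.Pipe()
    go func() {
        defer w.Close() // closing the write end gives the reader EOF
        // This write blocks until the reader below starts consuming.
        io.WriteString(w, "hello through the pipe")
        log.Println("write finished")
    }()

    data, _ := ioutil.ReadAll(r) // unblocks the writer, returns after w.Close()
    log.Println("read:", string(data))
}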
As you can see, the pipe lets us connect the encoder's output directly to the request's input, as long as we understand the flow correctly. With it we can finally get rid of the annoying intermediate buffer, and the reduced memory pressure also improves the stability of the system.
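The sleeps in the example above exist only to make the log ordering visible. In real code they can be dropped, and the write end can be closed with CloseWithError so that an encoding failure reaches the Post call instead of being lost inside the goroutine. A sketch of that variant, against the same hypothetical localhost:9999 endpoint:

package main

import (
    "encoding/json"
    "io"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    cli := http.Client{}
    msg := struct {
        Name, Addr string
        Price      float64
    }{Name: "hello", Addr: "beijing", Price: 123.56}

    r, w := io.Pipe()
    go func() {
        // CloseWithError closes the write end and hands any encoding error to the
        // reader, so Post fails loudly instead of sending a truncated body.
        // With a nil error this behaves like a normal Close (the reader gets EOF).
        w.CloseWithError(json.NewEncoder(w).Encode(msg))
    }()

    resp, err := cli.Post("http://localhost:9999/json", "application/json", r)
    if err != nil {
        log.Fatalln(err)
    }
    defer resp.Body.Close()
    body_bytes, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalln(err)
    }
    log.Println("response:", string(body_bytes))
}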
Server-side code
For easier debugging, here is the server-side code used in the examples above:
package main

import (
    "encoding/json"
    "io/ioutil"
    "log"
    "net/http"
)

func init() {
    log.SetFlags(log.Lshortfile)
}

func main() {
    http.HandleFunc("/json", handleJson)
    http.ListenAndServe(":9999", nil)
}

func handleJson(resp http.ResponseWriter, req *http.Request) {
    if req.Method == "POST" {
        body := req.Body
        defer body.Close()
        body_bytes, err := ioutil.ReadAll(body)
        if err != nil {
            log.Println(err)
            resp.Write([]byte(err.Error()))
            return
        }
        // make sure the body is valid JSON before echoing it back
        j := map[string]interface{}{}
        if err := json.Unmarshal(body_bytes, &j); err != nil {
            log.Println(err)
            resp.Write([]byte(err.Error()))
            return
        }
        resp.Write(body_bytes)
    } else {
        resp.Write([]byte("Please use POST method!"))
    }
}
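The same streaming idea applies on the receiving side: instead of buffering the whole body with ReadAll, the handler can decode directly from req.Body. Below is a minimal sketch of that variant; handleJsonStreaming is a hypothetical name, not part of the original code, and it re-encodes the parsed value instead of echoing the raw bytes, since those are no longer kept around.

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/json", handleJsonStreaming)
    http.ListenAndServe(":9999", nil)
}

// handleJsonStreaming decodes the request body as a stream rather than
// reading it fully into memory first.
func handleJsonStreaming(resp http.ResponseWriter, req *http.Request) {
    if req.Method != "POST" {
        resp.Write([]byte("Please use POST method!"))
        return
    }
    defer req.Body.Close()

    j := map[string]interface{}{}
    if err := json.NewDecoder(req.Body).Decode(&j); err != nil {
        log.Println(err)
        resp.Write([]byte(err.Error()))
        return
    }
    // Re-encode the parsed value as the response.
    json.NewEncoder(resp).Encode(j)
}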