Application scenario: HTTP requests over a single connection are ordered. The requirement is that the request sent (written) first is also the one whose response is read first, and every write is paired with a read (one request, one response).
Let's go straight to the code:
func (cc *ClientConn) Do(req *http.Request) (resp *http.Response, err error) {
	err = cc.Write(req) // client sends the request to the HTTP server
	if err != nil {
		return
	}
	return cc.Read(req) // client reads the data the HTTP server sends back
}
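For context, here is a minimal usage sketch of this ClientConn (it lives in net/http/httputil as the deprecated httputil.ClientConn; the host and request below are made up for illustration):

package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httputil"
)

func main() {
	// Dial a raw TCP connection ourselves and hand it to ClientConn,
	// which layers the pipelining logic described here on top of it.
	conn, err := net.Dial("tcp", "example.com:80") // example host, assumed reachable
	if err != nil {
		panic(err)
	}
	cc := httputil.NewClientConn(conn, nil)
	defer cc.Close()

	req, err := http.NewRequest("GET", "http://example.com/", nil)
	if err != nil {
		panic(err)
	}
	// Do is Write(req) followed by Read(req), both kept in pipeline order.
	resp, err := cc.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}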
Take a closer look at what happens inside Write:
func (cc *ClientConn) Write(req *http.Request) (err error) {
	// Ensure ordered execution of Writes:
	// generate a sequential id so that id i runs before id i+1
	id := cc.pipe.Next()
	cc.pipe.StartRequest(id)
	// This is the key part
	defer func() {
		cc.pipe.EndRequest(id) // the current id is done; unblock the request with id+1
		if err != nil {
			cc.pipe.StartResponse(id)
			cc.pipe.EndResponse(id)
		} else {
			// remember the pipeline id of this request
			cc.lk.Lock()
			cc.pipereq[req] = id // save the req -> id pair so the later Read keeps the same order
			cc.lk.Unlock()
		}
	}()

	cc.lk.Lock() // lock to prevent conflicting access
	// check the read/write error state and whether the net.Conn has been closed;
	// the concrete struct behind cc is shown below
	if cc.re != nil { // no point sending if read-side closed or broken
		defer cc.lk.Unlock()
		return cc.re
	}
	if cc.we != nil {
		defer cc.lk.Unlock()
		return cc.we
	}
	if cc.c == nil { // connection closed by user in the meantime
		defer cc.lk.Unlock()
		return errClosed
	}
	c := cc.c
	if req.Close {
		// we write the EOF to the write-side error, because there
		// still might be some pipelined reads
		cc.we = ErrPersistEOF
	}
	cc.lk.Unlock()

	// the actual write of the request; the steps above only guarantee the order
	err = cc.writeReq(req, c)
	cc.lk.Lock()
	defer cc.lk.Unlock()
	if err != nil {
		cc.we = err
		return err
	}
	cc.nwritten++ // one more request written

	return nil
}
Look at the structure of cc:
type ClientConn struct {
	lk              sync.Mutex    // mutex protecting the fields below
	c               net.Conn      // the underlying Go connection interface
	r               *bufio.Reader // buffered reader
	re, we          error         // read/write errors
	lastbody        io.ReadCloser // body of the last response read
	nread, nwritten int           // number of reads and writes so far
	pipereq         map[*http.Request]uint // saves the request -> id pairs

	pipe     textproto.Pipeline
	writeReq func(*http.Request, io.Writer) error // function that actually writes the request data
}

type Pipeline struct {
	mu       sync.Mutex
	id       uint
	request  sequencer
	response sequencer
}

type sequencer struct {
	mu   sync.Mutex
	id   uint
	wait map[uint]chan uint // channels used to block operations that arrive out of order
}
How exactly is the ordering implemented?
// Next hands out ids in increasing order
func (p *Pipeline) Next() uint {
	p.mu.Lock()
	id := p.id
	p.id++
	p.mu.Unlock()
	return id
}

// StartResponse simply calls the Start method of the response sequencer
func (p *Pipeline) StartResponse(id uint) {
	p.response.Start(id)
}

func (s *sequencer) Start(id uint) {
	s.mu.Lock()
	if s.id == id { // it is this id's turn, no need to block
		s.mu.Unlock()
		return
	}
	c := make(chan uint)
	if s.wait == nil {
		s.wait = make(map[uint]chan uint)
	}
	s.wait[id] = c // record the channel in the map
	s.mu.Unlock()
	<-c // block until End of the previous id writes into c
}

// End is called when the previous id has finished; it wakes up the next id's blocked Start
func (s *sequencer) End(id uint) {
	s.mu.Lock()
	if s.id != id {
		panic("out of sync")
	}
	id++ // id now refers to the next one in line
	s.id = id
	if s.wait == nil {
		s.wait = make(map[uint]chan uint)
	}
	c, ok := s.wait[id]
	if ok {
		delete(s.wait, id) // remove the channel from the map
	}
	s.mu.Unlock()
	if ok {
		c <- 1 // write into the channel and release the blocked Start
	}
}
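To see the sequencer in action on its own, here is a small self-contained demo of my own (not from the standard library) that drives textproto.Pipeline from several goroutines; the "send" and "read" lines always come out in id order even though the goroutines run concurrently:

package main

import (
	"fmt"
	"net/textproto"
	"sync"
	"time"
)

func main() {
	var p textproto.Pipeline
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			id := p.Next() // ids are handed out in call order

			p.StartRequest(id) // blocks until every smaller id has called EndRequest
			fmt.Println("send request", id)
			time.Sleep(10 * time.Millisecond) // simulate writing the request
			p.EndRequest(id) // unblocks StartRequest for id+1

			p.StartResponse(id) // blocks until every smaller id has called EndResponse
			fmt.Println("read response", id)
			p.EndResponse(id) // unblocks StartResponse for id+1
		}()
	}
	wg.Wait()
}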
Similarly, the read path follows the same pattern as the write path, as sketched below.
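Here is a simplified sketch of the read side following that pattern (error handling and draining of the previous response body are left out, so treat it as an outline rather than the exact standard-library code):

func (cc *ClientConn) Read(req *http.Request) (resp *http.Response, err error) {
	// Look up the pipeline id that Write recorded for this request.
	cc.lk.Lock()
	id, ok := cc.pipereq[req]
	delete(cc.pipereq, req)
	cc.lk.Unlock()
	if !ok {
		return nil, ErrPipeline
	}

	// Block until all earlier responses have been read, then let id+1 proceed.
	cc.pipe.StartResponse(id)
	defer cc.pipe.EndResponse(id)

	resp, err = http.ReadResponse(cc.r, req) // parse the response from the buffered reader
	if err == nil {
		cc.lk.Lock()
		cc.nread++
		cc.lk.Unlock()
	}
	return resp, err
}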
As you can see, Go combines mutexes, channels, and the pipeline mechanism to keep execution sequential in a concurrent environment: this HTTP ClientConn sends requests by serial number and reads responses by serial number.