While writing an image server in Go, I found that memory kept growing during bulk download and compression...
Fortunately, Go ships with built-in memory profiling and an analysis tool (pprof) that make it easy to see where the memory is going.
For details, refer to:
http://blog.golang.org/profiling-go-programs
Both CPU and memory usage can be profiled while the program is running.
This post focuses on how memory is being used; CPU profiling was covered in a previous article.
// The pprof package must be imported
import "runtime/pprof"

// Flag naming the file the memory profile is written to
var memprofile = flag.String("memprofile", "", "write memory profile to this file")

// Decide whether a memory profile should be recorded
if *memprofile != "" {
	var err error
	memFile, err = os.Create(*memprofile)
	if err != nil {
		log.Println(err)
	} else {
		log.Println("start write heap profile....")
		pprof.WriteHeapProfile(memFile)
		defer memFile.Close()
	}
}
A more flexible approach is to turn recording on and off through an HTTP request, enabling it only when needed and disabling it otherwise; remember to protect the endpoint with a token.
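As a minimal sketch of that idea, the standard net/http/pprof package (the same one the commented-out lines in the full code below refer to) can expose the profiling endpoints on a private port; binding to localhost and the note about adding your own token check are assumptions to adapt to your setup:

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func init() {
	go func() {
		// Keep this bound to localhost, and put your own token / auth check in
		// front of the handlers before exposing them any wider.
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
}

A heap profile can then be pulled on demand by pointing go tool pprof at http://localhost:6060/debug/pprof/heap, without restarting the process. The rest of this post uses the flag-based approach.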
The full code is as follows:
// GODEBUG=schedtrace=1000 ./trace_example
// GOMAXPROCS=2 GODEBUG=schedtrace=1000 ./trace_example
// GOMAXPROCS=2 GODEBUG=schedtrace=1000,scheddetail=1 ./trace_example
package main

import (
	"flag"
	"log"
	"os"
	"runtime/pprof"
	// "net/http"
	// _ "net/http/pprof"
	"sync"
	"time"
)

// http://www.graphviz.org/Download_macos.php
// var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")
var memprofile = flag.String("memprofile", "", "write memory profile to this file")
var memFile *os.File

func main() {
	flag.Parse()
	// if *cpuprofile != "" {
	//     f, err := os.Create(*cpuprofile)
	//     if err != nil { log.Fatal(err) }
	//     pprof.StartCPUProfile(f)
	//     defer pprof.StopCPUProfile()
	// }
	if *memprofile != "" {
		var err error
		memFile, err = os.Create(*memprofile)
		if err != nil {
			log.Println(err)
		} else {
			log.Println("start write heap profile....")
			pprof.WriteHeapProfile(memFile)
			defer memFile.Close()
		}
	}
	// go func() { log.Println(http.ListenAndServe("localhost:6060", nil)) }()
	var wg sync.WaitGroup
	wg.Add(10)
	for i := 0; i < 10; i++ {
		go work(&wg)
	}
	wg.Wait()
	// Wait to see the global run queue deplete.
	time.Sleep(300 * time.Second)
}

func work(wg *sync.WaitGroup) {
	time.Sleep(time.Second)
	var counter int
	for i := 0; i < 1e10; i++ {
		time.Sleep(time.Millisecond * 100)
		pprof.WriteHeapProfile(memFile)
		counter++
	}
	wg.Done()
}
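To record a profile with this program, build it and pass the -memprofile flag when running it; the binary name matches the comments above, and the output file name mem.prof is only an example:

go build -o trace_example .
./trace_example -memprofile=mem.prof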
OK. With memory profiling added, keep the service running for a while, then stop the program and analyze the recorded data with the following command (pointing go tool pprof at the binary together with the profile file it wrote):

go tool pprof image_service
(pprof) top20
2622.12MB of 4938.25MB total (53.10%)
Dropped 180 nodes (cum <= 24.69MB)
Showing top 20 nodes out of 30 (cum >= 419.23MB)
flat flat% sum% cum cum%
1759.43MB 35.63% 35.63% 1759.43MB 35.63% bytes.makeSlice
203.06MB 4.11% 39.74% 320.58MB 6.49% net/url.parseQuery
166.11MB 3.36% 43.10% 166.11MB 3.36% net/textproto.(*Reader).ReadLine
132.03MB 2.67% 45.78% 132.03MB 2.67% net/textproto.(*Reader).ReadMIMEHeader
117.52MB 2.38% 48.16% 117.52MB 2.38% net/url.unescape
71.02MB 1.44% 49.60% 71.02MB 1.44% mcommoninit
60.50MB 1.23% 50.82% 60.50MB 1.23% fmt.Sprintf
37.51MB 0.76% 51.58% 98.01MB 1.98% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).HandleRedo
35.51MB 0.72% 52.30% 333.65MB 6.76% net/http.ReadRequest
21.37MB 0.43% 52.73% 21.37MB 0.43% github.com/gographics/imagick/imagick._Cfunc_GoBytes
17.57MB 0.36% 53.09% 17.57MB 0.36% bufio.NewReaderSize
0.50MB 0.01% 53.10% 21.58MB 0.44% net/http.(*Transport).dialConn
0 0% 53.10% 21.87MB 0.44% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).CompressWithSizeList
0 0% 53.10% 1781.66MB 36.08% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).DoRecompress
0 0% 53.10% 1759.29MB 35.63% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).httpGetToMagickWand
0 0% 53.10% 17.57MB 0.36% bufio.NewReader
0 0% 53.10% 1759.43MB 35.63% bytes.(*Buffer).ReadFrom
0 0% 53.10% 21.37MB 0.43% github.com/gographics/imagick/imagick.(*MagickWand).GetImageBlob
0 0% 53.10% 419.23MB 8.49% main.(*ImageService).ServeHTTP
0 0% 53.10% 419.23MB 8.49% main.Action
(pprof) quit
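Besides top20, the interactive session can also point at the offending call sites, for example by listing a function or rendering the call graph (the graphviz link in the code comments above is needed for the graph output); roughly:

(pprof) list httpGetToMagickWand
(pprof) web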
From this first profile it is clear that the download-and-compress path is allocating far too many byte slices: bytes.makeSlice dominates, reached through httpGetToMagickWand and DoRecompress.
Reading through the code did not reveal an obvious cause, until I came across this article:
http://openmymind.net/Go-Slices-And-The-Case-Of-The-Missing-Memory/
buffer := bytes.NewBuffer(make([]byte, 0, resp.ContentLength))
buffer.ReadFrom(resp.Body)
body := buffer.Bytes()
A Memory Leak
Look, what's a memory leak within the context of a runtime that provides garbage collection? Typically it's either a rooted object, or a reference from a rooted object, which you haven't considered. This is obviously different as it's really extra memory you might not be aware of. Rooting the object might very well be intentional, but you don't realize just how much memory it is you've rooted. Sure, my ignorance is at least 75% to blame. Yet I can't help but shake the feeling that there's something too subtle about all of this. Any code can return something that looks and quacks like an array of 2 integers yet takes gigs of memory. Furthermore, bytes.MinRead as a global variable is just bad design. I can't imagine how many people think they've allocated X when they've really allocated X*2+512.
The gist is that ReadFrom always wants at least bytes.MinRead (512 bytes) of free space in the buffer before each read from the underlying reader; when there is not enough room, it allocates a new slice of twice the current capacity plus MinRead.
The buffer source code shows this:
// ReadFrom reads data from r until EOF and appends it to the buffer, growing
// the buffer as needed. The return value n is the number of bytes read. Any
// error except io.EOF encountered during the read is also returned. If the
// buffer becomes too large, ReadFrom will panic with ErrTooLarge.
func (b *Buffer) ReadFrom(r io.Reader) (n int64, err error) {
	b.lastRead = opInvalid
	// If buffer is empty, reset to recover space.
	if b.off >= len(b.buf) {
		b.Truncate(0)
	}
	for {
		if free := cap(b.buf) - len(b.buf); free < MinRead {
			// not enough space at end
			newBuf := b.buf
			if b.off+free < MinRead {
				// not enough space using beginning of buffer;
				// double buffer capacity
				newBuf = makeSlice(2*cap(b.buf) + MinRead)
			}
			copy(newBuf, b.buf[b.off:])
			b.buf = newBuf[:len(b.buf)-b.off]
			b.off = 0
		}
		m, e := r.Read(b.buf[len(b.buf):cap(b.buf)])
		b.buf = b.buf[0 : len(b.buf)+m]
		n += int64(m)
		if e == io.EOF {
			break
		}
		if e != nil {
			return n, e
		}
	}
	return n, nil // err is EOF, so return nil explicitly
}
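A standalone sketch that makes this growth rule visible (written against the implementation quoted above; the exact capacity may differ slightly on newer Go versions):

package main

import (
	"bytes"
	"fmt"
	"strings"
)

func main() {
	payload := strings.Repeat("x", 1<<20) // stand-in for a 1 MiB HTTP body

	// Pre-allocating exactly the body size still ends up roughly doubling:
	// once the body fills the buffer there is no free space left, so the next
	// ReadFrom iteration allocates 2*cap + bytes.MinRead before it sees EOF.
	buf := bytes.NewBuffer(make([]byte, 0, len(payload)))
	buf.ReadFrom(strings.NewReader(payload))
	fmt.Println("len:", buf.Len(), "cap:", cap(buf.Bytes())) // cap is about 2 MiB + 512
}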
Solution:
// ioutil.ReadAll starts at a very small 512
// it really should let you specify an initial size
buffer := bytes.NewBuffer(make([]byte, 0, 65536))
io.Copy(buffer, r.Body)
temp := buffer.Bytes()
length := len(temp)
var body []byte
// are we wasting more than 10% space?
if cap(temp) > (length + length/10) {
	body = make([]byte, length)
	copy(body, temp)
} else {
	body = temp
}
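For the download path in the image server itself, another variant (my own sketch, not code from the article or from the server; httpGetBody, the package name, and the 64 KiB fallback are hypothetical) is to size the buffer at ContentLength plus bytes.MinRead, so ReadFrom never needs to grow it at all:

package download // hypothetical package name, sketch only

import (
	"bytes"
	"net/http"
)

// httpGetBody downloads url and returns the body without the buffer
// ever being reallocated.
func httpGetBody(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Reserve ContentLength plus bytes.MinRead so the final short read still
	// has room and the buffer is never doubled. Fall back to 64 KiB when the
	// length is unknown (ContentLength == -1).
	size := int(resp.ContentLength)
	if size < 0 {
		size = 64 * 1024
	}
	buf := bytes.NewBuffer(make([]byte, 0, size+bytes.MinRead))
	if _, err := buf.ReadFrom(resp.Body); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}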
After a bit more testing, the memory does eventually get garbage collected. Why does it behave this way? The new profile looks like this:
Entering interactive mode (type "help" for commands)
(pprof) top20
834.66MB of 1599.63MB total (52.18%)
Dropped 175 nodes (cum <= 8MB)
Showing top 20 nodes out of 25 (cum >= 72.01MB)
flat flat% sum% cum cum%
427.45MB 26.72% 26.72% 427.45MB 26.72% bytes.makeSlice
185.80MB 11.62% 38.34% 614.25MB 38.40% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).httpGetToMagickWand
69.01MB 4.31% 42.65% 69.01MB 4.31% net/textproto.(*Reader).ReadMIMEHeader
48MB 3.00% 45.65% 48MB 3.00% net/url.unescape
24.51MB 1.53% 47.18% 24.51MB 1.53% mcommoninit
24.01MB 1.50% 48.68% 72.01MB 4.50% net/url.parseQuery
24MB 1.50% 50.19% 117.02MB 7.32% net/http.ReadRequest
24MB 1.50% 51.69% 24MB 1.50% net/url.parse
7.87MB 0.49% 52.18% 7.87MB 0.49% github.com/gographics/imagick/imagick._Cfunc_GoBytes
0 0% 52.18% 7.87MB 0.49% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).CompressWithSizeList
0 0% 52.18% 622.62MB 38.92% _/home/qingpingzhang/project/createdji_servers/image_service/image.(*Handler).DoRecompress
0 0% 52.18% 427.95MB 26.75% bytes.(*Buffer).ReadFrom
0 0% 52.18% 7.87MB 0.49% github.com/gographics/imagick/imagick.(*MagickWand).GetImageBlob
0 0% 52.18% 72.01MB 4.50% main.(*ImageService).ServeHTTP
0 0% 52.18% 72.01MB 4.50% main.Action
0 0% 52.18% 72.01MB 4.50% net/http.(*Request).ParseForm
0 0% 52.18% 117.02MB 7.32% net/http.(*conn).readRequest
0 0% 52.18% 117.02MB 7.32% net/http.(*conn).serve
0 0% 52.18% 72.01MB 4.50% net/http.func·014
0 0% 52.18% 72.01MB 4.50% net/url.ParseQuery
Why is memory allocated inside Go's own bytes package not reclaimed more quickly?
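One way to start investigating that is to look at what the runtime itself reports: heap memory still in use versus memory the collector has already freed but not yet handed back to the operating system. A standalone sketch (my own, not part of the image server):

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func printMem(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%s: HeapInuse=%d KiB HeapIdle=%d KiB HeapReleased=%d KiB\n",
		label, m.HeapInuse/1024, m.HeapIdle/1024, m.HeapReleased/1024)
}

func main() {
	printMem("start")

	b := make([]byte, 64<<20) // allocate 64 MiB
	b[0] = 1
	printMem("after alloc")

	b = nil
	runtime.GC() // the slice is unreachable now and gets collected
	printMem("after GC")

	debug.FreeOSMemory() // ask the runtime to return idle pages to the OS
	printMem("after FreeOSMemory")
}

Freed heap memory is first kept by the runtime for reuse (HeapIdle) and only returned to the operating system lazily, which can make a process look as if it were still holding on to memory the garbage collector has in fact already reclaimed.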
I am still puzzled; I need to find time to revisit Go's IO internals as well.
[Golang] Memory growing in bytes.makeSlice