Richard L. Hudson (Rick) is best known for his work in memory management, including the invention of the Train, Sapphire, and Mississippi Delta algorithms, as well as GC stack maps, which enabled garbage collection in statically typed languages like Java, C#, and Go. He has published papers on language runtimes, memory management, concurrency, synchronization, memory models, and transactional memory. Rick is a member of Google's Go team, where he works on Go's GC and runtime issues.

In economics, there is a concept called a virtuous cycle: a positive feedback loop between processes that feed into one another. Traditionally in tech, there has been a virtuous cycle between software and hardware development. CPU hardware improves, which enables faster software to be written, which in turn drives further improvements in CPU speed and compute power. This cycle was healthy until around 2004, when Moore's law started to end.
These days, 2x transistors != 2x faster programs. More transistors now means more cores, but software has not evolved to fully utilize them. Because software today isn't able to adequately put multiple cores to work, the hardware makers are not going to keep putting more cores in. The cycle is sputtering.
A long-term goal of Go is to reboot this virtuous cycle by enabling more concurrent, parallel programs. In the shorter term, the team needs to increase Go adoption, and one of the biggest complaints about the Go runtime is that GC pauses are too long. When the team initially took on this problem, Rick jokes that their first engineering instinct was not to actually solve it, but to look for workarounds like:
- Adding an eye tracker to the computer and only collecting when no one's looking
- Popping up a network wait icon during GC and blaming the pause on network latency or something else
But Russ Cox shot these ideas down for some reason, so they decided to roll up their sleeves and actually try to improve the Go GC. The algorithm they developed trades program execution throughput for reduced GC latency: Go programs get a little bit slower in exchange for lower GC pauses.
How can we make latency tangible? Some reference points, in milliseconds:

- 1 ms: read 1 MB sequentially from SSD
- 20 ms: read 1 MB from spinning disk
- 50 ms: perceptual causality (eye/cursor response threshold)
- 50+ ms: various network delays
- 300 ms: eye blink
So how much GC can we do in a millisecond?

Java GC vs. Go GC
Go:
- Thousands of goroutines
- Synchronization via channels
- Runtime written in Go, leveraging Go the same way users do
- Control of spatial locality (structs can be embedded; interior pointers like &foo.field)
Java:
The biggest difference is spatial locality. In Java, everything is a pointer, whereas Go lets you embed structs within one another. Following pointers many layers deep causes a lot of trouble for a garbage collector.
GC Basics

Here's a quick primer on garbage collectors. They typically involve two phases:
- Scan phase: determine which things in the heap are reachable. This involves starting from the pointers in stacks, registers, and global variables, and following pointers into the heap.
- Mark phase: walk the pointer graph, marking objects as reachable as you go. From the GC's point of view, it's simplest to stop the world so that pointers aren't changing while the mark phase is happening. Truly concurrent GC is difficult because pointers are continually changing; the program uses something called a write barrier to tell the GC that it should not collect an object. In practice, write barriers can be more expensive than stop-the-world pauses.
Go GC
The Go GC algorithm uses a combination of write barriers and short stop-the-world pauses. (Figures in the talk compared the GC phases in Go 1.4 with those in Go 1.5; note the much shorter stop-the-world pauses in 1.5.) During concurrent GC, the collector uses 25% of the CPU.

On to the benchmarks. In previous versions of Go, GC pauses were in general much longer, and they grew as the heap size grew. In Go 1.5, GC pauses are more than an order of magnitude shorter. Zooming in, there is still a slight positive correlation between heap size and GC pause times, but they know what the issue is and it will be fixed in Go 1.6. There is a slight throughput penalty with the new GC algorithm, and that penalty shrinks as the heap size grows.
Moving forward

Tell people that GC is no longer an issue, thanks to Go's low-latency GC. Moving forward, the team plans to tune for even lower latency, higher throughput, and more predictability; they want to find the sweet spot among these tradeoffs. Development work for Go 1.6 will be use-case and feedback driven, so let them know what you need. The new low-latency GC makes Go an even more viable replacement for manual-memory-management languages like C.

Q & A

Q: Any plans for heap compaction?
A: Our approach has been to adopt the techniques that have served the C language community well, which is to avoid fragmentation to begin with by storing objects of the same size in the same memory span.