The Garbage Collection Handbook: The Art of Automatic Memory Management

Source: Internet
Author: User

2016-03-18 Computer

Introduction

Synopsis

This book is a milestone in the field of automatic memory management. It brings together the best practices accumulated over more than 50 years of research in this field, including the most important contemporary garbage collection strategies and techniques.

Almost all modern programming languages use a garbage collection mechanism, so an in-depth look at this material can benefit every developer. This authoritative handbook provides expert answers on how garbage collectors work and on the various issues that current garbage collectors face. Having mastered this knowledge, developers can approach the many different garbage collectors and their tuning options with greater confidence.
The book comprises 19 chapters. Chapter 1 discusses why automatic memory management is needed and briefly introduces ways of comparing different garbage collection strategies. Chapters 2 to 5 describe the four classical garbage collection algorithms: mark-sweep, mark-compact, copying, and reference counting. Chapter 6 compares the collection strategies and algorithms introduced in Chapters 2 to 5 in depth. Chapter 7 introduces a variety of memory allocation techniques and further explores the differences between automatic garbage collection and explicit memory management in allocation. Chapter 8 discusses why the heap should be divided into multiple spaces and how to manage them. Chapter 9 introduces generational garbage collection, and Chapter 10 covers the management of large objects and other partitioning schemes. Chapter 11 introduces the run-time interface, including pointer finding, code locations where garbage collection can safely begin, and read and write barriers. Chapter 12 discusses language-specific concerns, including finalisation mechanisms and weak references. Chapter 13 discusses the opportunities and challenges that modern hardware presents to garbage collectors, introducing the related notions of synchronisation, progress, termination, and consistency. Chapter 14 describes how to garbage collect with multiple threads while all application threads are suspended. Chapters 15 to 18 introduce several kinds of concurrent collectors. Chapter 19 discusses garbage collection in hard real-time systems.
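To give a flavour of the simplest of the classical algorithms covered in Chapters 2 to 5, here is a toy mark-sweep sketch. It is not taken from the book; the Obj and Heap classes and all names are invented for illustration. The collector traces the heap from the roots, marks every reachable object, and sweeps away the rest; unlike naive reference counting, this also reclaims cyclic garbage.

```python
# Toy mark-sweep collector over an explicit object graph.
# Illustrative only: Obj, Heap and all names are invented for this sketch.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []       # outgoing references (the object's pointer fields)
        self.marked = False

class Heap:
    def __init__(self):
        self.objects = []    # every allocated object
        self.roots = []      # mutator roots (stacks, globals, registers)

    def new(self, name):
        o = Obj(name)
        self.objects.append(o)
        return o

    def collect(self):
        # Mark phase: trace everything reachable from the roots.
        stack = list(self.roots)
        while stack:
            o = stack.pop()
            if not o.marked:
                o.marked = True
                stack.extend(o.refs)
        # Sweep phase: keep marked objects, reclaim the rest, reset marks.
        live = [o for o in self.objects if o.marked]
        for o in live:
            o.marked = False
        self.objects = live
        return live

heap = Heap()
a, b, c = heap.new("a"), heap.new("b"), heap.new("c")
heap.roots.append(a)
a.refs.append(b)      # b is reachable via a
c.refs.append(c)      # c is a self-cycle: garbage that reference counting alone misses
survivors = heap.collect()
print(sorted(o.name for o in survivors))   # ['a', 'b']
```

A real collector would of course work over raw memory and cooperate with the allocator and the mutator, which is exactly what the later chapters of the book develop.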

Table of Contents


The Garbage Collection Handbook: The Art of Automatic Memory Management

Publisher's Note

Translator's Preface

Preface

About the author

Chapter 1 Introduction 1

1.1 Explicit deallocation 1

1.2 Automatic dynamic memory management 3

1.3 Comparing garbage collection algorithms 5

1.3.1 Safety 5

1.3.2 Throughput 5

1.3.3 Completeness and promptness 5

1.3.4 Pause time 6

1.3.5 Space overhead 7

1.3.6 Optimisations for specific languages 7

1.3.7 Scalability and portability 8

1.4 A performance disadvantage? 8

1.5 Experimental methodology 8

1.6 Terminology and notation 10

1.6.1 The heap 10

1.6.2 The mutator and the collector 11

1.6.3 The mutator roots 11

1.6.4 References, fields and addresses 11

1.6.5 Liveness, correctness and reachability 12

1.6.6 Pseudo-code 12

1.6.7 The allocator 13

1.6.8 Mutator read and write operations 13

1.6.9 Atomic operations 13

1.6.10 Sets, multisets, sequences and tuples

Chapter 2 Mark-sweep garbage collection 15

2.1 The mark-sweep algorithm 16

2.2 The tricolour abstraction 18

2.3 Improving mark-sweep 18

2.4 Bitmap marking 19

2.5 Lazy sweeping 21

2.6 Cache misses in the marking loop 24

2.7 Issues to consider 25

2.7.1 Mutator overhead 25

2.7.2 Throughput 26

2.7.3 Space usage 26

2.7.4 To move or not to move 26

Chapter 3 Mark-compact garbage collection 28

3.1 Two-finger compaction 29

3.2 The Lisp 2 algorithm 30

3.3 Threaded compaction 32

3.4 One-pass algorithms 34

3.5 Issues to consider 36

3.5.1 Is compaction necessary? 36

3.5.2 Throughput costs of compaction 36

3.5.3 Long-lived data 36

3.5.4 Locality 37

3.5.5 Limitations of mark-compact algorithms 37

Chapter 4 Copying garbage collection 38

4.1 Semispace copying collection 38

4.1.1 Work list implementations 39

4.1.2 An example 40

4.2 Traversal order and locality 42

4.3 Issues to consider 46

4.3.1 Allocation 46

4.3.2 Space and locality 47

4.3.3 Moving objects 48

Chapter 5 Reference counting 49

5.1 Advantages and disadvantages of reference counting 50

5.2 Improving efficiency 51

5.3 Deferred reference counting 52

5.4 Coalesced reference counting 54

5.5 Cyclic reference counting 57

5.6 Limited-field reference counting 61

5.7 Issues to consider 62

5.7.1 The environment 62

5.7.2 Advanced solutions 62

Chapter 6 Comparing garbage collectors 64

6.1 Throughput 64

6.2 Pause time 65

6.3 Space 65

6.4 Implementation 66

6.5 Adaptive systems 66

6.6 A unified theory of garbage collection 67

6.6.1 Abstract garbage collection 67

6.6.2 Tracing garbage collection 67

6.6.3 Reference counting garbage collection 69

Chapter 7 Memory allocation 72

7.1 Sequential allocation 72

7.2 Free-list allocation 73

7.2.1 First-fit allocation 73

7.2.2 Next-fit allocation 75

7.2.3 Best-fit allocation 75

7.2.4 Speeding free-list allocation 76

7.3 Fragmentation 77

7.4 Segregated-fits allocation 78

7.4.1 Fragmentation 79

7.4.2 Populating size classes 79

7.5 Combining segregated-fits with simple free-list allocation 81

7.6 Additional considerations 81

7.6.1 Alignment 81

7.6.2 Size constraints 82

7.6.3 Boundary tags 82

7.6.4 Heap parsability 82

7.6.5 Locality 84

7.6.6 Wilderness preservation 84

7.6.7 Crossing maps 85

7.7 Allocation in concurrent systems 85

7.8 Issues to consider 86

Chapter 8 Partitioning the heap 87

8.1 Terminology 87

8.2 Why to partition 87

8.2.1 Partitioning by mobility 87

8.2.2 Partitioning by size 88

8.2.3 Partitioning for space 88

8.2.4 Partitioning by kind 89

8.2.5 Partitioning for yield 89

8.2.6 Partitioning to reduce pause time 90

8.2.7 Partitioning for locality 90

8.2.8 Partitioning by thread 90

8.2.9 Partitioning by availability 91

8.2.10 Partitioning by mutability 91

8.3 How to partition 92

8.4 When to partition 93

Chapter 9 Generational garbage collection 95

9.1 Example 95

9.2 Measuring time 96

9.3 Generational hypotheses 97

9.4 Generations and heap layout 97

9.5 Multiple generations 98

9.6 Age recording 99

9.6.1 En masse promotion 99

9.6.2 Aging semispaces 100

9.6.3 Survivor spaces and flexibility 101

9.7 Adapting to program behaviour 103

9.7.1 Appel-style garbage collection 103

9.7.2 Feedback-controlled promotion 104

9.8 Inter-generational pointers 105

9.8.1 Remembered sets 106

9.8.2 Pointer direction 106

9.9 Space management 107

9.10 Older-first garbage collection 108

9.11 The Beltway collection framework 110

9.12 Heuristics in generational garbage collection 112

9.13 Issues to consider 113

9.14 Abstract generational garbage collection 115

Chapter 10 Other partitioned schemes 117

10.1 Large object spaces 117

10.1.1 The Treadmill collector 118

10.1.2 Moving objects with operating system support 119

10.1.3 Pointer-free objects 119

10.2 Topological collectors 119

10.2.1 Mature object space garbage collection 120

10.2.2 Connectivity-based garbage collection 122

10.2.3 Thread-local garbage collection 123

10.2.4 Stack allocation 126

10.2.5 Region inferencing 127

10.3 Hybrid mark-sweep, copying collectors 128

10.3.1 Garbage-First 129

10.3.2 Immix and others 130

10.3.3 Copying collection in a constrained memory space 133

10.4 Bookmarking garbage collection 134

10.5 Ulterior reference counting 135

10.6 Issues to consider 136

Chapter 11 Run-time interface 138

11.1 Interface to allocation 138

11.1.1 Speeding allocation 141

11.1.2 Zeroing 141

11.2 Finding pointers 142

11.2.1 Conservative pointer finding 143

11.2.2 Accurate pointer finding using tagged values 144

11.2.3 Accurate pointer finding in objects 145

11.2.4 Accurate pointer finding in global roots 147

11.2.5 Accurate pointer finding in stacks and registers 147

11.2.6 Accurate pointer finding in code 157

11.2.7 Handling interior pointers 158

11.2.8 Handling derived pointers 159

11.3 Object tables 159

11.4 References from external code 160

11.5 Stack barriers 162

11.6 GC-safe points and mutator suspension 163

11.7 Garbage collecting code 165

11.8 Read and write barriers 166

11.8.1 Engineering read and write barriers 167

11.8.2 Precision of write barriers 167

11.8.3 Hash tables 169

11.8.4 Sequential store buffers 170

11.8.5 Overflow action 172

11.8.6 Card tables 172

11.8.7 Crossing maps 174

11.8.8 Summarising cards 176

11.8.9 Hardware and virtual memory techniques 176

11.8.10 Write barrier mechanisms: a summary 177

11.8.11 Chunked lists 178

11.9 Managing address space 179

11.10 Applications of virtual memory page protection 180

11.10.1 Double mapping 180

11.10.2 Applications of no-access pages 181

11.11 Choosing heap size 183

11.12 Issues to consider 185

Chapter 12 Language-specific concerns 188

12.1 Finalisation 188

12.1.1 When do finalisers run? 189

12.1.2 Which thread runs a finaliser? 190

12.1.3 Can finalisers run concurrently with each other? 190

12.1.4 Can finalisers access unreachable objects? 190

12.1.5 When are finalised objects reclaimed? 191

12.1.6 What happens if there is an error in a finaliser? 191

12.1.7 Is there any guaranteed order to finalisation? 191

12.1.8 The finalisation race problem 192

12.1.9 Finalisers and locks 193

12.1.10 Finalisation in particular languages 193

12.1.11 For further study 195

12.2 Weak references 195

12.2.1 Additional motivations 196

12.2.2 Supporting multiple pointer strengths 196

12.2.3 Using phantom objects to control finalisation order 199

12.2.4 Races in weak pointer clearing 199

12.2.5 Notification of weak pointer clearing 199

12.2.6 Weak pointers in other languages 200

12.3 Issues to consider 201

Chapter 13 Concurrency preliminaries 202

13.1 Hardware 202

13.1.1 Processors and threads 202

13.1.2 Interconnect 203

13.1.3 Memory 203

13.1.4 Caches 204

13.1.5 Cache coherence 204

13.1.6 Cache coherence performance example: spin locks 205

13.2 Hardware memory consistency 207

13.2.1 Fences and happens-before 208

13.2.2 Consistency models 209

13.3 Hardware primitives 209

13.3.1 Compare-and-swap 210

13.3.2 Load-linked/store-conditionally 211

13.3.3 Atomic arithmetic primitives 212

13.3.4 Test then test-and-set 213

13.3.5 More powerful primitives 213

13.3.6 Overheads of atomic primitives 214

13.4 Progress guarantees 215

13.5 Notation used for concurrent algorithms 217

13.6 Mutual exclusion 218

13.7 Work sharing and termination detection 219

13.8 Concurrent data structures 224

13.8.1 Concurrent stacks 226

13.8.2 Concurrent queues implemented with singly linked lists 228

13.8.3 Concurrent queues implemented with arrays 230

13.8.4 Concurrent deques for work stealing 235

13.9 Transactional memory 237

13.9.1 What is transactional memory? 237

13.9.2 Using transactional memory to help implement garbage collection 239

13.9.3 Supporting transactional memory in the presence of garbage collection 240

13.10 Issues to consider 241

Chapter 14 Parallel garbage collection 242

14.1 Is there sufficient work to parallelise? 243

14.2 Load balancing 243

14.3 Synchronisation 245

14.4 A taxonomy of parallel collection 245

14.5 Parallel marking 246

14.6 Parallel copying 254

14.6.1 Processor-centric parallel copying techniques 254

14.6.2 Memory-centric parallel copying techniques 258

14.7 Parallel sweeping 263

14.8 Parallel compaction 264

14.9 Issues to consider 267

14.9.1 Terminology 267

14.9.2 Is parallel collection worthwhile? 267

14.9.3 Strategies for balancing loads 267

14.9.4 Managing tracing 268

14.9.5 Low-level synchronisation 269

14.9.6 Parallel sweeping and compaction 270

14.9.7 Termination detection 270

Chapter 15 Concurrent garbage collection 271

15.1 Correctness of concurrent collection 272

15.1.1 The tricolour abstraction, revisited 273

15.1.2 The lost object problem 274

15.1.3 The strong and weak tricolour invariants 275

15.1.4 Precision 276

15.1.5 Mutator colour 276

15.1.6 Allocation colour 276

15.1.7 Incremental update solutions 277

15.1.8 Snapshot-at-the-beginning solutions 277

15.2 Barrier techniques for concurrent collection 277

15.2.1 Grey mutator barrier techniques 278

15.2.2 Black mutator barrier techniques 279

15.2.3 Completeness of barrier techniques 280

15.2.4 Concurrent write barrier mechanisms 281

15.2.5 One-level card tables 282

15.2.6 Two-level card tables 282

15.2.7 Reducing work 282

15.3 Issues to consider 283

Chapter 16 Concurrent mark-sweep 285

16.1 Initialisation 285

16.2 Termination 287

16.3 Allocation 287

16.4 Concurrent marking and sweeping 288

16.5 On-the-fly marking 289

16.5.1 Write barriers for on-the-fly collection 290

16.5.2 The Doligez-Leroy-Gonthier collector 290

16.5.3 Doligez-Leroy-Gonthier for Java 292

16.5.4 Sliding views 292

16.6 Abstract concurrent collection 293

16.6.1 The collection wavefront 294

16.6.2 Adding origins 295

16.6.3 Mutator barriers 295

16.6.4 Precision 295

16.6.5 Instantiating the abstract concurrent collector 296

16.7 Issues to consider 296

Chapter 17 Concurrent copying and compaction 298

17.1 Mostly-concurrent copying: Baker's algorithm 298

17.2 Brooks's indirection barriers 301

17.3 Self-erasing read barriers 301

17.4 Replication copying 302

17.5 Multi-version copying 303

17.6 The Sapphire collector 306

17.6.1 Collector phases 306

17.6.2 Merging phases 311

17.6.3 Volatile fields 312

17.7 Concurrent compaction 312

17.7.1 The Compressor collector 312

17.7.2 The Pauseless collector 315

17.8 Issues to consider 321

Chapter 18 Concurrent reference counting 322

18.1 Simple reference counting revisited 322

18.2 Buffered reference counting 324

18.3 Concurrent, cyclic reference counting 326

18.4 Taking a snapshot of the heap 326

18.5 Sliding views reference counting 328

18.5.1 Age-oriented collection 328

18.5.2 The algorithm 328

18.5.3 Sliding views cycle reclamation 331

18.5.4 Memory consistency 331

18.6 Issues to consider 332

Chapter 19 Real-time garbage collection 333

19.1 Real-time systems 333

19.2 Scheduling real-time collection 334

19.3 Work-based real-time collection 335

19.3.1 Parallel, concurrent replication 335

19.3.2 Uneven work and its impact on work-based scheduling 341

19.4 Slack-based real-time collection 342

19.4.1 Scheduling the collector work 346

19.4.2 Execution overheads 346

19.4.3 Programmer input 347

19.5 Time-based real-time collection: the Metronome collector 347

19.5.1 Mutator utilisation 348

19.5.2 Supporting predictability 349

19.5.3 Analysis of the Metronome collector 351

19.5.4 Robustness 355

19.6 Combining scheduling approaches: Tax-and-Spend 355

19.6.1 Tax-and-Spend scheduling 356

19.6.2 Implementing Tax-and-Spend 357

19.7 Controlling fragmentation 359

19.7.1 Incremental compaction in the Metronome collector 360

19.7.2 Incremental replication on uniprocessors 361

19.7.3 The Stopless collector: lock-free garbage collection 361

19.7.4 The Staccato collector: best-effort compaction with mutator wait-freedom 363

19.7.5 The Chicken collector: best-effort compaction with mutator wait-freedom (x86) 365

19.7.6 The Clover collector: guaranteed compaction with probabilistic mutator lock-freedom 366

19.7.7 Comparing the Stopless, Chicken and Clover collectors 367

19.7.8 Fragmented allocation 368

19.8 Issues to consider 370

Glossary 372

References 383

Index 413

