Without quite noticing it, I have subscribed to more than 100 feeds on NewsGator Online, and the daily reading load keeps growing. While using NewsGator, I came across a very vivid analogy for how current GC (Garbage Collector) implementations work, and I'd like to share it here in the hope that it helps others understand GC better. Of course, if you don't need it (you already know how generational GC works), just treat it as a pastime.
I don't know how others use NewsGator to read their subscribed feeds. My habit is to skim quickly, add the topics that interest me but that I don't have time to read to my Clippings folder, mark the current page as read, and move on to the next page or the next folder. When I have spare time, I open My Clippings and read the saved articles and links. Like the other folders, My Clippings is sorted in descending chronological order by default: the newer the feed, the higher it appears. I usually keep the number of entries in My Clippings under 100.
Because I subscribe to a lot of feeds covering a wide range of fields, and my reading time is limited, there are always some entries I clipped but don't actually want to read in detail; when I find such a feed in My Clippings, I delete it outright. I then read and share some of the remaining feeds one by one, cleaning up My Clippings as I go. By default I read from newest to oldest, because newer entries tend to matter more. I may not browse the whole folder from start to finish every time I open My Clippings, but I can almost always find, read, and clear out some feeds. This way, My Clippings stays under 100 entries over the long run, while always retaining the feeds I'm interested in but haven't yet had time to read or share.
Think about it: isn't this the same idea and implementation behind a generational GC? When memory runs low, or when the application program requests a garbage collection, the GC first looks for garbage in the youngest generation's object pool (objects created pointlessly or by mistake, and objects that have already served their purpose, i.e., dead objects). Why does the GC assume that objects in the first generation are the most likely to be garbage, while those in the second and third generations are relatively unlikely to be? Compare this with the example above: a newly added feed that hasn't yet survived a cleanup is the one most likely to have been clipped by mistake, or already read and shared; a feed that has survived one or more cleanups without being deleted is more likely to be one still waiting for me to read and share. Translated back into objects: objects that have never survived a GC, that is, newly created first-generation objects, are the most likely to include objects created by mistake or already past their lifecycle; second-generation and older objects have survived a GC without being reclaimed (because other objects still reference them), so they are more likely to be genuinely useful. Spending limited effort only where it is really worthwhile is what makes generational GC more efficient than a traditional GC.
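To make the idea concrete, here is a minimal sketch in Java of the generational logic (all class and method names here are my own invention for illustration, not any real collector's API): new objects land in a young pool, a cheap "minor" collection drops the unreachable ones, and survivors are promoted to an old generation that is scanned far less often.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of generational collection, not a real GC.
public class GenerationalSketch {

    // A stand-in for a heap object; "reachable" models liveness.
    static final class Obj {
        final int id;
        final boolean reachable;
        Obj(int id, boolean reachable) { this.id = id; this.reachable = reachable; }
    }

    private final List<Obj> young = new ArrayList<>(); // generation 0: mostly garbage
    private final List<Obj> old   = new ArrayList<>(); // survivors: probably useful

    void allocate(Obj o) { young.add(o); }

    // Minor collection: cheap, runs often, only scans the young pool.
    void minorCollect() {
        for (Obj o : young) {
            if (o.reachable) old.add(o); // survived once -> promote
            // unreachable objects are simply dropped (reclaimed)
        }
        young.clear();
    }

    public static void main(String[] args) {
        GenerationalSketch heap = new GenerationalSketch();
        // Most newly created objects die young (temporaries, mistakes):
        for (int i = 0; i < 10; i++) {
            heap.allocate(new Obj(i, i % 5 == 0)); // only 1 in 5 stays reachable
        }
        heap.minorCollect();
        System.out.println("promoted to old generation: " + heap.old.size());  // 2
        System.out.println("young pool after minor GC:  " + heap.young.size()); // 0
    }
}
```

The economy is the same as with My Clippings: the frequently scanned pool stays small, and the cost of examining old, probably-useful objects is paid only rarely.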
Dawei
Source: http://www.blogjava.net/sean/archive/2006/04/27/43674.html