DRAM Memory Introduction (III)


Reference: http://www.anandtech.com/show/3851/everything-you-always-wanted-to-know-about-sdram-memory-but-were-afraid-to-ask/7

First, consider a 4-bit counter with adjustable thresholds (figure). When Count is greater than High Threshold, Algorithm A is deemed appropriate; the same is true of Algorithm B when Count is less than Low Threshold.

For the range between High Threshold and Low Threshold, either algorithm may be in effect. This is because a switch from Algorithm B to Algorithm A occurs only when Count is greater than High Threshold and increasing, and a switch from Algorithm A to Algorithm B occurs only when Count is less than Low Threshold and decreasing.

The overlap range is also the target range, as the system naturally attempts to maintain Count between these two points. This is true since Algorithm A tends to lower Count while Algorithm B tends to raise it. The arrangement acts to reduce or eliminate rapid thrashing between algorithms.
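To make the dual-threshold behavior concrete, here is a minimal sketch of the hysteresis described above. Everything in it (the Python form, the names, and the example threshold values) is our own illustration, not Intel's implementation:

```python
# Hypothetical sketch of the dual-threshold (hysteresis) selector described above.
# Names, structure, and threshold values are illustrative assumptions only.

HIGH_THRESHOLD = 10   # assumed value within the 4-bit range (0-15)
LOW_THRESHOLD = 5     # assumed value

class PolicySelector:
    def __init__(self, start_count=8):
        self.count = start_count     # 4-bit Count, starting inside the dead band
        self.algorithm = "B"         # assume we start with the looser policy

    def update(self, new_count):
        new_count = max(0, min(15, new_count))   # clamp to 4 bits
        rising = new_count > self.count
        falling = new_count < self.count
        # Switch B -> A only when Count is above High Threshold and increasing.
        if self.algorithm == "B" and new_count > HIGH_THRESHOLD and rising:
            self.algorithm = "A"
        # Switch A -> B only when Count is below Low Threshold and decreasing.
        elif self.algorithm == "A" and new_count < LOW_THRESHOLD and falling:
            self.algorithm = "B"
        self.count = new_count
        return self.algorithm
```

Between the two thresholds neither switch condition fires, so whichever algorithm is currently active simply stays in effect; that overlap is the dead band that keeps the controller from thrashing.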

Figure 12. Another way of looking at this: if the MSB of Count is a 1, then the page close policy is too loose

Next, define a truth table (figure) specifying how Count should vary. By doing so we can encode a feedback mechanism into our system. Successful predictions by the Adaptive Page Close logic, whether a prevented page-miss access (good) in response to a decision to close a page or a facilitated page-hit access (good) in response to a decision to leave a page open, suggest no change to policy is required, and so Count is never modified for these outcomes.

For a facilitated page-miss access (bad) due to a poor decision to leave a page open, increment Count. If Count were to trend upward we could conceivably conclude that the current policy is most often wrong, and not only that, it tends to leave pages open far too long while "fishing" for page-hit operations. In other words, the current algorithm isn't closing pages aggressively enough.

For a prevented page-hit access (bad) due to a poor decision to close a page early, decrement Count. If Count were to trend downward we would suspect the opposite: the algorithm is closing pages too aggressively and leaving potential page-hits on the cutting room floor.
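Written out as code, the truth table reduces to four cases: do nothing when the prediction was right, and nudge Count when it was wrong. Again, this is only a sketch of the rules described above, with illustrative names rather than Intel's actual logic:

```python
# Sketch of the Count feedback rules: reward nothing, punish mistakes.
# "decision_was_close" is what the page close policy chose; "next_access_same_row"
# is whether the following access targeted the row that was (or had been) open.

def update_count(count, decision_was_close, next_access_same_row):
    if decision_was_close and not next_access_same_row:
        return count                 # prevented page-miss (good): no change
    if not decision_was_close and next_access_same_row:
        return count                 # facilitated page-hit (good): no change
    if not decision_was_close and not next_access_same_row:
        return min(count + 1, 15)    # facilitated page-miss (bad): increment
    # decision_was_close and next_access_same_row:
    return max(count - 1, 0)         # prevented page-hit (bad): decrement
```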

Figure 13. The policy is controlling just right when we reduce the number of page-miss operations and increase the number of page-hit operations

As best we can tell, this construct represents reality for APM technology. Although we would like to believe the system has more than two gears (algorithms), our model perfectly explains the existing control registers in both type and number.

Looking ahead, you'll see that Max Page Close Limit and Min Page Close Limit are the specified High and Low Threshold values, respectively. Setting a larger difference increases the size of the feedback dead band, slowing the rate at which the system responds to its own evaluative efforts. Mistake Counter is represented by the starting Count and should be set somewhere near the middle of the dead band.

Adaptive Timeout Counter sets the assertion time of any decision to keep a page open (i.e. how long the decision to keep a page open stands before we give up hope of a page-hit access). Repeated access to the same page resets this counter each time, as long as the remaining lifetime is non-zero. Lower values result in a more aggressive page close policy, and vice versa for higher values.
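A rough sketch of how such a timeout might behave follows; the structure and the ADAPTIVE_TIMEOUT value are assumptions for illustration only, since Intel does not document the real units:

```python
# Illustrative per-open-page timeout, reloaded on every hit while it is still alive.
# ADAPTIVE_TIMEOUT is an assumed register value; real units/encoding are unspecified.

ADAPTIVE_TIMEOUT = 8

class OpenPage:
    def __init__(self, row):
        self.row = row
        self.lifetime = ADAPTIVE_TIMEOUT     # the "keep open" decision starts here

    def tick(self):
        """Called once per interval; the page closes when hope of a hit runs out."""
        if self.lifetime > 0:
            self.lifetime -= 1
        return self.lifetime > 0             # False -> issue precharge (close page)

    def access(self, row):
        if row == self.row and self.lifetime > 0:
            self.lifetime = ADAPTIVE_TIMEOUT  # page-hit: reset the countdown
            return "page-hit"
        return "page-miss-or-page-empty"
```

A smaller timeout value runs the countdown out sooner, which is exactly the more aggressive page close behavior described above.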

Request Rate, we believe, controls how often Count (Mistake Counter) is updated, and therefore how smoothly the system adapts to quickly changing workloads. There must be a good reason not to flippantly set this update rate as low as possible. Perhaps doing so depletes hardware resources needed for other operations, or maybe higher duty cycles disproportionately raise power consumption. Whatever the reason, there's more than a fair chance you can hurt performance if you're just spit-balling with this setting.

Here at AnandTech we decided to go the extra mile for you, our loyal readers. A few weeks back we approached ASUS USA tech support with a request to set up a technical consultation with their firmware engineering department. After passing along our request, what came out of the meeting was a special beta BIOS that added a number of previously unavailable memory tuning registers once excluded from direct user control.

In the interest of full disclosure, we did request the same help from EVGA, and although they were willing to back our play, technical difficulties prevented them from delivering everything we had originally hoped for.

Seen below, these new registers are: Adaptive Page Closing, Adaptive Timeout Counter, Request Counter, Max Page Close Limit, Min Page Close Limit, and Mistake Counter. As suspected, the first setting is used to enable or disable the feature entirely. Interestingly enough, Intel chose not to enable this feature by default, so we leave it up to you.



You won't have full resolution when working with these settings, but then again, you won't need it anyway

A short description of each register is shown below (taken from the Intel Core i7-900 Desktop Processor Extreme Edition Series and Intel Core i7-900 Desktop Processor Series Datasheet, Volume 2, dated October 2009). Be aware the source most likely contains at least one known error. In particular, Intel has provided exactly the same description for Adaptive Timeout Counter and Mistake Counter. As well, the bit count of Mistake Counter in the table does not match the value in the text, further suggesting someone goofed.

Yep, Intel owes us a correction for Mistake Counter

Once you've had time to fully digest the information above (and ponder how awesome we are), we would like to cordially invite you to do some of your own testing and report your results in our forums. AnandTech readers with a valid login can download ASUS Rampage III Extreme BIOS release 0878 now. We haven't really had a chance to do any significant experimenting with what little spare time we have, and we need your help exploring uncharted territory...

We hope you've enjoyed reading this article as much as we've enjoyed putting it together. If you took the time to thoroughly peruse and digest the information within, the intricacies of basic memory operation should no longer be such a baffling subject. With the groundwork out of the way, we have a platform from which to build as we begin exploring other avenues for increasing memory performance more closely. We've already identified additional topics worth discussing and, provided the time shows up on the books, plan to bring you more.

Presumably, one big question may remain: what are the real-world benefits of memory tuning? Technically, we covered the subject in depth last year in a previous article. We suggest you read through it once again for a refresher before you embark on any overclocking journeys (or before rushing out to over-spend on memory kits). Everything written in that article is just as valid today. We've run tests here on our Gulftown samples and found exactly the same behavior. Undoubtedly, Intel has taken steps to ensure their architectures aren't prematurely bottlenecked, giving the memory controller a big, fat bus for communicating with the DIMMs.

ASUS Rampage III Extreme married to 12GB of sweet, sweet DDR3 goodness

From what we can tell, the next generation of performance processors from Intel is going to move over to a 256-bit wide (quad-channel) memory controller, leaving little need for ultra-high frequency memory kits. Thus we reiterate something many have said before: a top priority when it comes to improving memory ICs and their respective architectures should be to focus development on reducing absolute minimum latency requirements for timings such as CAS and tRCD, rather than chasing raw synthetic bandwidth figures or setting outright frequency records at the expense of unduly high random access times.

Stepping away from the performance segment for a moment, something else that has come to light is rumored news that Intel's Sandy Bridge architecture (due Q1) will, by design, limit reference clock driven overclocking on mainstream parts to 5% past stock operating frequency. If this is indeed the case, the consequence would be a very restricted ability to control memory bus frequency: limited granularity to tune the first 50~70 MHz past each step, followed by a mandatory minimum jump of 200MHz to the next operating level. Accessing hidden potential would be even more difficult, especially for users of mainstream memory kits. While there is no downside to this from a processing perspective (hey, more speed is always better), it could be another serious nail in the coffin of an already waning overclocking memory industry.
