Optimization is often counter-intuitive


Anybody who's done intensive optimization knows that optimization is often counter-intuitive. Things you think would be faster often aren't.

Consider, for example, the exercise of obtaining the current instruction pointer. There's the naïve solution:

 
__declspec(noinline)
void *GetCurrentAddress()
{
  return _ReturnAddress();
}

...
void *currentInstruction = GetCurrentAddress();

If you look at the disassembly, you'll get something like this:

 
GetCurrentAddress:
    mov eax, [esp]      ; the return address sits at the top of the stack
    ret

...
    call GetCurrentAddress
    mov [currentInstruction], eax

"APIs," you say to yourself, "Look at how inefficient that is. I can reduce that to two instructions. watch:

 
void *currentInstruction;
__asm {
    call L1
L1: pop currentInstruction
}

That's half the instruction count of your bloated girly-code."

But if you sit down and race the two code sequences, you'll find that the function-call version is faster by a factor of two! How can that be?
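(You can race them yourself. Here is a minimal timing harness, a sketch that assumes 32-bit MSVC, where __asm and the _ReturnAddress intrinsic are available; the loop count is arbitrary and __rdtsc returns raw cycle counts, so read the two numbers relative to each other rather than as absolutes.)

#include <stdio.h>
#include <intrin.h>

#pragma intrinsic(_ReturnAddress)

__declspec(noinline) void *GetCurrentAddress()
{
    return _ReturnAddress();
}

void *currentInstruction;   /* global, to discourage the optimizer
                               from deleting the loops */

int main()
{
    unsigned __int64 start, callRetVersion, callPopVersion;
    int i;

    /* Version 1: matched CALL and RET. */
    start = __rdtsc();
    for (i = 0; i < 1000000; i++) {
        currentInstruction = GetCurrentAddress();
    }
    callRetVersion = __rdtsc() - start;

    /* Version 2: CALL with the return address popped by hand. */
    start = __rdtsc();
    for (i = 0; i < 1000000; i++) {
        __asm {
            call L1
        L1: pop currentInstruction
        }
    }
    callPopVersion = __rdtsc() - start;

    printf("call/ret: %I64u cycles, call/pop: %I64u cycles\n",
           callRetVersion, callPopVersion);
    return 0;
}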

The reason is the "hidden variables" inside the processor. All modern processors contain much more state than you can see from the instruction sequence. There are TLBs, L1 and L2 caches, all sorts of stuff that you can't see. The hidden variable that is important here is the return address predictor.

The more recent Pentium (and I believe also Athlon) processors maintain an internal stack that is updated by each CALL and RET instruction. When a CALL is executed, the return address is pushed both onto the real stack (the one that the ESP register points to) and onto the internal return address predictor stack; a RET instruction pops the top address off the return address predictor stack as well as off the real stack.

The return address predictor stack is used when the processor decodes a RET instruction. It looks at the top of the return address predictor stack and says, "I bet that RET instruction is going to return to that address." It then speculatively executes the instructions at that address. Since programs rarely fiddle with return addresses on the stack, these predictions tend to be highly accurate.

That's why the "optimization" turns out to be slower. Let's say that at the point of the CALL L1 instruction, the return address predictor stack looks like this:

Return address
predictor stack:  caller1 -> caller2 -> caller3 -> ...
Actual stack:     caller1 -> caller2 -> caller3 -> ...

Here, caller1 is the function's caller, caller2 is the function's caller's caller, and so on. So far, the return address predictor stack is right on target. (I've drawn the actual stack below the return address predictor stack so you can see that they match.)

Now you execute the CALL instruction. The return address predictor stack and the actual stack now look like this:

Return address
predictor stack:  L1 -> caller1 -> caller2 -> caller3 -> ...
Actual stack:     L1 -> caller1 -> caller2 -> caller3 -> ...

But instead of executing a RET instruction, you pop off the return address. This removes it from the actual stack, but doesn't remove it from the return address predictor stack.

Return address
predictor stack:  L1 -> caller1 -> caller2 -> caller3 -> ...
Actual stack:     caller1 -> caller2 -> caller3 -> caller4 -> ...

I think you can see where this is going.

Eventually your function returns. The processor decodes your RET instruction, looks at the return address predictor stack, and says, "My predictor stack says that this RET is going to return to L1. I will begin speculatively executing there."

But oh no, the value on the top of the real stack isn't L1 at all. It's caller1. The processor's return address predictor predicted incorrectly, and it ended up wasting its time studying the wrong code!

The effects of this bad guess don't end there. After the RET instruction, the return address predictor stack looks like this:

Return address
predictor stack:  caller1 -> caller2 -> caller3 -> ...
Actual stack:     caller2 -> caller3 -> caller4 -> ...

Eventually your caller returns. Again, the processor consults its return address predictor stack and speculatively executes at caller1. But that's not where you're returning to. You're really returning to caller2.

And so on. By mismatching CALL and RET instructions, you managed to cause every single return address prediction on the stack to be wrong. Notice in the diagrams that, in the absence of somebody playing games with the return address predictor stack of the sort that created the problem initially, not a single prediction on the return address predictor stack will be correct. None of the predicted return addresses match up with actual return addresses.
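The whole sequence of diagrams can be condensed into a toy model (purely illustrative; the real predictor stack is hidden hardware state, and the depth and the integer stand-ins for addresses here are arbitrary). CALL pushes on both stacks, RET pops both and checks the prediction, and a bare POP touches only the real stack:

#include <stdio.h>

#define STACK_DEPTH 8

/* Stand-ins for addresses: 1..3 play caller1..caller3, 100 plays L1. */
static int predictorStack[STACK_DEPTH], predTop;
static int actualStack[STACK_DEPTH], actualTop;

static void call(int retAddr)
{
    predictorStack[predTop++] = retAddr;  /* CALL pushes on both stacks */
    actualStack[actualTop++] = retAddr;
}

static void pop(void)
{
    actualTop--;                          /* a bare POP: real stack only */
}

static void ret(void)
{
    int predicted = predictorStack[--predTop];  /* RET pops both stacks */
    int actual = actualStack[--actualTop];
    printf("RET: predicted %d, actual %d -> %s\n",
           predicted, actual, predicted == actual ? "hit" : "MISPREDICT");
}

int main(void)
{
    call(3); call(2); call(1);  /* three frames deep: caller3..caller1 */

    call(100);                  /* CALL L1: L1 lands on both stacks */
    pop();                      /* pop the return address by hand */

    ret(); ret(); ret();        /* unwind */
    return 0;
}

Run it and every RET reports a mispredict, each prediction one frame behind the actual return address, just as in the diagrams.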

Your peephole optimization has proven to be shortsighted.
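(If you really want the address without a separately defined function, the fix is not to drop the RET but to keep the pair matched. A sketch, again assuming 32-bit MSVC __asm; it is just GetCurrentAddress folded into the same block:)

void *currentInstruction;
__asm {
        jmp  short around
helper: mov  eax, [esp]     ; the CALL below pushed its return address here
        ret                 ; matched RET keeps the predictor stack in sync
around: call helper         ; matched CALL
        mov  currentInstruction, eax
}

Of course, this is just the original GetCurrentAddress helper by another name, which is rather the point.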

Some processors expose this predictor more explicitly. The Alpha AXP, for example, has several types of control flow instructions, all of which have the same logical effect, but which hint to the processor how it should maintain its internal predictor stack. For example, the BR instruction says, "Jump to this address, but do not push the old address onto the predictor stack." On the other hand, the JSR instruction says, "Jump to this address, and push the old address onto the predictor stack." There is also a RET instruction that says, "Jump to this address, and pop an address from the predictor stack." (There's also a fourth type that isn't used much.)

Moral of the story: Just because something looks better doesn't mean that it necessarily is better.

Published Thursday, December 16, 2004 by oldnewthing
http://blogs.msdn.com/oldnewthing/archive/2004/12/16/317157.aspx
