Optimization is often counter-intuitive
Anybody who's done intensive optimization knows that optimization is often counter-intuitive. Things you think would be faster often aren't.
Consider, for example, the exercise of obtaining the current instruction pointer. There's the naïve solution:
__declspec(noinline)
void *getcurrentaddress()
{
  return _ReturnAddress();
}

...
void *currentinstruction = getcurrentaddress();
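(If you're building with GCC or Clang rather than the Microsoft compiler, a rough equivalent is sketched below; `__builtin_return_address` is those compilers' analog of MSVC's `_ReturnAddress`, and the function name is just the one from the example above.)

```c
#include <stddef.h>

/* GCC/Clang analog of the MSVC version above: __builtin_return_address(0)
 * returns the address this function will return to, i.e. the instruction
 * just after the call site. noinline keeps the compiler from inlining the
 * function, which would leave no real call (and no return address) to read. */
__attribute__((noinline))
void *getcurrentaddress(void)
{
    return __builtin_return_address(0);
}
```

You call it the same way: `void *currentinstruction = getcurrentaddress();` gives you the address of the instruction following the call. Note that each call site is a different call instruction, so two calls from different places return different addresses.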
If you look at the disassembly, you'll get something like this:
getcurrentaddress:
    mov eax, [esp]
    ret
...
    call getcurrentaddress
    mov [currentinstruction], eax
"Pshaw," you say to yourself, "look at how inefficient that is. I can reduce that to two instructions. Watch:
void *currentinstruction;
__asm {
  call l1
l1: pop currentinstruction
}
That's half the instruction count of your bloated girly-code."
But if you sit down and race the two code sequences, you'll find that the function-call version is faster by a factor of two! How can that be?
The reason is the "hidden variables" inside the processor. All modern processors contain much more state than you can see from the instruction sequence. There are TLBs, L1 and L2 caches, all sorts of stuff that you can't see. The hidden variable that is important here is the return address predictor.
The more recent Pentium (and I believe also Athlon) processors maintain an internal stack that is updated by each CALL and RET instruction. When a CALL is executed, the return address is pushed both onto the real stack (the one that the ESP register points to) and onto the internal return address predictor stack; a RET instruction pops the top address off the return address predictor stack as well as off the real stack.

The return address predictor stack is used when the processor decodes a RET instruction. It looks at the top of the return address predictor stack and says, "I bet that RET instruction is going to return to that address." It then speculatively executes the instructions at that address. Since programs rarely fiddle with return addresses on the stack, these predictions tend to be highly accurate.
That's why the "optimization" turns out to be slower. Let's say that at the point of the CALL L1 instruction, the return address predictor stack looks like this:

Return address predictor stack:  Caller1 -> Caller2 -> Caller3 -> ...
Actual stack:                    Caller1 -> Caller2 -> Caller3 -> ...

Here, Caller1 is the function's caller, Caller2 is the function's caller's caller, and so on. So far, the return address predictor stack is right on target. (I've drawn the actual stack below the return address predictor stack so you can see that they match.)
Now you execute the CALL instruction. The return address predictor stack and the actual stack now look like this:

Return address predictor stack:  L1 -> Caller1 -> Caller2 -> Caller3 -> ...
Actual stack:                    L1 -> Caller1 -> Caller2 -> Caller3 -> ...
But instead of executing a RET instruction, you pop off the return address. This removes it from the actual stack, but doesn't remove it from the return address predictor stack.

Return address predictor stack:  L1 -> Caller1 -> Caller2 -> Caller3 -> ...
Actual stack:                    Caller1 -> Caller2 -> Caller3 -> Caller4 -> ...
I think you can see where this is going.
Eventually your function returns. The processor decodes your RET instruction, looks at the return address predictor stack, and says, "My predictor stack says that this RET is going to return to L1. I will begin speculatively executing there."

But oh no, the value on the top of the real stack isn't L1 at all. It's Caller1. The processor's return address predictor predicted incorrectly, and it ended up wasting its time studying the wrong code!
The effects of this bad guess don't end there. After the RET instruction, the return address predictor stack looks like this:

Return address predictor stack:  Caller1 -> Caller2 -> Caller3 -> ...
Actual stack:                    Caller2 -> Caller3 -> Caller4 -> ...
Eventually your caller returns. Again, the processor consults its return address predictor stack and speculatively executes at Caller1. But that's not where you're returning to. You're really returning to Caller2.
And so on. By mismatching CALL and RET instructions, you managed to cause every single return address prediction on the stack to be wrong. Notice in the diagram that, in the absence of somebody else playing games with the return address predictor stack of the sort that created the problem initially, not a single prediction on the return address predictor stack will be correct. None of the predicted return addresses match up with the actual return addresses.

Your peephole optimization has proven to be short-sighted.
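The bookkeeping behind this cascade is easy to model in a few lines of C. This is a toy simulation, not real hardware (the `Stack` type, the `do_call`/`do_ret`/`trick` names, and the integer "addresses" are all made up for illustration): a CALL pushes the return address onto both stacks, a RET pops both and predicts correctly only when the two values agree, and the `call l1` / `pop` trick pushes onto both stacks but pops only the real one.

```c
#include <assert.h>

/* Toy model of the return address predictor (illustration only; real
 * hardware is more subtle). "Addresses" are just integers. */
enum { DEPTH = 16 };
typedef struct { int data[DEPTH]; int top; } Stack;

static void push(Stack *s, int addr) { s->data[s->top++] = addr; }
static int  pop (Stack *s)           { return s->data[--s->top]; }

static Stack real_stack, predictor_stack;

/* CALL: push the return address onto both stacks. */
static void do_call(int return_addr)
{
    push(&real_stack, return_addr);
    push(&predictor_stack, return_addr);
}

/* RET: pop both stacks; the prediction is right only if they agree. */
static int do_ret(void)
{
    int predicted = pop(&predictor_stack);
    int actual    = pop(&real_stack);
    return predicted == actual;
}

/* The "call l1 / pop" trick: the call pushes onto both stacks, but the
 * pop removes the address from the real stack only. */
static void trick(void)
{
    do_call(999);
    pop(&real_stack);
}
```

With balanced calls, every `do_ret()` reports a hit; run `trick()` once between the calls and the returns, and the two stacks are skewed by one entry, so every subsequent `do_ret()` reports a miss, exactly as in the diagrams above.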
Some processors expose this predictor more explicitly. The Alpha AXP, for example, has several types of control flow instructions, all of which have the same logical effect but which hint to the processor how it should maintain its internal predictor stack. For example, the BR instruction says, "Jump to this address, but do not push the old address onto the predictor stack." On the other hand, the JSR instruction says, "Jump to this address, and push the old address onto the predictor stack." There is also a RET instruction that says, "Jump to this address, and pop an address from the predictor stack." (There's also a fourth type that isn't used much.)
Moral of the story: just because something looks better doesn't mean that it necessarily is better.
Published Thursday, December 16, 2004, by oldnewthing
http://blogs.msdn.com/oldnewthing/archive/2004/12/16/317157.aspx