Topic from StackOverflow: Why is processing a sorted array faster than processing an unsorted array?
Here is a piece of C++ code that seems very peculiar. For some strange reason, sorting the data miraculously makes the code almost six times faster:
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
- Without the std::sort(data, data + arraySize); line, the code runs in 11.54 seconds.
- With the sorted data, the code runs in 1.93 seconds.
At first I thought this might be just a language or compiler anomaly, so I tried Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

It gives a similar, though less extreme, result.
The answer is: you are the victim of a branch prediction fail.

What is branch prediction?
Consider a railroad junction. Now, for the sake of argument, suppose this is back in the 1800s, before long-distance or radio communication.

You are the operator of a junction and you hear a train coming. You have no idea which way it will go. You stop the train to ask the captain which direction he wants. And then you set the switch appropriately.
Trains are heavy and have a lot of inertia, so they take forever to start up and slow down.

Is there a better way? You guess which direction the train will go!
- If you guessed right, it continues on.
- If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then the train can restart down the other path.
If you guess right every time, the train will never have to stop.
If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting.
Now consider an if-statement: at the processor level, it is a branch instruction:
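For example (a sketch of my own; the exact instructions depend on the compiler and architecture), the hot loop from the code above boils down to a compare followed by a conditional jump:

// The hot loop, isolated in a function so the branch is easy to see.
long long sumBig(const int* data, unsigned n)
{
    long long sum = 0;
    for (unsigned c = 0; c < n; ++c)
        if (data[c] >= 128)  // the branch instruction in question
            sum += data[c];
    return sum;
}

// Roughly what an x86 compiler might emit for the loop body (illustrative only):
//     cmp  eax, 128   ; compare data[c] against 128
//     jl   skip       ; conditional jump - this is what the predictor guesses
//     add  rbx, rax   ; sum += data[c]
// skip: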
You are a processor and you see a branch. You have no idea which way it will go. What do you do? You halt execution and wait until the previous instructions have completed. Then you continue down the correct path.

Modern processors are complicated and have long pipelines, so they take forever to 'warm up' and 'slow down'.

Is there a better way? You guess which direction the branch will go!
- If you guessed right, you continue executing.
- If you guessed wrong, you need to flush the pipeline and roll back to the branch. Then you can restart down the other path.
If you guess right every time, the execution will never have to stop.
If you guess wrong too often, you spend a lot of time stalling, rolling back, and restarting.
This is branch prediction. I admit it's not the best analogy since the train could just signal the direction with a flag. But in computers, the processor doesn't know which direction a branch will go until the last moment.

So how would you strategically guess to minimize the number of times that the train must back up and go down the other path? You look at the past history! If the train goes left 99% of the time, then you guess left. If it alternates, then you alternate your guesses. If it goes one way every three times, you guess the same...

In other words, you try to identify a pattern and follow it. This is more or less how branch predictors work.
Most applications have well-behaved branches, so modern branch predictors will typically achieve >90% hit rates. But when faced with unpredictable branches that have no recognizable patterns, branch predictors are virtually useless.
Further reading: "Branch Predictor" article on Wikipedia.
As hinted at above, the culprit is this if-statement:
if (data[c] >= 128) sum += data[c];
Notice that the data is evenly distributed between 0 and 255. When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement.

This is very friendly to the branch predictor since the branch consecutively goes the same direction many times. Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.
Quick Visualization:
T = branch taken
N = branch not taken

data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ...
branch = N  N  N  N  N  ...   N    N    T    T    T  ...   T    T    T  ...

       = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... TTTTTTTTTT  (easy to predict)
However, when the data is completely random, the branch predictor is rendered useless because it can't predict random data. Thus there will probably be around 50% misprediction (no better than random guessing).
data[] = 226, 185, 125, 158, 198, 144, 217, 79, 202, 118, 14, 150, 177, 182, 133, ...
branch =   T,   T,   N,   T,   T,   T,   T,  N,   T,   N,  N,   T,   T,   T,   N  ...

       = TTNTTTTNTNNTTTN ...  (completely random - hard to predict)
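To make the sorted/random difference concrete, here is a small simulation of my own (not from the original post) of a single 2-bit saturating counter, the simplest kind of predictor mentioned above, run over the branch outcomes of the loop:

#include <algorithm>
#include <cstdlib>
#include <iostream>

int main()
{
    const unsigned arraySize = 32768;
    int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // One 2-bit saturating counter: states 0..3, predict "taken" when >= 2.
    auto hitRate = [](const int* d, unsigned n) {
        int state = 0;
        unsigned hits = 0;
        for (unsigned c = 0; c < n; ++c)
        {
            bool taken = d[c] >= 128;     // actual outcome of the branch
            bool predicted = state >= 2;  // the counter's guess
            if (predicted == taken)
                ++hits;
            state = taken ? std::min(state + 1, 3)   // strengthen "taken"
                          : std::max(state - 1, 0);  // strengthen "not taken"
        }
        return 100.0 * hits / n;
    };

    std::cout << "random data hit rate: " << hitRate(data, arraySize) << "%\n";
    std::sort(data, data + arraySize);
    std::cout << "sorted data hit rate: " << hitRate(data, arraySize) << "%\n";
}

On the sorted data the counter mispredicts only a couple of times around the 127/128 boundary; on random data its hit rate hovers near 50%, i.e. chance.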
So what can be done?

If the compiler isn't able to optimize the branch into a conditional move, you can try some hacks if you're willing to sacrifice readability for performance.
Replace:
if (data[c] >= 128) sum += data[c];
With:
int t = (data[c] - 128) >> 31;
sum += ~t & data[c];
This eliminates the branch and replaces it with some bitwise operations: the arithmetic right shift copies the sign bit, so t is all ones (-1) when data[c] < 128 and all zeros otherwise, and ~t & data[c] therefore yields data[c] exactly when data[c] >= 128.
(Note that this hack is not strictly equivalent to the original if-statement, but in this case it is valid for all the input values of data[].)
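As a quick sanity check (my own sketch, not part of the original answer), you can verify the branchless expression against the if-statement over the whole range the benchmarks generate (std::rand() % 256 yields 0..255; the Java nextInt() % 256 can also yield -255..-1):

#include <iostream>

int main()
{
    // Compare the branchy and branchless forms for every value the
    // benchmarks can produce. Note the hack relies on >> performing an
    // arithmetic (sign-propagating) shift on negative ints, which is
    // why it is not strictly equivalent to the if-statement in general.
    for (int v = -255; v <= 255; ++v)
    {
        long long branchy = (v >= 128) ? v : 0;  // what the if-statement adds
        int t = (v - 128) >> 31;                 // -1 if v < 128, else 0
        long long branchless = ~t & v;           // v if v >= 128, else 0
        if (branchy != branchless)
            std::cout << "mismatch at v = " << v << '\n';
    }
    std::cout << "check complete\n";
}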
Benchmarks: Core i7 920 @ 3.5 GHz

C++ - Visual Studio 2010 - x64 Release
// Branch - Random
seconds = 11.777

// Branch - Sorted
seconds = 2.352

// Branchless - Random
seconds = 2.564

// Branchless - Sorted
seconds = 2.587
Java - NetBeans 7.1.1 JDK 7 - x64
// Branch - Random
seconds = 10.93293813

// Branch - Sorted
seconds = 5.643797077

// Branchless - Random
seconds = 3.113581453

// Branchless - Sorted
seconds = 3.186068823
Observations:
- With the Branch: There is a huge difference between the sorted and unsorted data.
- With the Hack: There is no difference between sorted and unsorted data.
- In the C++ case, the hack is actually a tad slower than the branch when the data is sorted.
A general rule of thumb is to avoid data-dependent branching in critical loops (such as in this example).
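For what it's worth (my own note, not part of the original benchmarks), compilers can often avoid the branch on their own if the update is written as a branch-free expression such as a ternary, which many optimizers lower to a conditional move (cmov on x86):

// Ternary form of the update; many optimizing compilers turn this into a
// conditional move, making the timing independent of the data order.
sum += (data[c] >= 128) ? data[c] : 0;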