Here is a piece of C++ code that seems very peculiar. For some strange reason, sorting the data miraculously makes the code almost six times faster:

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

Initially, I thought this might be just a language or compiler anomaly. So I tried it in Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

The result was somewhat similar, but less extreme.


My first thought was that sorting brings the data into the cache, but my next thought was how silly that is, because the array was just generated.

  • What is going on?
  • Why is a sorted array faster than an unsorted array?
  • The code is summing up some independent terms, and the order should not matter.
What architecture did you run on? Did you compile with good optimization settings? I just tried your code, with and without the sort (the C++ variant), and did not find any runtime difference. Having a look at the assembler output (gcc.godbolt.org is handy for that), I could also see that there is no branch for the if; a cmovge is used instead. With -O2 I do see a difference in speed, but not with -O3... –  PlasmaHH Jun 27 '12 at 14:10
@GManNickG: I did investigate a bit further, and things are "funny". With -O3, both versions (sort/no sort) run at the same speed (4.5), but with -O2 they differ (3.1/15.7), so I looked at the -O2 version. There is a branch, so gcc seems to optimize for "random data" here. To further test whether it is branch prediction, I tested the -O2 code not with sort, but in the creation phase I set/removed the top bit of the byte for one half, but not the other. The result is the same, so it really has nothing to do with the data being sorted, but with the if condition being true/false for one half. –  PlasmaHH Jun 27 '12 at 14:16
Just to add more fun, on my CPU, when alternating the bits in the input, the branch predictor seems to be able to recognize the pattern. The same for some other alternating bit patterns. –  PlasmaHH Jun 27 '12 at 14:37
@JustinDanielson: I highly doubt that is the case. Even if you knew what branch prediction was, you might not realize right away that it is the cause here (although I admit you might be able to figure it out yourself). However, I do think this question can benefit future readers, so even if the OP knew the answer, it is a valid question. 12,040 views in 9 hours (jeebus!) –  Jesse Good Jun 27 '12 at 23:48
@JustinDanielson: I knew what branch prediction was, but I've never seen it make such a difference (it's just a background helper, after all) so it never came to mind. Most of my rep is from knowing the C++ language, not from knowing hardware. Sorry I'm fallible. –  GManNickG Jun 28 '12 at 0:29


9 Answers


You are the victim of branch prediction fail.


What is Branch Prediction?

Consider a railroad junction:

Image by Mecanismo, from Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Entroncamento_do_Transpraia.JPG

Now for the sake of argument, suppose this is back in the 1800s - before long distance or radio communication.

You are the operator of a junction and you hear a train coming. You have no idea which way it will go. You stop the train to ask the captain which direction he wants. And then you set the switch appropriately.

Trains are heavy and have a lot of inertia. So they take forever to start up and slow down.

Is there a better way? You guess which direction the train will go!

  • If you guessed right, it continues on.
  • If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then it can restart down the other path.

If you guess right every time, the train will never have to stop.
If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting.


Consider an if-statement: at the processor level, it is a branch instruction.


You are a processor and you see a branch. You have no idea which way it will go. What do you do? You halt execution and wait until the previous instructions are complete. Then you continue down the correct path.

Modern processors are complicated and have long pipelines. So they take forever to "warm up" and "slow down".

Is there a better way? You guess which direction the branch will go!

  • If you guessed right, you continue executing.
  • If you guessed wrong, you need to flush the pipeline and roll back to the branch. Then you can restart down the other path.

If you guess right every time, the execution will never have to stop.
If you guess wrong too often, you spend a lot of time stalling, rolling back, and restarting.


This is branch prediction. I admit it's not the best analogy since the train could just signal the direction with a flag. But in computers, the processor doesn't know which direction a branch will go until the last moment.

So how would you strategically guess to minimize the number of times that the train must back up and go down the other path? You look at the past history! If the train goes left 99% of the time, then you guess left. If it alternates, then you alternate your guesses. If it goes one way every third time, you guess accordingly...

In other words, you try to identify a pattern and follow it. This is more or less how branch predictors work.

Most applications have well-behaved branches. So modern branch predictors will typically achieve >90% hit rates. But when faced with unpredictable branches with no recognizable patterns, branch predictors are virtually useless.
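
To make this concrete, here is a toy simulation of a two-bit saturating counter, one of the simplest prediction schemes. This is only an illustrative sketch (real predictors track per-branch history and are far more sophisticated), but it shows why long runs of the same outcome are easy and random outcomes are hopeless:

#include <cstdlib>
#include <iostream>

// Two-bit saturating counter: states 0..3, predict "taken" when state >= 2.
struct TwoBitCounter
{
    int state = 2;                          // start at "weakly taken"
    bool predict() const { return state >= 2; }
    void update(bool taken)
    {
        if (taken)  { if (state < 3) ++state; }
        else        { if (state > 0) --state; }
    }
};

int main()
{
    const int n = 32768;
    TwoBitCounter predictor;
    int hits = 0;

    for (int c = 0; c < n; ++c)
    {
        int value = std::rand() % 256;      // random data; try "c % 256" for long same-direction runs
        bool taken = (value >= 128);        // the outcome of "if (data[c] >= 128)"
        if (predictor.predict() == taken)
            ++hits;
        predictor.update(taken);
    }

    std::cout << "hit rate = " << 100.0 * hits / n << "%" << std::endl;
}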

Further reading: the "Branch predictor" article on Wikipedia.


As hinted at above, the culprit is this if-statement:

if (data[c] >= 128)
    sum += data[c];

Notice that the data is evenly distributed between 0 and 255. When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement.

This is very friendly to the branch predictor since the branch consecutively goes the same direction many times. Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.

Quick visualization:

T = branch taken
N = branch not taken

data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ...
branch = N  N  N  N  N  ...   N    N    T    T    T  ...   T    T    T  ...

       = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... TTTTTTTTTT  (easy to predict)

However, when the data is completely random, the branch predictor is rendered useless, because it can't predict random data. Thus there will probably be around 50% misprediction (no better than random guessing).

data[] = 226, 185, 125, 158, 198, 144, 217, 79, 202, 118,  14, 150, 177, 182, 133, ...
branch =   T,   T,   N,   T,   T,   T,   T,  N,   T,   N,   N,   T,   T,   T,   N  ...

       = TTNTTTTNTNNTTTN ...   (completely random - hard to predict)

So what can be done?

If the compiler isn't able to optimize the branch into a conditional move, you can try some hacks if you are willing to sacrifice readability for performance.

Replace:

if (data[c] >= 128)
    sum += data[c];

with:

int t = (data[c] - 128) >> 31;
sum += ~t & data[c];

This eliminates the branch and replaces it with some bitwise operations.

(Note that this hack is not strictly equivalent to the original if-statement. But in this case, it's valid for all the input values of data[].)
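
To see why the hack works, trace two values through it. (This assumes a 32-bit int and an arithmetic right shift of negative values, which is what mainstream compilers do, although the language standard historically left that implementation-defined.)

// data[c] = 200:  t = (200 - 128) >> 31 =  0   ->  ~t = 0xFFFFFFFF  ->  sum += 200
// data[c] =  50:  t = ( 50 - 128) >> 31 = -1   ->  ~t = 0x00000000  ->  sum += 0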

Benchmarks: Core i7 920 @ 3.5 GHz

C++ - Visual Studio 2010 - x64 Release

//  Branch - Random
seconds = 11.777

//  Branch - Sorted
seconds = 2.352

//  Branchless - Random
seconds = 2.564

//  Branchless - Sorted
seconds = 2.587

Java - Netbeans 7.1.1 JDK 7 - x64

//  Branch - Random
seconds = 10.93293813

//  Branch - Sorted
seconds = 5.643797077

//  Branchless - Random
seconds = 3.113581453

//  Branchless - Sorted
seconds = 3.186068823

Observations:

  • With the Branch: There is a huge difference between the sorted and unsorted data.
  • With the Hack: There is no difference between sorted and unsorted data.
  • In the C++ case, the hack is actually a tad slower than with the branch when the data is sorted.

A general rule of thumb is to avoid data-dependent branching in critical loops (such as in this example).


Update:

  • GCC 4.6.1 with -O3 or -ftree-vectorize on x64 is able to generate a conditional move. So there is no difference between the sorted and unsorted data - both are fast.

  • VC++ 2010 is unable to generate conditional moves for this branch even under /Ox.

  • Intel Compiler 11 does something miraculous. It interchanges the two loops, thereby hoisting the unpredictable branch to the outer loop. So not only is it immune to the mispredictions, it is also twice as fast as whatever VC++ and GCC can generate! In other words, ICC took advantage of the test loop to defeat the benchmark...

  • If you give the Intel Compiler the branchless code, it just outright vectorizes it... and is just as fast as with the branch (with the loop interchange).

This goes to show that even mature modern compilers can vary wildly in their ability to optimize code...
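
If you want to see what your own compiler does with this branch, one way (the file name and flags here are just an example) is to dump the generated assembly and look for a conditional jump versus a cmov in the inner loop, or paste the code into gcc.godbolt.org as mentioned in the comments on the question:

g++ -O3 -S -masm=intel sumtest.cpp -o sumtest.s
grep -n "cmov" sumtest.s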

Note that with the "hack" (which is equivalent to the cmovge optimization gcc does with -O3, as noted in my comment to the question) it might be possible that the speed is a bit slower than in the case where branch prediction works "perfectly". So this is once more a case where you might want to optimize your code not only for the data structure, but also for its contents. –  PlasmaHH Jun 27 '12 at 14:19
One way you can make the train analogy better is if you say that the only way the operator can know if the switch is correct is if the captain gives him a thumbs up or thumbs down, and the captain sits at the back of the train such that the operator can only see him when the captain passes him. This way if the switch is incorrect the train would have to stop, back up, and then take the correct route. –  Thomas Jun 27 '12 at 16:30
@J-16SDiZ Note that I did mention that it is not strictly equivalent to the original if-statement. data[x] is always between 0 and 255 so it doesn't get near that corner case. –  Mysticial Jun 29 '12 at 3:04
I'm amazed at both the question and the answer - this has explained something I only barely knew about. But it raises a question for me. Should you optimize your code to take into account things like branch prediction? Or would that be a case of premature optimization? Knowing the data you are processing would seem to drive the implementation. –  Peter M Jun 29 '12 at 12:50
I would like to point out that the reason the Intel compiler does the loop swap is actually far more impressive than just to help out branch prediction. The loops written as is are almost 100% guaranteed to cause millions of cache misses on every iteration of the outer loop. If you flip the two loops, you get at most 32768 cache misses (disregarding os preemption.) A missed branch prediction here costs nanoseconds. A cache miss can cause milliseconds if it has to fetch from memory. That's a 10000000x improvement. –  Michael Graczyk Jul 7 '12 at 5:45

Branch prediction. With a sorted array, the condition data[c] >= 128 is first false for a streak of values, then becomes true for all later values. That's easy to predict. With an unsorted array, you pay the branching cost.

Wow. I knew branch prediction was important - never thought of this speedup factor though. –  Eugen Rieck Jun 27 '12 at 13:56
@EugenRieck: Ha, yeah, didn't even cross my mind this would be it. Never seen such a large difference because of it. –  GManNickG Jun 27 '12 at 14:02
That is because the body of the loop is small (a single statement). If it had been a larger block, then the cost of the wrong path would be less. –  VSOverFlow Jun 27 '12 at 23:38
@Shubham: Because the branch is eventually executed (and then the surrounding context is completely known)? It might be possible to determine this earlier, but when the branch is executed is at least a lower bound in some sense. –  cic Aug 8 '12 at 13:00

The reason why the performance improves drastically when the data are sorted is that the branch prediction penalty is removed, as explained beautifully in Mysticial's answer.

Now, if we look at the code

if (data[c] >= 128)
    sum += data[c];

we can find that the meaning of this particular if... else... branch is to add something when a condition is satisfied. This type of branch can easily be transformed into a conditional move, which would be compiled into a conditional move instruction (cmov) on an x86 system. The branch, and thus the potential branch prediction penalty, is removed.

In C, and thus C++, the statement that compiles directly (without any optimization) into a conditional move instruction on x86 is the ternary operator ... ? ... : .... So we rewrite the above statement into an equivalent one:

sum += data[c] >=128 ? data[c] : 0;

This maintains readability, and we can measure the speedup factor.

On an Intel Core i7-2600K @ 3.4 GHz and Visual Studio 2010 Release Mode, the benchmark is (format copied from Mysticial):

x86

//  Branch - Random
seconds = 8.885

//  Branch - Sorted
seconds = 1.528

//  Branchless - Random
seconds = 3.716

//  Branchless - Sorted
seconds = 3.71

x64

//  Branch - Random
seconds = 11.302

//  Branch - Sorted
seconds = 1.830

//  Branchless - Random
seconds = 2.736

//  Branchless - Sorted
seconds = 2.737

The result is robust in multiple tests. We get great speedup when the branch result is unpredictable, but we suffer a little bit when it is predictable. In fact, when using a conditional move, the performance is the same regardless of the data pattern.

Now let's look more closely by investigating the x86 assembly they generate. For simplicity, we use two functions, max1 and max2.

max1 uses the conditional branch if... else ...:

int max1(int a, int b) {
    if (a > b)
        return a;
    else
        return b;
}

max2 uses the ternary operator ... ? ... : ...:

int max2(int a, int b) {
    return a > b ? a : b;
}

On an x86-64 machine, GCC -S generates the assembly below.

max1:
    movl    %edi, -4(%rbp)
    movl    %esi, -8(%rbp)
    movl    -4(%rbp), %eax
    cmpl    -8(%rbp), %eax
    jle     .L2
    movl    -4(%rbp), %eax
    movl    %eax, -12(%rbp)
    jmp     .L4
.L2:
    movl    -8(%rbp), %eax
    movl    %eax, -12(%rbp)
.L4:
    movl    -12(%rbp), %eax
    leave
    ret

max2:
    movl    %edi, -4(%rbp)
    movl    %esi, -8(%rbp)
    movl    -4(%rbp), %eax
    cmpl    %eax, -8(%rbp)
    cmovge  -8(%rbp), %eax
    leave
    ret

max2 uses much less code due to the use of the cmovge instruction. But the real gain is that max2 does not involve branch jumps (jmp), which would have a significant performance penalty if the predicted result is not right.

So why can a conditional move perform better?

In a typical x86 processor, the execution of an instruction is divided into several stages. Roughly, we have different hardware to deal with different stages, so we do not have to wait for one instruction to finish before starting a new one. This is called pipelining.

In the branch case, which instruction comes next is determined by the preceding branch, so we cannot keep the pipeline full. We have to either wait or predict.

In the conditional move case, the execution of the conditional move instruction is divided into several stages, but the earlier stages, such as Fetch and Decode, do not depend on the result of the previous instruction; only the later stages need the result. So we wait for only a fraction of one instruction's execution time. This is why the conditional move version is slower than the branch when prediction is easy.
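
As a rough back-of-the-envelope illustration (these numbers are ballpark figures and vary by microarchitecture): if a misprediction costs somewhere around 15 cycles and the branch is mispredicted about half the time, the branchy loop pays roughly 7 to 8 extra cycles per element on random data. The conditional move instead pays a small, fixed data-dependency latency of a cycle or two per element regardless of the data, which is why it loses slightly to a well-predicted branch but wins heavily against a badly predicted one.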

The book Computer Systems: A Programmer's Perspective, second edition explains this in detail. You can check Section 3.6.6 for Conditional Move Instructions, entire Chapter 4 for Processor Architecture, and Section 5.11.2 for a special treatment for Branch Prediction and Misprediction Penalties.

Sometimes modern compilers can optimize our code into assembly with better performance, and sometimes they can't (the code in question uses Visual Studio's native compiler). Knowing the performance difference between a branch and a conditional move when the outcome is unpredictable can help us write better-performing code when the scenario becomes so complex that the compiler cannot optimize it automatically.

I'm confused as to how you got those results. Isn't the ternary operator just an inline branch? –  Tullo Jun 28 '12 at 3:12
@Tullo For a function below: int max(int a, int b) { return a > b ? a : b; } On a x86-64 machine, GCC -S generates assembly below movl %edi, -4(%rbp) | movl %esi, -8(%rbp) | movl -4(%rbp), %eax | cmpl %eax, -8(%rbp) | cmovge -8(%rbp), %eax | leave | ret It uses conditional move "cmovge" instead of a jump instruction. It could be better pipelined in the processor than a branch. A bit nasty to insert code into comment,btw. –  WiSaGaN Jun 28 '12 at 5:25
In your Edit section, you forgot to ask the compiler to optimize the code. By default, GCC doesn't perform any optimisation. Adding -O2 will give the same assembler code for max1() and max2() –  ydroneaud Jun 28 '12 at 12:58
There's no default optimization level unless you add -O to your GCC command lines. (And you can't have a worst english than mine ;) –  ydroneaud Jun 28 '12 at 14:04
Please please please please don't benchmark unoptimized code. If GCC compiles your two examples to the same assembly with -O2, then the two pieces of code are equivalent, end of story. –  Justin L. Oct 10 '12 at 19:38

If you are curious about even more optimizations that can be done to this code, consider this... Starting with the original loop:

for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

With loop interchange, we can safely change this loop to:

for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

Then, you can see that the "if" conditional is constant throughout the execution of the "i" loop, so you can hoist the "if" out:

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}

Then, you see that the inner loop can be collapsed into one single expression, assuming the floating point model allows it (if /fp:fast is used, for example):

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}

That one is 100,000x faster than before (-8

+1 for commenting on the loop swap. See my comment on Mysticial's answer. People reading this thread should note that thinking about memory layout and caching is almost always WAY more important than optimizing for branch prediction. (100000x improvement versus 3x improvement) –  Michael Graczyk Jul 7 '12 at 5:49
Yes, but the 100,000 loop was just to make the benchmark long enough that the timings would be significant. In a real application, this kind of opportunity is rare, and the branch prediction remains a significant factor. –  Adrian McCarthy Jul 11 '12 at 17:22
@JasonWilliams: I think you misunderstood the point I was trying to make. The loop to 100,000 is part of the benchmarking framework--it's not part of the code we're trying to optimize. –  Adrian McCarthy Jul 12 '12 at 17:59
I love the optimizations, but I agree with Adrian that they don't necessarily apply in this case because it's specifically a benchmark test... in "real code", you execute the portion commented as "primary loop" only once. They are executing it 100K times to even out any glitches from context switches, startup time, cache warming, etc. It's standard practice to run a loop N times or to execute code benchmarks several times to find the best and/or average time. These optimizations effectively eliminate the benchmarking of the "real code". –  Adisak Jul 18 '12 at 15:54
If you want to cheat, you might as well take the multiplication outside the loop and do sum*=100000 after the loop. –  Jyaif Oct 11 '12 at 1:48

No doubt some of us would be interested in ways of identifying code that is problematic for the CPU's branch-predictor. The Valgrind tool cachegrind has a branch-predictor simulator, enabled by using the --branch-sim=yes flag. Running it over the examples in this question, with the number of outer loops reduced to 10000 and compiled with g++, gives these results:

Sorted:

==32551== Branches:        656,645,130  (  656,609,208 cond +    35,922 ind)
==32551== Mispredicts:         169,556  (      169,095 cond +       461 ind)
==32551== Mispred rate:            0.0% (          0.0%     +       1.2%   )

Unsorted:

==32555== Branches:        655,996,082  (  655,960,160 cond +  35,922 ind)
==32555== Mispredicts:     164,073,152  (  164,072,692 cond +     460 ind)
==32555== Mispred rate:           25.0% (         25.0%     +     1.2%   )
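
For anyone who wants to reproduce this, the invocation looks roughly like the following (the binary name and the optimization level are my own choices; --branch-sim=yes is the important part):

g++ -O2 -o sumtest_sorted sumtest_sorted.cpp
valgrind --tool=cachegrind --branch-sim=yes ./sumtest_sorted
cg_annotate --auto=yes cachegrind.out.<pid>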

Drilling down into the line-by-line output produced by cg_annotate we see for the loop in question:

Sorted:

          Bc    Bcm Bi Bim
      10,001      4  0   0      for (unsigned i = 0; i < 10000; ++i)
           .      .  .   .      {
           .      .  .   .          // primary loop
 327,690,000 10,016  0   0          for (unsigned c = 0; c < arraySize; ++c)
           .      .  .   .          {
 327,680,000 10,006  0   0              if (data[c] >= 128)
           0      0  0   0                  sum += data[c];
           .      .  .   .          }
           .      .  .   .      }

Unsorted:

          Bc         Bcm Bi Bim
      10,001           4  0   0      for (unsigned i = 0; i < 10000; ++i)
           .           .  .   .      {
           .           .  .   .          // primary loop
 327,690,000      10,038  0   0          for (unsigned c = 0; c < arraySize; ++c)
           .           .  .   .          {
 327,680,000 164,050,007  0   0              if (data[c] >= 128)
           0           0  0   0                  sum += data[c];
           .           .  .   .          }
           .           .  .   .      }

This lets you easily identify the problematic line - in the unsorted version the if (data[c] >= 128) line is causing 164,050,007 mispredicted conditional branches (Bcm) under cachegrind's branch-predictor model, whereas it's only causing 10,006 in the sorted version.


Alternatively, on Linux you can use the performance counters subsystem to accomplish the same task, but with native performance using CPU counters.

perf stat ./sumtest_sorted

Sorted:

 Performance counter stats for './sumtest_sorted':

  11808.095776 task-clock                #    0.998 CPUs utilized          
         1,062 context-switches          #    0.090 K/sec                  
            14 CPU-migrations            #    0.001 K/sec                  
           337 page-faults               #    0.029 K/sec                  
26,487,882,764 cycles                    #    2.243 GHz                    
41,025,654,322 instructions              #    1.55  insns per cycle        
 6,558,871,379 branches                  #  555.455 M/sec                  
       567,204 branch-misses             #    0.01% of all branches        

  11.827228330 seconds time elapsed

Unsorted:

 Performance counter stats for './sumtest_unsorted':

  28877.954344 task-clock                #    0.998 CPUs utilized          
         2,584 context-switches          #    0.089 K/sec                  
            18 CPU-migrations            #    0.001 K/sec                  
           335 page-faults               #    0.012 K/sec                  
65,076,127,595 cycles                    #    2.253 GHz                    
41,032,528,741 instructions              #    0.63  insns per cycle        
 6,560,579,013 branches                  #  227.183 M/sec                  
 1,646,394,749 branch-misses             #   25.10% of all branches        

  28.935500947 seconds time elapsed

It can also do source code annotation with disassembly.

perf record -e branch-misses ./sumtest_unsorted
perf annotate -d sumtest_unsorted
 Percent |      Source code & Disassembly of sumtest_unsorted
------------------------------------------------
...
         :                      sum += data[c];
    0.00 :        400a1a:       mov    -0x14(%rbp),%eax
   39.97 :        400a1d:       mov    %eax,%eax
    5.31 :        400a1f:       mov    -0x20040(%rbp,%rax,4),%eax
    4.60 :        400a26:       cltq   
    0.00 :        400a28:       add    %rax,-0x30(%rbp)
...

See the performance tutorial for more details.

the tools shown here were more inspiring than the chosen answer itself –  nurettin Apr 13 '13 at 6:56
Perhaps but it doesn't explain why the sorted array is faster if you don't know anything about Branch prediction. It is however a very inspiring post. –  Arlaud Pierre Nov 20 '13 at 8:47
@ArlaudAgbePierre: I didn't see any point in reiterating what several other answers had said about that. –  caf Nov 21 '13 at 0:09
Of course not, your answer is very interesting. I'm merely explaining why the answer that nurettin found "less inspiring" got chosen in the first place instead of yours. But we're both totally fine with that. –  Arlaud Pierre Nov 21 '13 at 9:20
@tall.b.lo: The 25% is of all branches - there are two branches in the loop, one for data[c] >= 128 (which has a 50% miss rate as you suggest) and one for the loop condition c < arraySize which has ~0% miss rate. –  caf Dec 9 '13 at 4:29

As the data is distributed between 0 and 255, when the array is sorted roughly the first half of the iterations will not enter the if-statement (the if-statement is shown below).

if (data[c] >= 128)
    sum += data[c];

The question is: what makes the above statement not execute in certain cases, as with sorted data? Here comes the "branch predictor". A branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance!

Let's do some benchmarking to understand it better.

The performance of an if-statement depends on whether its condition has a predictable pattern. If the condition is always true or always false, the branch prediction logic in the processor will pick up the pattern. On the other hand, if the pattern is unpredictable, the if-statement will be much more expensive.

Let’s measure the performance of this loop with different conditions:

for (int i = 0; i < max; i++) if (condition) sum++;

Here are the timings of the loop with different True-False patterns:

Condition               Pattern             Time (ms)

(i & 0x80000000) == 0   T repeated          322
(i & 0xffffffff) == 0   F repeated          276
(i & 1) == 0            TF alternating      760
(i & 3) == 0            TFFFTFFF            513
(i & 2) == 0            TTFFTTFF            1675
(i & 4) == 0            TTTTFFFFTTTTFFFF    1275
(i & 8) == 0            8T 8F 8T 8F         752
(i & 16) == 0           16T 16F 16T 16F     490

A “bad” true-false pattern can make an if-statement up to six times slower than a “good” pattern! Of course, which pattern is good and which is bad depends on the exact instructions generated by the compiler and on the specific processor.

So there is no doubt about the impact of branch prediction on performance!
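
For reference, a minimal harness along these lines reproduces the effect. This is a sketch: the iteration count, timer and compiler settings behind the table above are not specified, so the absolute numbers will differ.

#include <chrono>
#include <iostream>

int main()
{
    const int max = 100000000;              // assumed iteration count
    volatile long sum = 0;                  // volatile so the loop isn't optimized away

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < max; i++)
        if ((i & 1) == 0)                   // swap in any condition from the table above
            sum = sum + 1;
    auto stop = std::chrono::steady_clock::now();

    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms, sum = " << sum << std::endl;
}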

You don't show the timings of the "random" TF pattern. –  Mooing Duck Feb 23 '13 at 2:31

I just read through this thread and I feel an answer is missing. A common way to eliminate branch prediction that I've found to work particularly well in managed languages is a table lookup instead of using a branch (although I haven't tested it in this case).

This approach works in general if:

  1. It's a small table and is likely to be cached in the processor
  2. You are running things in a quite tight loop and/or the processor can pre-load the data

Background and why

Pfew, so what the hell is that supposed to mean?

From a processor perspective, your memory is slow. To compensate for the difference in speed, a couple of caches are built into your processor (the L1/L2 caches). So imagine that you're doing your nice calculations and figure out that you need a piece of memory. The processor will issue a 'load' operation and load the piece of memory into cache, and then use the cache to do the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program.

Like branch prediction, this was optimized in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load that into the cache before the operation actually hits the cache. As we've already seen, branch prediction sometimes goes horribly wrong -- in the worst case scenario you need to go back and actually wait for a memory load, which will take forever (in other words: failing branch prediction is bad, a memory load after a branch prediction fail is just horrible!).

Fortunately for us, if the memory access pattern is predictable, the processor will load it in its fast cache and all is well.

The first thing we need to know is: what counts as small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64K it's probably worth reconsidering.

Constructing a table

So we've figured out that we can create a small table. The next thing to do is get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, subtract and perhaps a multiply). What you want is to have your input translated by the lookup function into some kind of 'unique key' in your table, which then simply gives you the answer for all the work you wanted it to do.

In this case: >= 128 means we keep the value, < 128 means we get rid of it. The easiest way to do that is by using an AND: if we keep it, we AND it with 0x7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2, so we can go ahead and make a table of 32768/128 integers and fill it with one zero and a lot of 0x7FFFFFFF's.

Managed languages

You might wonder why this works well in managed languages. After all, managed languages check the boundaries of the arrays with a branch to ensure you don't mess up...

Well, not exactly... :-)

There has been quite some work on eliminating this branch for managed languages. For example:

for (int i=0; i<array.Length; ++i)
   // use array[i]

in this case it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (but I expect Java does similar things) will notice this and remove the check altogether. WOW - that means no branch. Similarly, it will deal with other obvious cases.

If you run into trouble with lookups in managed languages, the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable - and watch it go faster.

The result for this case

// generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);


// To keep the spirit of the code intact, I'll make a separate lookup table
// (I assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[arraySize/128];

for (int c = 0; c < arraySize/128; ++c)
    lookup[c] = (c >= 1) ? 0x7FFFFFFF : 0;

// test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        sum += data[j] & lookup[data[j] >> 7]; // >> 7 divides by 128: index 0 discards, index 1 keeps
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);

Console.ReadLine();
You want to bypass the branch-predictor, why? It's an optimization. –  Dustin Oprea Apr 24 '13 at 17:50
Because no branch is better than a branch :-) In a lot of situations this is simply a lot faster... if you're optimizing, it's definitely worth a try. They also use it quite a bit in f.ex. graphics.stanford.edu/~seander/bithacks.html –  Stefan de Bruijn Apr 24 '13 at 21:57
In general lookup tables can be fast, but have you run the tests for this particular condition? You'll still have a branch condition in your code, only now it's moved to the lookup table generation part. You still wouldn't get your perf boost. –  Zain Dec 19 '13 at 21:45
@Zain if you really want to know... Yes: 15 seconds with the branch and 10 with my version. Regardless, it's a useful technique to know either way. –  Stefan de Bruijn Dec 20 '13 at 18:57

In the sorted case, you can do better than relying on successful branch prediction or any branchless comparison trick: completely remove the branch.

Indeed, the array is partitioned into a contiguous zone with data < 128 and another with data >= 128. So you should find the partition point with a dichotomic search (using Lg(arraySize) = 15 comparisons), then do a straight accumulation from that point.

Something like (unchecked)

int i= 0, j, k= arraySize;
while (i < k)
{
  j= (i + k) >> 1;
  if (data[j] >= 128)
    k= j;
  else
    i= j + 1;
}
sum= 0;
for (; i < arraySize; i++)
  sum+= data[i];

or, slightly more obfuscated

int i, j, k;
for (i= 0, k= arraySize; i < k; data[j] >= 128 ? (k= j) : (i= j + 1))
  j= (i + k) >> 1;
for (sum= 0; i < arraySize; i++)
  sum+= data[i];
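
Equivalently, the standard library can express the same idea; a minimal sketch (untested, like the above):

#include <algorithm>
#include <numeric>

// data is sorted, so lower_bound finds the partition point in O(log n)
long long sum = std::accumulate(std::lower_bound(data, data + arraySize, 128),
                                data + arraySize, 0LL);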

A yet faster approach, which gives an approximate solution for both sorted and unsorted data, is: sum= 3137536; (assuming a truly uniform distribution: 16384 samples with expected value 191.5) :-)

sum= 3137536 - clever. That's kinda obviously not the point of the question. The question is clearly about explaining surprising performance characteristics. I'm inclined to say that the addition of doing std::partition instead of std::sort is valuable. Though the actual question extends to more than just the synthetic benchmark given. –  sehe Jul 24 '13 at 16:31
It's probably best if you remove the "troll" elements of the answer to prevent (understandable) downvoting because of it. (The troll elements would be the obfuscated version as well as the statistics joke :/) –  sehe Jul 24 '13 at 16:32
Your first loop is completely unreadable. It is, presumably, a binary search, but... –  DeadMG Jul 24 '13 at 16:38
@DeadMG: this is indeed not the standard dichotomic search for a given key, but a search for the partitioning index; it requires a single compare per iteration. But don't rely on this code, I have not checked it. If you are interested in a guaranteed correct implementation, let me know. –  Yves Daoust Jul 24 '13 at 20:37

One way to avoid branch prediction errors is to build a lookup table, and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not sorted. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's the code:

// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds, but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's the C++ code:

// declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// use the lookup table after it is built
for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        sum += lut[data[c]];
    }
}

In this case the lookup table was only 256 entries (1 KB), so it fit nicely in cache and all was fast. This technique wouldn't work well if the data consisted of 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value of which we only want the top half, we could potentially shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.
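
As a sketch of that combined idea (with hypothetical 24-bit values, keeping only those in the top half of the range, i.e. >= 1 << 23):

// Build a 4096-entry mask table indexed by the top 12 bits of a 24-bit value.
int mask[4096];
for (unsigned k = 0; k < 4096; ++k)
    mask[k] = (k >= 2048) ? -1 : 0;   // all ones if the value's top bit is set

// Then, in the hot loop, for each 24-bit value v:
//     sum += v & mask[v >> 12];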

Right, you can also just use the bit directly and multiply (data[c]>>7 - which is discussed somewhere here as well); I intentionally left this solution out, but of course you are correct. Just a small note: The rule of thumb for lookup tables is that if it fits in 4KB (because of caching), it'll work - preferably make the table as small as possible. For managed languages I'd push that to 64KB, for low-level languages like C++ and C, I'd probably reconsider (that's just my experience). Since typeof(int) = 4, I'd try to stick to max 10 bits. –  Stefan de Bruijn Jul 29 '13 at 12:05
I think indexing with the 0/1 value will probably be faster than an integer multiply, but I guess if performance is really critical you should profile it. I agree that small lookup tables are essential to avoid cache pressure, but clearly if you have a bigger cache you can get away with a bigger lookup table, so 4KB is more a rule of thumb than a hard rule. I think you meant sizeof(int) == 4? That would be true for 32-bit. My two-year-old cell phone has a 32KB L1 cache, so even a 4K lookup table might work, especially if the lookup values were a byte instead of an int. –  steveha Jul 29 '13 at 22:02
Well... I should look this up downstairs in my Intel books, but I recall that an L1 cache doesn't work like that. From what I recall, it's very, very unlikely that it'll fill completely with a consecutive block of RAM. In other words, you might have 128KB L1, but that doesn't mean it'll get filled with a single lookup table. We only know for sure that a cache line is filled (which is a few bytes) and we know your page table is optimized for 4K. Hence the 4K rule of thumb. Either way, these are just details; you're right to rely on a profiler in these cases. –  Stefan de Bruijn Jul 30 '13 at 7:00
Umm...no, you have an if condition still hidden inside your lookup table generation code. No cookie for you –  Zain Dec 19 '13 at 21:41
@Zain, try actually benchmarking my code and then decide whether to award me a cookie or not. There is a world of difference between an if in the code to generate a short lookup table, and an if in the main loop processing a large data set. If you really want, you can make a static initializer for the lookup table, but the cost of setting it up is trivial. The loop that fills in the lookup table acts like the sorted list: the if always branches one way on the first half of the table, and always branches the other way on the second half, so there is only one mispredicted branch. –  steveha Dec 19 '13 at 22:07
