When did my code break!?

That was my first reaction when I found this bug. Most of the tests failed. The program itself produced wrong results. No crashes though, and valgrind didn’t detect any problems.

Why do I still not have continuous integration for this thing?!

It was strange that such an obvious bug went unnoticed for so long and broke a rather core part of the program. I don’t even remember having touched that part of the code recently!

Where do I even begin to debug this problem?

I suppose there’s always git-bisect. But I was in the middle of implementing something so I didn’t really want to mess up the work tree (yes, I know there’s git-stash, but I’m also lazy). Hence, I ran git-bisect on the laptop instead. I picked a random point in the not-too-distant past and ran the tests … “PASSED”, and then started bisecting from there.
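For the record, the workflow amounts to roughly this (the revision name is a placeholder):

```
git bisect start
git bisect bad                   # the current HEAD fails the tests
git bisect good some-old-commit  # a revision where the tests passed
# build and run the tests at each step git checks out, then mark it with
# `git bisect good` or `git bisect bad`; once git names the first bad commit:
git bisect reset
```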

Alas, bisecting proved futile: all of the commits passed the tests. Strange … is it because of a faulty library? No, this part of the code doesn’t even use any of the external libraries. Moreover, both my desktop and laptop run Arch Linux, so AFAIK the compilers and system libraries are the same too.

While bisecting, I made a subconscious note of the compiler used on the laptop: it was Clang, as specified in the makefile. I don’t remember why I did that. Does it matter? Well, on the desktop – where I initially found this bug – I used GCC. Could that be the cause?

The answer turned out to be yes. So this is clearly a bug caused by optimization. But I have used GCC before, so it must be a recent thing.

“Recent” in a loose sense. The compiler was upgraded 2 months ago, as Pacman’s logs indicate.

Seeing code that had worked for a few years suddenly fail because of a compiler upgrade is rather disheartening: who knows what other bugs lurk in the code? Then again, it could be a compiler bug too … but it seems unlikely that no one noticed this same bug within the past 2 months.

Still, I thought it might be useful to at least pin down which optimization pass caused the problem. Let's start with the big hammer: turning down the -O level. The bug disappeared at -O0 but was still there at -O1, so even a mere -O1 was enough to trigger the problem.

Now to figure out what kinds of optimizations -O1 turns on. Hopefully there's some useful info in the GCC documentation. It does list several optimizations that are enabled by -O1, but there's a bit of fine print:

Depending on the target and how GCC was configured, a slightly different set of optimizations may be enabled at each -O level than those listed here. You can invoke GCC with -Q --help=optimizers to find out the exact set of optimizations that are enabled at each level.

So the list shown in the docs is not actually exhaustive, and it also has a few entries that don't apply to my specific platform. Using the -Q --help=optimizers flag, I managed to produce a list of optimizations, and there are apparently quite a lot of them! With a little grep -v and some Emacs regexes, I reduced the list to a single, very long line of flags. It looked a little bit like this (abbreviated here; the exact set depends on the GCC version and target):
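```
-fauto-inc-dec -fbranch-count-reg -fcombine-stack-adjustments -fcompare-elim -fcprop-registers -fdce -fdefer-pop -fdse -fforward-propagate -fguess-branch-probability -fif-conversion -fif-conversion2 -finline -finline-functions-called-once -fipa-profile -fipa-pure-const -fipa-reference -fmerge-constants -fmove-loop-invariants -fshrink-wrap -fsplit-wide-types -ftree-bit-ccp -ftree-ccp -ftree-ch -ftree-copy-prop -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-fre -ftree-sink -ftree-slsr -ftree-sra -ftree-ter
```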

And that wasn’t even a quarter of the flags!

But the bug did not appear. Did I miss a flag somewhere?

Oh, wait:

Most optimizations are only enabled if an -O level is set on the command line. Otherwise they are disabled, even if individual optimization flags are specified.

Well then, how am I supposed to specify explicitly which passes to turn on? The only way I could think of was to set -O1 and then explicitly negate all the optimizations that I didn't want. That seems … backwards, but there appeared to be no other way. So what I did was write a long command, roughly like this (abbreviated, and with placeholder file names; in reality every -O1 optimization got its own -fno- line):
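```
g++ -O1 \
    -fno-auto-inc-dec \
    -fno-branch-count-reg \
    -fno-combine-stack-adjustments \
    -fno-compare-elim \
    -fno-cprop-registers \
    -fno-dce \
    -fno-defer-pop \
    -fno-dse \
    -fno-forward-propagate \
    -fno-guess-branch-probability \
    -fno-if-conversion \
    -fno-if-conversion2 \
    -fno-inline \
    -fno-inline-functions-called-once \
    -fno-ipa-profile \
    -fno-ipa-pure-const \
    -fno-ipa-reference \
    -fno-merge-constants \
    -fno-move-loop-invariants \
    -fno-shrink-wrap \
    -fno-split-wide-types \
    -fno-tree-bit-ccp \
    -fno-tree-ccp \
    -fno-tree-ch \
    -fno-tree-copy-prop \
    -fno-tree-dce \
    -fno-tree-dominator-opts \
    -fno-tree-dse \
    -fno-tree-fre \
    -fno-tree-sink \
    -fno-tree-slsr \
    -fno-tree-sra \
    -fno-tree-ter \
    -o myprogram main.cpp
```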

and then comment out the optimizations that I wanted turned on. Mentally, this was kind of confusing and led to several mistakes as I tried to manually bisect the optimization flags to see which ones were to blame for breaking my code. But eventually I found that -finline and -ftree-dse both have to be turned on for the bug to manifest.

Oddly enough, the bug now manifested in a different way: it caused a segmentation fault, which it hadn't done before. In retrospect, getting a segfault was much better than getting a wrong result, because the crash very likely happens at exactly the location where the program goes horribly wrong. Figuring out why I got a wrong result would have been much harder, since it's difficult to tell which part of a mathematical calculation went astray. On the other hand, narrowing down the precise optimizations that led to this bug did not do much to actually guide me toward its source. It was only in retrospect that I was able to explain why those two optimizations caused the code to crash.

One idea I tried, which did not work, was to compare the assembly with and without optimization. First off, I have no real assembly experience: I can barely remember the order of arguments in Intel versus AT&T syntax! Secondly, the assembly was over 30 MiB in size! Even running a plain diff took a while, and kdiff3 was, as usual, hopeless against such a large file. (Graphical diffs are a lot more pleasant to read.)
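Concretely, the comparison amounted to something like this (file names are placeholders):

```
g++ -S -O0 -o unoptimized.s main.cpp
g++ -S -O1 -o optimized.s main.cpp
kdiff3 unoptimized.s optimized.s     # or: diff -u unoptimized.s optimized.s | less
```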

What did work was to pare down the program until only the buggy part was left. My main concern was to make sure the bug remained reproducible. Optimization bugs can be finicky: they might disappear if you remove seemingly unrelated parts of the program. But I was lucky that the bug didn't go away. In fact, it seemed very “robust” (in a twisted sense of that word).

As I tore up the program, I had to make an important decision: I found two independently buggy regions in the code; one of them caused a wrong result, while the other segfaulted. Which one should I keep?

A segfault seemed like the more obvious bug, so I went with that one. It was probably a good choice; the other one would probably have left me repeating the calculations on paper.

After tearing up the program, I tried examining the assembly again, but it was still quite big (~3 MB). Kdiff3 took several minutes to run. While it was loading, I ran GDB in the hope of catching the segfault in action.
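The session went roughly like this (the program name is a placeholder):

```
gdb ./myprogram
(gdb) run            # wait for the SIGSEGV
(gdb) backtrace      # points at the faulting source line
(gdb) info locals    # mostly "<optimized out>", as it turned out
```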

It showed me the specific line of code in the source file where it crashed, which was quite helpful. But when I tried to print out the local variables, they were all optimized out. Trying to decipher the assembly led me nowhere either. All I figured out was that it tried to read a double from memory and failed. Here's an artist's impression of the code:
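```
// an artist's impression, reconstructed from memory; "coefficients" is a stand-in name
auto&& x = std::get<0>(unmove(get_index_thing(/* ... */)));
double value = coefficients[x];   // <- crashes here while reading a double
```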

I guess I could always fall back on the old-fashioned approach of printf-ing the variables. It's worth a shot. I hoped that it would not cause the bug to disappear.

It did not, thankfully. The array index x was clearly garbage: a number that was over several million. Considering that I’m running an extremely small test case, that is far too high. Where did this garbage originate?

One thing I tried right away was to print the result (and the arguments) from inside the function get_index_thing. But in my haste, I forgot that it actually returns a std::tuple, which caused printf to print garbage as well (not the same garbage as before, though I somehow missed that at first). This mistake led me on a wild-goose chase somewhere else entirely: I thought one of the earlier initialization steps had been done incorrectly.

Eventually I found that (a) printf with %lu does not work on a std::tuple, even if it has only a single element, and (b) it's morning and I desperately need sleep.
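Mistake (a), in toy form, looks something like this (the real get_index_thing takes arguments and does actual work; the stub here is purely for illustration):

```
#include <cstddef>
#include <cstdio>
#include <tuple>

// stub stand-in for the real function, which returns a single-element tuple
std::tuple<std::size_t> get_index_thing() { return std::make_tuple(std::size_t{3}); }

int main()
{
    auto result = get_index_thing();
    // std::printf("%lu\n", result);           // mistake (a): a std::tuple is not an unsigned long
    std::printf("%zu\n", std::get<0>(result)); // the element has to be extracted first
}
```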

But I'm so close, can't give up now! After correctly printing the value being returned by get_index_thing, I found that it was actually sane. So somewhere between returning that result and printing it in the caller, something went horribly awry.

The call to unmove looked suspicious. I distinctly recall telling myself that unmove can be somewhat dangerous to use, but not having worked with this part of the code in so long, I was a bit foggy on the intricate details of references.

Wait a second, it’s being stored in an auto&&? Wouldn’t that mean the value’s lifetime ends immediately before the next line?

Uh-oh.

All I had to do was change auto&& to auto and the code magically started working again. (Well, there were a few other places in the code where I had committed the same offense, so I had to fix those too.)

It turns out that unmove is a bit of a red herring in this particular example, as removing it did not fix the problem. However, it definitely makes this kind of mistake easier to commit. Take, for example, a simple program along these lines, which reproduces the bug:
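```
// a reconstruction of the reproduction, not the verbatim original

// unmove: the opposite of std::move; turns its argument into an lvalue
template<typename T>
T& unmove(T&& x) { return x; }

int main()
{
    auto&& p = unmove(42);   // line #1
    return p;                // line #2
}
```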

To make this more obvious, we can specialize the types for int. Note that the return type is int&, which, combined with auto&&, leads to a deduced type of int&:
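```
// the same program with the types spelled out for int (again a reconstruction)
int& unmove(int&& x) { return x; }

int main()
{
    auto&& p = unmove(42);   // line #1: p is deduced to be an int&
    return p;                // line #2
}
```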

When unmove is called, the 42 is materialized as a temporary that only lasts until the end of that particular line. Thus, by the time line #2 is reached, the value 42 is gone. There are rules that allow the lifetime of a temporary to be extended when it is bound directly to a reference, but none of them apply here, because the value is being passed through an intermediary (the function call).

On GCC 5.3.0, this causes the program to exit with 0 instead of 42. So it looks like GCC has become more aggressive about exploiting the lifetime rules for temporaries bound to references. The -ftree-dse (dead store elimination) pass probably decided that line #1 could be deleted, since the temporary that p refers to expires right away. In other words, it eliminated the buggy line of code entirely, in a quite literal sense!
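My mental model of what happened is something like the sketch below; this is an intuition aid, not actual compiler output.

```
// roughly what the optimizer sees once unmove() has been inlined into main()
// (an intuition sketch, not real GCC output)
int main()
{
    int tmp;        // the temporary that was supposed to hold 42
    // tmp = 42;    // dead store: the only later read (through p) happens after the
                    // temporary's lifetime has ended, so the store can be dropped
    int& p = tmp;
    return p;       // reads an object that was never written
}
```

That would also explain why -finline had to be enabled for the bug to appear: without inlining, -ftree-dse presumably never gets to see that the store is dead.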

As you know, undefined behavior can seriously ruin your day. In this case, it has ruined my night of sleep. It is now morning as I write this postmortem.

I still wonder, though: why did this problem go unnoticed for so long?