Bad code elimination

When did my code break!?

That was my first reaction when I found this bug. Most of the tests failed. The program itself produced wrong results. No crashes though, and valgrind didn’t detect any problems.

Why do I still not have continuous integration for this thing?!

It was strange that such an obvious bug went unnoticed for so long and broke a rather core part of the program. I don’t even remember having touched that part of the code recently!

Where do I even begin to debug this problem?

I suppose there’s always git-bisect. But I was in the middle of implementing something, so I didn’t really want to mess up the work tree (yes, I know there’s git-stash, but I’m also lazy). Hence, I ran git-bisect on the laptop instead. I picked a random point in the not-too-distant past and ran the tests … “PASSED”, and then started bisecting from there.

Alas, bisecting proved futile: all of the commits passed the tests. Strange … is it because of a faulty library? No, this part of the code doesn’t even use any of the external libraries. Moreover, both my desktop and laptop run Arch Linux, so AFAIK the compilers and system libraries are the same too.

While bisecting, I made a subconscious note of the compiler used on the laptop: it was Clang, as specified in the makefile. I don’t remember why I did that. Does it matter? Well, on the desktop – where I initially found this bug – I used GCC. Could that be the cause?

The answer turned out to be yes. So this is clearly a bug caused by optimization. But I have used GCC before, so it must be a recent thing.

“Recent” in a loose sense. The compiler was upgraded 2 months ago, as Pacman’s logs indicate.

Seeing code that had worked for a few years suddenly fail because of a compiler upgrade is rather disheartening – who knows what other bugs lurk in the code? Then again, it could be a compiler bug too … but it seems unlikely that no one else would have noticed this same bug within the past 2 months.

Still, I thought it might be useful to at least pin down which optimization pass caused the problem. Let’s start with the big hammer: turning down the -On level. The bug only disappeared at -O0 – even a mere -O1 was enough to trigger the problem.

Now to figure out what kinds of optimizations -O1 turns on. Hopefully there’s some useful info in the GCC documentation. It does list several optimizations that are enabled by -O1, but there’s some fine print:

Depending on the target and how GCC was configured, a slightly different set of optimizations may be enabled at each -O level than those listed here. You can invoke GCC with -Q --help=optimizers to find out the exact set of optimizations that are enabled at each level.

So the list shown in the docs is actually not exhaustive, and it also includes a few flags that don’t apply to my specific platform. Using the -Q --help=optimizers flag, I managed to produce a list of optimizations – there are apparently quite a lot of them! With a little grep -v and some Emacs regexes I reduced the list down to a single, very long line of flags. It looked a little bit like this:
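
(What follows is only an illustrative sample of real -O1 flags, not the exact line, which was far longer.)

    -fcombine-stack-adjustments -fcompare-elim -fcprop-registers -fdce -fdefer-pop -fforward-propagate -fguess-branch-probability -fif-conversion -fif-conversion2 -finline -fipa-profile -fipa-pure-const -fipa-reference -ftree-ccp -ftree-ch -ftree-dce -ftree-dse -ftree-fre -ftree-sink -ftree-sra -ftree-ter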

And that wasn’t even a quarter of the flags!

But the bug did not appear. Did I miss a flag somewhere?

Oh, wait:

Most optimizations are only enabled if an -O level is set on the command line. Otherwise they are disabled, even if individual optimization flags are specified.

Well then, how am I supposed to specify explicitly which passes to turn on? The only way I could think of was to set -O1 and then explicitly negate all the optimizations that I don’t want. That seems … backwards, but there seemed to be no other way. So what I did was write a long command like this:
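
(Again only an abbreviated sketch; main.cpp stands in for whatever is being compiled.)

    g++ -O1 \
        -fno-combine-stack-adjustments \
        -fno-compare-elim \
        -fno-cprop-registers \
        -fno-dce \
        -fno-defer-pop \
        -fno-forward-propagate \
        -fno-guess-branch-probability \
        -fno-if-conversion \
        -fno-if-conversion2 \
        -fno-inline \
        -fno-ipa-profile \
        -fno-ipa-pure-const \
        -fno-ipa-reference \
        -fno-tree-ccp \
        -fno-tree-ch \
        -fno-tree-dce \
        -fno-tree-dse \
        -fno-tree-fre \
        -fno-tree-sink \
        -fno-tree-sra \
        -fno-tree-ter \
        -c main.cpp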

and then comment out the optimizations that I want turned on. Mentally, this was kind of confusing, and it led to several mistakes as I tried to manually bisect the optimization flags to see which ones were to blame for breaking my code. But eventually I found that -finline and -ftree-dse both have to be turned on for the bug to manifest.

Oddly enough, the bug now manifested in a different way: it caused a segmentation fault, which it hadn’t done before. In retrospect, getting a segfault was much better than getting the wrong result, because it very likely crashed at exactly the location where the program went horribly wrong. Trying to figure out why I got the wrong result would’ve been much harder, because it’s hard to tell which part of a mathematical calculation went wrong. OTOH, narrowing down the precise optimizations that led to this bug did not do much to actually guide me to its source. It was only in retrospect that I was able to explain why those two optimizations caused the code to crash.

One idea I tried – which did not work – was to compare the assembly with and without optimization. First off, I have no real assembly experience – I can barely remember the order of arguments in Intel vs AT&T syntax! Secondly, the assembly was over 30MiB in size! Even running a plain diff took a while, and kdiff3 was, as usual, hopeless against such a large file. (Graphical diffs are a lot more pleasant to read.)

What did work was to pare down the program until only the buggy part was left. My main concern was to make sure the bug remained reproducible. Optimization bugs can be finicky: they might disappear if you remove seemingly unrelated parts of the program. But I was lucky that this bug didn’t go away. In fact, it seemed very “robust” (in a twisted sense of that word).

As I tore up the program, I had to make an important decision: I found two independently buggy regions in the code; one of them caused a wrong result, while the other segfaulted. Which one should I keep?

The segfault seemed like the more obvious bug, so I went with that one. It was probably a good choice: the other one would probably have left me redoing the calculations on paper.

After tearing up the program, I tried examining the assembly again, but it was still quite big (~3MB). Kdiff3 took several minutes to run. While it was loading, I ran GDB in the hope of catching the segfault in action.

GDB showed me the specific line in the source file where the program crashed – that was quite helpful. But when I tried to print out the local variables, they were all optimized out. Trying to decipher the assembly led me nowhere either; all I figured out was that it tried to read a double from memory and failed. Here’s an artist’s impression of the code:
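
(A made-up C++ rendition of what the assembly seemed to be doing; every name here is invented.)

    #include <cstddef>
    #include <vector>

    // Invented stand-in for the crash site.
    double read_value(const std::vector<double>& table, std::size_t x)
    {
        return table[x];   // <-- crashes here trying to read a double from memory
    }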

I figured I could always fall back on the old-fashioned approach of printfing the variables. It was worth a shot – I just hoped it would not cause the bug to disappear.

It did not, thankfully. The array index x was clearly garbage: a number in the millions. Considering that I was running an extremely small test case, that was far too high. Where did this garbage come from?

One thing I tried right away was to print the result (and also the arguments) from inside the function get_index_thing. But in my haste, I forgot that it actually returns a std::tuple, which caused printf to print garbage as well (not the same garbage as before, though I somehow missed that at first). This mistake sent me on a wild-goose chase elsewhere – I thought one of the earlier initialization steps had been done incorrectly.

Eventually I found that (a) printf with %lu does not work on a std::tuple, even if it has only a single element, and (b) it’s morning and I desperately need sleep.

But I’m so close, can’t give up now! After correctly printing the value returned from get_index_thing, I found that it was actually sane. So, somewhere between returning that result and printing it in the caller, something went horribly awry.

The unmove looked suspicious. I distinctly recall telling myself that unmove can be somewhat dangerous to use, but not having worked with this part of the code for so long, I was a bit foggy on the intricate details of references.

Wait a second, it’s being stored in an auto&&? Wouldn’t that mean the value’s lifetime ends immediately before the next line?

Uh-oh.

All I had to do was change auto&& to auto and the code magically started working again. (Well, there were a few other places in the code where I had committed the same offense, so I had to fix them too.)

It turns out that unmove is a bit of a red herring in this particular example, as removing it did not fix the problem. However, it definitely makes this kind of mistake easier to commit. Take, for example, this simple code, which reproduces the bug:
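
(A minimal sketch, assuming unmove is essentially a helper that turns an rvalue into an lvalue reference – its exact definition here is a guess.)

    // Assumed definition of unmove: roughly the inverse of std::move.
    template<class T>
    T& unmove(T&& x)
    {
        return x;
    }

    int main()
    {
        auto&& p = unmove(42);   // #1
        return p;                // #2: expected to exit with 42
    }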

To make this more obvious, we can specialize the types for int. Note that the return type is int&, which, combined with auto&&, leads to a deduced type of int&.
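
Spelled out, again as a sketch:

    int& unmove(int&& x)
    {
        return x;
    }

    int main()
    {
        auto&& p = unmove(42);   // #1: auto&& deduces to int&, bound to the temporary 42
        return p;                // #2: the temporary is already gone by this point
    }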

When unmove is called, 42 is bound as a temporary that lasts only until the end of that particular statement. Thus, by the time line #2 is reached, the value 42 is gone. There are some exceptions to this rule that allow a temporary’s lifetime to be extended by the reference it is bound to, but none of those apply here, because the value is being passed through an intermediary.

On GCC 5.3.0, this causes the program to exit with 0, as opposed to 42. So it looks like GCC has finally started exploiting the rules around the lifetime of a temporary stored by reference. The -ftree-dse (dead store elimination) pass probably decided that line #1 could be deleted, since the temporary that p refers to expires right away. In other words, it eliminated the buggy line of code entirely – in a literal sense!

As you know, undefined behavior can seriously ruin your day. In this case, it has ruined my night of sleep. It is now morning as I write this postmortem.

I still wonder, though: why did this problem go unnoticed for so long?


Undefined behavior

When the C (or C++) standard says “doing X is undefined behavior”, it is not telling you that your program will exit with an error, corrupt your memory, or even crash, for that matter. Those are only a few of the ways in which undefined behavior can manifest.

There is a reason they are called “nasal demons”. In principle, undefined behavior could wipe your entire hard drive.

Of course, in the real world, compilers are too lazy to actually make your program erase the disk. Instead, they tend to take the simpler route and assume that undefined behavior is “impossible”.

It doesn’t matter if it’s not actually impossible. The compilers don’t care. They want the most optimal code, and they will get the most optimal code even if it means wreaking havoc on your program.

Consider the following example:
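
(A sketch – the exact listing doesn’t matter, only its shape.)

    #include <stdio.h>

    static int n = 42;

    /* Returns a pointer to n normally, but NULL if any arguments were passed. */
    static int *getnumptr(int argc)
    {
        if (argc > 1)
            return NULL;
        return &n;
    }

    int main(int argc, char **argv)
    {
        (void)argv;
        printf("%d\n", *getnumptr(argc));   /* dereferences NULL when argc > 1 */
        return 0;
    }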

Suppose you don’t pass any arguments to this program (i.e. argc == 1). In this case, the pointer returned by getnumptr points to n, which contains 42, so you should expect 42 to be printed.

However, if you do pass arguments to the program (i.e. argc > 1), then you’re in the realm of undefined behavior. On some compilers, the program will crash with a segmentation fault, which is probably what most people expect.

On less forgiving compilers, however, the program may simply print 42.

How could one get a 42 by dereferencing a NULL pointer?

Well, the compiler was free to assume that the situation of dereferencing a NULL pointer was “impossible”, and hence didn’t bother to emit the code for that case at all. The if block was entirely elided from the generated assembly. All that remained was:
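
(Roughly speaking, expressed back in C rather than assembly:)

    printf("%d\n", 42);   /* the NULL-pointer branch has been assumed away */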

When they say undefined behavior means anything goes, they really mean it.

This article is brought to you by undefined behavior.


Haskell library: bound

Learning how Edward Kmett’s bound library worked was quite a challenge for me. Even though there is a nice tutorial, I found it rather difficult to penetrate – the puns, while humorous, didn’t help either. Moreover, polymorphic recursion itself is a rather strange concept that takes some getting used to.

In the end, I had to read through the code very carefully in order to understand the inner workings of the library. In the process, I made some sketches to guide myself. Not surprisingly, the pictures themselves explain the concept much more clearly than the code or the text! (At least I found that to be the case; YMMV.)

Here’s a picture that shows an expression tree in untyped lambda calculus. Click to zoom in! (You can also get the SVG version.)

A random expression tree represented using the bound library.
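
For orientation, the B, F, and Scope labels in the picture correspond to the library’s core types, which look essentially like this:

    -- A variable is either bound by the enclosing Scope (B) or free (F).
    data Var b a
      = B b     -- bound: refers to the enclosing Scope
      | F a     -- free: escapes the enclosing Scope

    -- An expression with one extra layer of bound variables.  Note that the
    -- free (F) leaves carry whole sub-expressions (f a), not just variable
    -- names; this is what lets entire trees be lifted without traversing them.
    newtype Scope b f a = Scope { unscope :: f (Var b (f a)) }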

There are a couple interesting things to note:

  • Important observation: for a given expression there are multiple distinct representations! This is how the library manages to be efficient (what Edward refers to as “you can lift whole trees”). However, this is for all intents and purposes an implementation detail: whenever you test for equality using (==), the representation is automatically canonicalized. Note that the tree shown above is not canonical.

  • The F node negates the effect of the enclosing Scope on its children. Nested F nodes will negate additional nested Scopes, until there are no more Scopes to negate. This is enforced by the type system, so you can’t screw this up by accident.

    An F node can be pushed towards the leaves: in doing so, it replicates itself whenever it encounters a fork in the tree. Pushing every F node to the leaves produces the canonical representation.

  • The B leaf binds to the prevailing Scope – i.e. the innermost scope that isn’t negated. If all Scopes have been negated, then B leaves cannot appear (also enforced by the type system).

    If the type variable b is not (), then one also needs to specify which subvariable of the prevailing Scope it should bind to. This is used to bind multiple variables simultaneously.

  • Monadic bind (>>=) applies the given function to every variable and grafts the result in its place. This is used in substitution, a.k.a. instantiation.

  • Lifting creates a Scope from an existing expression. In traditional implementations based on de Bruijn indices, this would require a traversal of the entire expression, but in the bound library this is done simply by wrapping the entire tree inside F, which negates the effect of the enclosing Scope.

    Lifting occurs whenever an expression needs to be substituted into something that is deeper in scope.
