# Haskell library: bound

Learning how Edward Kmett's bound library worked was quite a challenge for me. Even though there is a nice tutorial, I found it rather difficult to penetrate – the puns, while humorous, didn't help either. Moreover, polymorphic recursion itself is a rather strange concept that takes some getting used to.

In the end, I had to read through code very carefully in order to understand the inner workings of the library. In the process, I made some sketches to guide myself. Not surprisingly, the pictures themselves explain the concept much more clearly than the code or the text! (At least I found that to be the case, YMMV.)

Here's a picture that shows an expression tree in untyped lambda calculus. Click to zoom in! (You can also get the SVG version.)

There are a couple of interesting things to note:

• Important observation: for a given expression there are multiple distinct representations! This is how the library manages to be efficient (what Edward refers to as "you can lift whole trees"). However, this is for all intents and purposes an implementation detail: whenever you test for equality using (==), the representation is automatically canonicalized. Note that the tree shown above is not canonical.

• The F node negates the effect of the enclosing Scope on its children. Nested F nodes negate additional nested Scopes, until there are no more Scopes to negate. This is enforced by the type system, so you can't screw this up by accident.

F nodes can be pushed towards the leaves: in doing so, an F node replicates itself whenever it encounters a fork in the tree. Pushing every F node to the leaves produces the canonical representation.

• The B leaf binds to the prevailing Scope, i.e. the innermost Scope that isn't negated. If all Scopes have been negated, then B leaves cannot appear (also enforced by the type system).

If the type variable b is not (), then one also needs to specify which subvariable of the prevailing Scope it should bind to. This is used to bind multiple variables simultaneously.

• Monadic bind (>>=) applies the given function to every variable and grafts the result in its place. This is used in substitution, a.k.a. instantiation.

• Lifting creates a Scope from an existing expression. In traditional implementations based on de Bruijn indices, this would require a traversal of the entire expression, but in the bound library this is done simply by wrapping the entire tree inside F, which negates the effect of the enclosing Scope.

Lifting occurs whenever an expression needs to be substituted into something that is deeper in scope.
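To make the pictures concrete, here is a sketch of the core data types, modeled on the ones in bound (simplified: the real Scope is a newtype with a record accessor and a large set of instances, and the Exp type follows the example in the library's documentation):

```haskell
-- A variable is either bound (B) to some enclosing Scope, or free (F).
-- Crucially, F wraps an entire subtree, not just a name; this is what
-- lets lifting skip the traversal that de Bruijn indices would need.
data Var b a = B b | F a

-- A Scope wraps an expression whose leaves are either bound variables
-- or whole unlifted subexpressions.
newtype Scope b f a = Scope (f (Var b (f a)))

-- Untyped lambda calculus:
data Exp a
  = V a                   -- variable
  | Exp a :@ Exp a        -- application
  | Lam (Scope () Exp a)  -- lambda binding a single variable (b = ())
```

The b parameter of Scope is the type used to name the bound variables: with b = () there is only one, while a type like Int allows a B leaf to pick out one of several variables bound simultaneously, matching the picture above.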

# Löb and möb in JavaScript

Strange loops in Haskell, but now in JavaScript! :D

The name of this function probably comes from Löb's theorem, of which the loeb function is a sort of computational analogue.

You may or may not want to read up on the original Haskell version and commentary by David Luposchainsky first.

For the adventurous, here's a JSFiddle of this entire article (results will only appear in the developer console).

## Remark about lazy evaluation

Since JavaScript uses strict/eager evaluation, lazy values must be indicated explicitly. We denote them here using the Lazy prefix:

-- Haskell
type Lazy a = () -> a

This makes the type signatures slightly more complicated than usual.
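Concretely, a lazy value in this encoding is just a zero-argument function (a thunk), and calling the function forces the value:

```javascript
// Lazy number: a thunk that computes 2 when forced.
var lazyTwo = function() { return 2; };

// Forcing the thunk by calling it:
console.log(lazyTwo()); // prints 2
```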

## Löb

First, let's look at the loeb function, specialized for arrays. The original lazy Haskell definition looked like this when written out fully:

-- Haskell
loeb :: [[a] -> a] -> [a]
loeb formulas = results
  where evaluate formula = formula results
        results = map evaluate formulas

In an eager language, this needs to be modified slightly:

-- Haskell
loeb :: [[Lazy a] -> a] -> [Lazy a]
loeb formulas = results
  where evaluate formula () = formula results
        results = map evaluate formulas

This has the slight downside that the results must be manually forced afterwards.

Translating into JavaScript is straightforward, but the code is uglier:

function loeb(formulas) {
  function evaluate(formula) {
    return function() {
      return formula(results);
    };
  }
  var results = formulas.map(evaluate);
  return results;
}

The typical use for this function is as a spreadsheet evaluator. Here's an example of a simple 3-cell spreadsheet:

A1:       2
A2:       3
A3:  =A1+A2

console.log("spreadsheet result: ", loeb([
  function(cells) { return 2; },
  function(cells) { return 3; },
  function(cells) { return cells[0]() + cells[1](); }
]).map(function(x){ return x(); }));

The .map(function(x){ return x(); }) forces every cell to be evaluated. You should see [2, 3, 5] in the console.
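The thunks are what make this work even when a cell refers to a cell that appears later in the array. As an illustration (with made-up formulas), here is a sheet where A1 depends on A3:

```javascript
function loeb(formulas) {
  function evaluate(formula) {
    return function() {
      return formula(results);
    };
  }
  var results = formulas.map(evaluate);
  return results;
}

// A1 refers to A3, which is only defined further down the array.
var sheet = loeb([
  function(cells) { return cells[2]() + 1; }, // A1 = A3 + 1
  function(cells) { return 3; },              // A2 = 3
  function(cells) { return cells[1]() * 2; }  // A3 = A2 * 2
]).map(function(x){ return x(); });

console.log(sheet); // [7, 3, 6]
```

Of course, a circular reference between cells would recurse forever, just as it would loop in a real spreadsheet.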

## Möb

The loeb function can be generalized by adding an explicit map parameter, allowing our choice of the mapping function.

The Haskell type signature looks quite scary, even though the implementation is virtually identical:

-- Haskell
moeb :: (((t -> a) -> Lazy a) -> s -> t) -> s -> t
moeb map formulas = results
  where evaluate formula () = formula results
        results = map evaluate formulas

But since we're in JavaScript-land we don't care about type signatures. We just copy-paste our implementation, but allow the map function to be chosen arbitrarily.

function moeb(map) {
  return function(formulas) {
    function evaluate(formula) {
      return function() {
        return formula(results);
      };
    }
    var results = map(evaluate)(formulas);
    return results;
  };
}

It's easy to recover loeb from moeb. We just need to define a helper function (literally just a curried version of .map).

function map(func) {
  return function(array) {
    return array.map(func);
  };
}

Then, we have that:

var the_same_loeb = moeb(map);

See for yourself :p
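As a quick sanity check, feeding the earlier 3-cell spreadsheet to moeb(map) produces the same [2, 3, 5] (definitions repeated here so the snippet stands alone):

```javascript
function moeb(map) {
  return function(formulas) {
    function evaluate(formula) {
      return function() {
        return formula(results);
      };
    }
    var results = map(evaluate)(formulas);
    return results;
  };
}

function map(func) {
  return function(array) {
    return array.map(func);
  };
}

var the_same_loeb = moeb(map);

var result = the_same_loeb([
  function(cells) { return 2; },
  function(cells) { return 3; },
  function(cells) { return cells[0]() + cells[1](); }
]).map(function(x){ return x(); });

console.log(result); // [2, 3, 5]
```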

We can also define the fixpoint combinator (a.k.a. Y combinator). But first we need another helper function, the identity function.

function id(x) {
  return x;
}

Then the fixpoint combinator is just:

var fix = moeb(id);

Here's an example of using the fixpoint combinator to implement the factorial function. Not the sanest way of implementing factorials, I might add.

var factorial = fix(function(fac) {
  return function(x) {
    return x > 0 ? x * fac()(x - 1) : 1;
  };
})();
console.log("factorial of 3 is", factorial(3));

# Introduction to the GNU Build System (Autotools)

This is intended to be a brief introduction to the GNU build system (Autotools) for beginners. The reader is assumed to have a basic understanding of the process of compilation and linking.

## What is a build system? ⇡

Running the compiler and linker by hand can be a tedious chore when there are numerous files, as is typical in large projects. This is where the build system comes in – it automates the workflow to aid the developers as well as the users in the process of building software. The process can be roughly divided into three stages:

1. configuration: adapting the build process to the target system;
2. compilation and linking: translating source code into machine code;
3. installation: copying the final product to its final location.

Build systems assist in some or all of these steps.

## What is Autotools? ⇡

Build systems come in all shapes and varieties. For this article we focus on the GNU build system, also known as Autotools, which is one of the oldest build systems and remains quite popular on Unix-like platforms.

An easy way to tell if a project uses Autotools is to look for the presence of its primary input files:

• configure.ac, and
• Makefile.am.

Note that Autotools is not a monolithic build system. Rather, it is more like a collection of different tools that work together to help build a project. Of these tools, the two most critical ones are:

Autoconf is a tool that generates a configure script for the configuration step. The script detects the available features of the target system and adjusts the build process where needed. The script is written in the shell language, but in a somewhat unusual way to maximize portability across different platforms. It is automatically generated from the input file (configure.ac) via the m4 macro language.

Automake is a tool that aids in the construction of makefiles. It consumes input files (Makefile.am) to generate makefile precursors, known as Makefile.in. These precursors are later consumed by the configure script to construct the complete makefiles, which may then be used to build and install the program.
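To give a feel for what these input files look like, here is a minimal, hypothetical example for a single-file C program named hello (real projects contain considerably more):

```
# configure.ac
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

```
# Makefile.am
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

Running autoreconf -i on such a project generates configure and Makefile.in, after which the usual ./configure and make work as described below.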

Here is a highly simplified sketch of the typical Autotools workflow:

(A more detailed diagram as well as a description can be found on Wikipedia.)

1. Initially, the developer runs autoreconf to generate the configure script from configure.ac and a Makefile.in from every Makefile.am (of which there may be several, hidden in the subdirectories):

autoreconf -i

2. Afterwards, the user runs ./configure to generate a Makefile from every Makefile.in.

./configure

Generally, when the user downloads a package, step 1 has already been done, hence the user only needs to run the configure script. The user does not need to invoke autoreconf unless the configure script is absent.

If you are interested in learning how to use Autotools in your own projects, consider reading the official docs or one of the many Autotools tutorials.

## How do I build programs that use Autotools? ⇡

First, make sure you have changed the working directory to the project directory that contains the configure script.

Next, skim through the README and/or INSTALL documents to make sure that any required dependencies are installed. Note that if the dependencies are installed in an unusual location, you may need to adjust the search paths for the compiler and linker accordingly.

This must be done before running configure (the next step), otherwise it may fail or link against the wrong library.

Finally, you can then build the program by following the traditional 3-step procedure:

./configure
make
make install

However, sometimes this fails, or perhaps the default configuration is not what you want.

### What should I do if ./configure fails? ⇡

The first thing to do is to look for the same error message in config.log. Just above the error message there will be a record of the last few actions configure attempted before it failed.

Most of the steps in a configure script are feature tests: they typically run a test program to find out if some feature is available and whether it has any specific quirks. If a test fails and there is no fallback plan, then the entire configure script is aborted. There will usually be an error message associated with this (e.g. a compiler error) but it is buried inside config.log.

Sometimes the reasons are not obvious, or the configure script may even give misleading reasons! Therefore, always dig into the log to uncover the true cause. It could be an improperly set environment variable, a missing library/tool, a flag that needs to be overridden, etc.

In the worst case, configure.ac may have been faulty, and as a result you may have to debug the configure script itself.

### How do I customize the configuration process? ⇡

To see what customizations are available, run

./configure --help

The options available will vary from project to project, so read the list carefully. In my experience, the two most commonly used flags are:

• --prefix=/some/path: sets the path to which the project will be installed. If not provided, the default is /usr/local, which means

• executables are installed to /usr/local/bin,
• libraries are installed to /usr/local/lib,
• manuals are installed to /usr/local/man, and
• additional files are installed to /usr/local/share.

A commonly used alternative is the $HOME directory, useful if you don't have superuser privileges.

• --with-somepackage=someflags: overrides the linker flag(s) used for somepackage. This is useful if you want to use a substitute library that the script is not aware of. Be sure to quote the argument if it contains special characters or consists of more than one flag.

For example, if a project expects the BLAS library and you wish to use Intel MKL instead, you can specify --with-blas=-lmkl_rt.