Thursday, 28 May 2009

A really simple tracing debugger: part 2

In my last post I showed how to get a readable execution trace of a process using a simple ptrace()-based tracer (itrace) and binutils' addr2line. By doing a bit more work we can extract the function call tree from the trace. In order to distinguish callers and callees, we can use the stack pointer (%esp), which itrace outputs.

Here's the call tree for a "hello world" program using glibc 2.9 on i386, statically linked:

$ ./itrace ./hellow-static | python treeify.py ./hellow-static
_start ??:0 8048130
  __libc_start_main glibc/csu/libc-start.c:93 8048220
    _dl_aux_init glibc/elf/dl-support.c:165 80518e0
    _dl_discover_osversion glibc/elf/../sysdeps/unix/sysv/linux/dl-sysdep.c:67 80521e0
      __uname ??:0 806e160
    __pthread_initialize_minimal glibc/csu/libc-tls.c:247 8048750
      __libc_setup_tls glibc/csu/libc-tls.c:111 8048550
        __sbrk glibc/misc/sbrk.c:34 80505d0
          __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
          __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
        memcpy glibc/string/memcpy.c:33 804fc10
      init_slotinfo glibc/csu/libc-tls.c:83 80486cb
        init_static_tls glibc/csu/libc-tls.c:100 8048708
  _dl_setup_stack_chk_guard glibc/csu/../sysdeps/unix/sysv/linux/dl-osinfo.h:76 8048283
    __libc_init_first glibc/csu/../sysdeps/unix/sysv/linux/init-first.c:44 80522f0
      __setfpucw glibc/math/../sysdeps/i386/setfpucw.c:29 8056270
      __libc_init_secure glibc/elf/enbl-secure.c:34 8052190
      _dl_non_dynamic_init glibc/elf/dl-support.c:235 8051b50
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        _dl_init_paths glibc/elf/dl-load.c:630 8072ae0
          _dl_important_hwcaps glibc/elf/dl-support.c:306 8051b20
          __libc_malloc glibc/malloc/malloc.c:3540 804eb60
            malloc_hook_ini glibc/malloc/hooks.c:35 804f360
              ptmalloc_init glibc/malloc/arena.c:426 804b370
                ptmalloc_init_minimal glibc/malloc/arena.c:374 804b39f
                  __getpagesize glibc/misc/../sysdeps/unix/sysv/linux/getpagesize.c:29 8050650
                  __linkin_atfork glibc/nptl/../nptl/sysdeps/unix/sysv/linux/register-atfork.c:115 80513a0
                next_env_entry glibc/malloc/arena.c:344 804b45d
            __libc_malloc glibc/malloc/malloc.c:3540 804eb60
              _int_malloc glibc/malloc/malloc.c:4130 804cae0
                malloc_consolidate glibc/malloc/malloc.c:4835 804afd0
                malloc_init_state glibc/malloc/malloc.c:2412 804a8e0
              sYSMALLOc glibc/malloc/malloc.c:2929 804d200
                __default_morecore glibc/malloc/morecore.c:48 804f9c0
                  __sbrk glibc/misc/sbrk.c:34 80505d0
                    __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
                __default_morecore glibc/malloc/morecore.c:48 804f9c0
                  __sbrk glibc/misc/sbrk.c:34 80505d0
                    __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
          __libc_malloc glibc/malloc/malloc.c:3540 804eb60
            _int_malloc glibc/malloc/malloc.c:4130 804cae0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
      dl_platform_init glibc/elf/../sysdeps/i386/dl-machine.h:273 8051c38
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
    __init_misc glibc/misc/init-misc.c:31 8051120
      rindex glibc/string/../sysdeps/i386/strrchr.S:37 8067a30
    __cxa_atexit glibc/stdlib/cxa_atexit.c:34 8048ab0
      __new_exitfn glibc/stdlib/cxa_atexit.c:63 8048960
    __libc_csu_init glibc/csu/elf-init.c:65 80487b0
      _init /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crti.S:15 80480f4
        _init ??:0 8048116
          frame_dummy crtstuff.c:0 80481b0
            __register_frame_info ??:0 80a2b50
              __register_frame_info_bases ??:0 80a2ab0
                __i686.get_pc_thunk.bx ??:0 80a29f9
          __do_global_ctors_aux crtstuff.c:0 80a4b10
        _init /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crtn.S:10 8048120
    _setjmp glibc/setjmp/../sysdeps/i386/bsd-_setjmp.S:36 8048830
    main ??:0 80481f0
      _IO_puts glibc/libio/ioputs.c:34 8048b10
        strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        _IO_new_file_xsputn glibc/libio/fileops.c:1288 8065870
          _IO_new_file_overflow glibc/libio/fileops.c:829 80661a0
            _IO_doallocbuf glibc/libio/genops.c:419 8049780
              _IO_file_doallocate glibc/libio/filedoalloc.c:88 808bd50
                _IO_file_stat glibc/libio/fileops.c:1225 8065b90
                  ___fxstat64 glibc/io/../sysdeps/unix/sysv/linux/fxstat64.c:46 804fe20
                mmap glibc/misc/../sysdeps/unix/sysv/linux/i386/mmap.S:81 8051080
                _IO_setb glibc/libio/genops.c:404 8049670
            _IO_new_do_write glibc/libio/fileops.c:493 8065a50
          _IO_default_xsputn glibc/libio/genops.c:452 8049850
      _IO_acquire_lock_fct glibc/libio/libioP.h:968 8048b8d
    exit glibc/stdlib/exit.c:34 8048870
      __libc_csu_fini glibc/csu/elf-init.c:91 8048770
      _fini /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crti.S:41 80a5574
        _fini ??:0 80a5587
          __do_global_dtors_aux crtstuff.c:0 8048160
            fini glibc/dlfcn/dlerror.c:207 80975e0
              check_free glibc/dlfcn/dlerror.c:188 8097560
            __deregister_frame_info ??:0 80a3c30
            __deregister_frame_info_bases ??:0 80a3b20
              __i686.get_pc_thunk.bx ??:0 80a29f9
        _fini /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crtn.S:21 80a558c
      _IO_cleanup glibc/libio/genops.c:1007 8049fd0
        _IO_flush_all_lockp glibc/libio/genops.c:823 8049db0
          _IO_new_file_overflow glibc/libio/fileops.c:829 80661a0
            _IO_new_do_write glibc/libio/fileops.c:493 8065a50
              new_do_write glibc/libio/fileops.c:505 8065760
                _IO_new_file_write glibc/libio/fileops.c:1261 8065a80
                  ?? ??:0 8
                __libc_write ??:0 804ffa0
                  __write_nocancel ??:0 804ffaa
      _IO_unbuffer_write glibc/libio/genops.c:951 8049fe8
        _IO_new_file_setbuf glibc/libio/fileops.c:445 8066380
          _IO_default_setbuf glibc/libio/genops.c:562 80496d0
            _IO_new_file_sync glibc/libio/fileops.c:891 80660d0
            _IO_setb glibc/libio/genops.c:404 8049670
      _exit glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:25 804fdc0
Believe it or not, this trace is quite illuminating. Working out that these calls will happen by reading the glibc source could take a long time!

The Python script below (treeify.py) does two things:

  • Firstly, it filters the trace to include only function entry points. We can just about infer function entry points from addr2line's output. We assume that when a function name (plus filename and line number) appears for the first time in the trace, the current address is the function's entry point. This doesn't work fully when inline functions are instantiated multiple times though. We could use nm to find symbol addresses, but addr2line gives us source filenames.
  • Secondly, it works out the nesting of the call tree by looking at the stack pointer.
import subprocess
import sys

def read():
    for line in sys.stdin:
        try:
            regs = [int(x, 16) for x in line.split(" ")]
            yield {"eip": regs[0], "esp": regs[1]}
        # Ignore lines interspersed with other output!
        except (ValueError, IndexError):
            pass

def addr2line(iterable):
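    # Feed each eip to a single long-running addr2line process; with -f,
    # addr2line prints two lines per address: function name, then file:line.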
    proc = subprocess.Popen(["addr2line", "-e", sys.argv[1], "-f"],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    for regs in iterable:
        proc.stdin.write("%x\n" % regs["eip"])
        a = proc.stdout.readline().rstrip("\n")
        b = proc.stdout.readline().rstrip("\n")
        regs["func"] = "%s %s" % (a, b)
        yield regs

def entry_points(iterable):
    funcs = {}
    # We treat the first address we see for the function as its entry
    # point, and only report those entries from this point on.
    for regs in iterable:
        func = regs["func"].split(":")[0]
        if funcs.setdefault(func, regs["eip"]) == regs["eip"]:
            yield regs

def add_nesting(iterable):
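    # Sentinel larger than any real stack pointer value, so the first
    # sample always nests inside it.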
    stack = [2 ** 64]
    for regs in iterable:
        stack_pos = regs["esp"]
        if stack_pos < stack[-1]:
            stack.append(stack_pos)
        while stack_pos > stack[-1]:
            stack.pop()
        regs["indent"] = "  " * len(stack)
        yield regs

for x in add_nesting(entry_points(addr2line(read()))):
    print x["indent"], x["func"], "%x" % x["eip"]

Saturday, 16 May 2009

A really simple tracing debugger

gdb can be really useful for debugging when it prints the information you want, but sometimes it will fail to give a useful backtrace, and it can be hard to find out what happened just before your process crashed.

I faced that problem when porting glibc to Native Client. gdb (or strace -i) could tell me the value of the instruction pointer when the process crashed, but it couldn't produce a backtrace because it didn't know how to read memory with NaCl's x86 segmentation setup. (I should point out that gdb has since been ported.) Often I could look up the address with addr2line or objdump -d to find the cause of the problem. However, that didn't help when the process died by jumping to address 0, or when it jumped to _start when running atexit handlers.

I found another way to do debugging, using a trace of the program's execution.

So, here's how to create a debugger with a single shell pipeline (plus a small C program) that works on statically-linked executables.

The first part is a tool called itrace, which starts a subprocess and single-steps its execution using Linux's ptrace() system call and prints the process's eip and esp registers (instruction pointer and stack pointer) at every step. (The code is below.)

Here's what we get if we run itrace on a simple statically-linked executable:

$ ./itrace ./hellow-static | less
8048130 ffce4a50
8048132 ffce4a50
8048133 ffce4a54
8048135 ffce4a54
8048138 ffce4a50
...
To make this intelligible, all we have to do is pipe the output of itrace through addr2line. This tool is part of binutils and translates addresses to source code filenames and line numbers using the debugging information in the executable. Piping through uniq as well collapses runs of duplicate lines:
$ ./itrace ./hellow-static | addr2line -e ./hellow-static | uniq | less
??:0
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:106
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:106
...
/work/glibc/stdlib/exit.c:88
/work/glibc/stdlib/exit.c:90
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:25
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:29
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:30
addr2line prints "??:0" for addresses that it can't find in debugging information. If your executable hasn't been built with debugging information - the static libc.a in Debian/Ubuntu isn't built with debugging enabled - you can pass the -f option to addr2line to print function names as well, and then filter out the useless "??:0" lines:
$ cat >hellow.c <<END
#include <stdio.h>
int main() { printf("Hello, world!\n"); return 0; }
END
$ gcc -static hellow.c -o hellow-static
$ ./itrace ./hellow-static | addr2line -e ./hellow-static -f | grep -v "??:0" | uniq | less
_start
__libc_start_main
_dl_aux_init
__libc_start_main
_dl_discover_osversion
__uname
_dl_discover_osversion
__libc_start_main
__pthread_initialize_minimal
__libc_setup_tls
...
_IO_default_setbuf
_IO_new_file_setbuf
_IO_unbuffer_write
_IO_cleanup
exit
_exit
This tells us what happened before the program exited. In this case, it was executing exit() and flushing stdio buffers.

Tracing debugging can often be more helpful than postmortem debugging. Debugging with printf is still popular, after all! However, single-stepping using ptrace() has quite a high overhead, so it is not really suitable for longer-running processes. Valgrind could potentially produce an execution trace more efficiently using its dynamic translation approach, but that would still be quite slow. Modern x86 CPUs have a "Branch Trace Store" feature for recording the addresses of basic blocks as they are executed, which could be faster still, and there is a proposal to make this available in Linux through the ptrace() interface.

itrace.c

This assumes i386, so if you're on an x86-64 system you can build it with:
gcc -m32 itrace.c -o itrace
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#include <linux/user.h>
#include <sys/ptrace.h>

int main(int argc, char **argv)
{
  int pid = fork();
  if(pid == 0) {
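    /* Child: ask to be traced, then exec the target program. */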
    if(ptrace(PTRACE_TRACEME) < 0) {
      perror("ptrace");
      _exit(1);
    }
    execvp(argv[1], argv + 1);
    perror("exec");
    _exit(1);
  }
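  /* Parent: at every stop, print eip/esp and single-step the child. */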
  while(1) {
    int status;
    struct user_regs_struct regs;
    if(waitpid(pid, &status, 0) < 0)
      perror("waitpid");
    if(!WIFSTOPPED(status))
      break;
    if(ptrace(PTRACE_GETREGS, pid, 0, &regs) < 0)
      perror("ptrace/GETREGS");
    printf("%lx %lx\n", regs.eip, regs.esp);
    if(ptrace(PTRACE_SINGLESTEP, pid, 0, 0) < 0)
      perror("ptrace/SINGLESTEP");
  }
  return 0;
}

Monday, 4 May 2009

Progress on Native Client

Back in January I wrote that I was porting glibc to Native Client. I have made some good progress since then.

The port is at the stage where it can run simple statically-linked and dynamically-linked executables, from the command line and from the web browser.

In particular, Python works. I have put together a simple interactive top-level that demonstrates running Python from the web browser.

Upstream NaCl doesn't support any filename-based calls such as open(), but this port does. In this setup, open() does not of course access the real filesystem on the host system. Instead, NaCl-glibc sends a message to the Javascript code on the web page, which can fetch the requested file, create a file descriptor for it using the plugin's API, and send the file descriptor to the NaCl subprocess in reply.

This has not involved modifying Python at all, although I have added an extension module to wrap a couple of NaCl-specific system calls (imc_sendmsg() and imc_recvmsg()). The Python build runs to completion, including the parts where it runs the executable it builds. A simple instruction-rewriting trick means that dynamically-linked NaCl executables can run outside of NaCl, directly under Linux, linked against the regular Linux glibc. This means we can avoid some of the problems associated with cross-compilation when building for NaCl.

This work has involved extending Native Client:

  • Adding support for dynamic loading of code. Initially I have focused on just making it work. Now I'll focus on ensuring this is secure.
  • Writing a custom browser plugin for NaCl which allows Javascript code to handle asynchronous callbacks from NaCl subprocesses.
  • Making various changes to the NaCl versions of gcc and binutils to support dynamically linked code.
Hopefully these changes can get merged upstream eventually. Some of the toolchain changes have already gone in.

See the web page for instructions on how to get the code and try it out.

Saturday, 18 April 2009

Python variable binding semantics, part 2

Time for me to follow up my previous blog post and explain the code snippets.

Snippet 1

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda: x)
for f in funcs:
    print f()
Although you might expect this Python code snippet to print "1 2 3", it actually prints "3 3 3". The reason for this is that the same mutable variable binding for x is used across all iterations of the loop. The for loop does not create new variable bindings.

The function closures that are created inside the loop do not capture the value of x; they capture the cell object that x is bound to under the hood (or the globals dictionary if this code snippet is run at the top level), and each iteration mutates the same cell object.
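You can see the shared cell directly by inspecting a closure's func_closure attribute in CPython 2 (the snippet below wraps the loop in a function, since at the top level x would live in the globals dictionary instead):

def make_funcs():
    funcs = []
    for x in (1, 2, 3):
        funcs.append(lambda: x)
    return funcs

funcs = make_funcs()
# All three closures share the same cell object for x:
print funcs[0].func_closure[0] is funcs[2].func_closure[0]  # True
print funcs[0].func_closure[0].cell_contents                # 3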

Interestingly, Python does not restrict the target of the for loop to be a variable or tuple of variables. It can be any lvalue expression that can appear on the left hand side of an assignment, including indexing expressions. For example:

x = [10, 20]
for x[0] in (1,2,3):
    pass
print x # Prints [3, 20]

Snippet 2

This problem with Python's variable binding semantics is not specific to lambdas; it also occurs if you use def.
funcs = []
for x in (1, 2, 3):
    def myfunc():
        return x
    funcs.append(myfunc)
for f in funcs:
    print f()
This also prints "3 3 3".

Snippet 3

The remaining snippets are examples of ways to work around this problem. They print the desired "1 2 3".

One way is to use default arguments:

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda x=x: x)
for f in funcs:
    print f()
This is actually the trick that was used to get the effect of lexical scoping before lexical scoping was added to Python.

The default argument captures the value of x at the point where the function closure is created, and this value gets rebound to x when the function is called.

Although it is concise, I'm not too keen on this workaround. It is an abuse of default arguments. There is a risk that you accidentally pass too many arguments to the function and thereby override the captured value of x.

Snippet 4

funcs = []
for x in (1, 2, 3):
    def g(x):
        funcs.append(lambda: x)
    g(x)
for f in funcs:
    print f()
This is quite an ugly workaround, but it's perhaps the most general one. The loop body is wrapped in a function. The x inside function g and the x outside are statically different variable bindings. A new mutable binding of x is created on every call to g, but this binding is never modified.

Snippets 5 and 6

The remaining two snippets are attempts to make the code less ugly, by giving names to the intermediate functions that are introduced to rebind x and thereby give the desired behaviour.
funcs = []
def append_a_func(x):
    funcs.append(lambda: x)
for x in (1, 2, 3):
    append_a_func(x)
for f in funcs:
    print f()

def make_func(x):
    return lambda: x
funcs = []
for x in (1, 2, 3):
    funcs.append(make_func(x))
for f in funcs:
    print f()
I suppose there is a downside to de-uglifying the code. If the code looks too normal, there's a risk that someone will refactor it to the incorrect version without testing that it works properly when looping over lists containing multiple items.

Can the language be fixed?

The root cause is that Python does away with explicit variable binding declarations. If a variable is assigned within a function -- including being assigned as the target of a for loop -- it becomes local to the function, but variables are never local to smaller statement blocks such as a loop body. Python doesn't have block scoping.

There are a couple of naive ways in which you might try to fix the language so that the code does what you would expect, both of which have problems.

We could change the semantics of function closure creation so that closures take a snapshot of variables' values. But this would break mutually recursive function definitions, and cases where functions are referred to before they are defined, such as:
def f(): # This "def" would raise an UnboundLocalError or a NameError
    g()
def g():
    pass
We could change the semantics of for loops so that the target variable is given a variable binding that is local to the loop body. But this wouldn't change the status of variable bindings created by assignment, so this code would still have the unwanted behaviour:
for x in (1, 2, 3):
    y = x
    funcs.append(lambda: y)

Linting

It should be possible to create a lint tool to detect this hazard. It could look for closures that capture variables that are assigned by or in loops. I wonder how many false positives it would give on a typical codebase.
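As a rough illustration, here is a minimal sketch of such a check using Python 2.6's ast module. It is deliberately naive: it flags any closure inside a for loop that reads the loop variable, so the default-argument workaround from snippet 3 would show up as a false positive.

import ast
import sys

class LoopClosureChecker(ast.NodeVisitor):
    def __init__(self):
        self.warnings = []

    def visit_For(self, node):
        # Names bound by the loop target, e.g. "x" in "for x in ...".
        targets = set(n.id for n in ast.walk(node.target)
                      if isinstance(n, ast.Name))
        # Flag closures in the loop body that read any of those names.
        for child in ast.walk(node):
            if isinstance(child, (ast.Lambda, ast.FunctionDef)):
                for name in ast.walk(child):
                    if (isinstance(name, ast.Name)
                        and isinstance(name.ctx, ast.Load)
                        and name.id in targets):
                        self.warnings.append((name.lineno, name.id))
        self.generic_visit(node)

checker = LoopClosureChecker()
checker.visit(ast.parse(open(sys.argv[1]).read()))
for lineno, name in checker.warnings:
    print "line %d: closure may capture loop variable %r" % (lineno, name)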

Javascript

Javascript suffers from the same problem:
funcs = [];
for(var x = 0; x < 3; x++) {
    funcs.push(function() { return x; })
}
for(var i = 0; i < funcs.length; i++) {
    print(funcs[i]());
}
This code prints "3 3 3" when run using rhino.

In Javascript's case the affliction is entirely gratuitous: var declarations are implicitly hoisted up to the top of the function, for no apparent reason, and unlike Python's function-level scoping this is not the result of a trade-off.

The good news, though, is that the problem is recognised and a fix to Javascript is planned. I think the plan is for Ecmascript 3.1 to introduce a let declaration as an alternative to var without the hoisting behaviour.

List comprehensions

Back to Python. The same problem also applies to list comprehensions and generator expressions.
funcs = [(lambda: x) for x in (1, 2, 3)]
for f in funcs:
    print f()

funcs = list((lambda: x) for x in (1, 2, 3))
for f in funcs:
    print f()

These both print "3 3 3".

(Note that I added brackets around the lambdas for clarity but the syntax does not require them.)

This is forgivable for list comprehensions, at least in Python 2.x, because the assignment to x escapes into the surrounding function. (See my earlier post.)

But for generators (and for list comprehensions in Python 3.0), the scope of x is limited to the comprehension. Semantically, it would be easy to limit the scope of x to within a loop iteration, so that each iteration introduces a new variable binding.

Tuesday, 31 March 2009

Python variable binding semantics

What do the six following chunks of code do, and what would you like them to do? Which do you prefer?
funcs = []
for x in (1, 2, 3):
    funcs.append(lambda: x)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    def myfunc():
        return x
    funcs.append(myfunc)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda x=x: x)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    def g(x):
        funcs.append(lambda: x)
    g(x)
for f in funcs:
    print f()

funcs = []
def append_a_func(x):
    funcs.append(lambda: x)
for x in (1, 2, 3):
    append_a_func(x)
for f in funcs:
    print f()

def make_func(x):
    return lambda: x
funcs = []
for x in (1, 2, 3):
    funcs.append(make_func(x))
for f in funcs:
    print f()

Sunday, 22 February 2009

OpenStreetMap

I tried out OpenStreetMap the other day after reading the recent article about it on LWN. Amazingly it looks pretty complete for the parts of the UK and US that I looked at. I don't know how much comes from people gathering data by travelling around or by entering data from satellite photos or out-of-copyright maps, but whichever of these it is, it's very impressive.

It is also better than Google Maps in some ways:

  • It has more details: it shows footpaths, rivers and streams, wooded areas, and paths across parks. It has outlines of interesting buildings where people have entered data for them.
  • I think the default renderer (Mapnik) looks better than the Google Maps equivalent, especially when zoomed out to a level where streets are one pixel thick but still distinguishable. Google Maps gives too much prominence to the major roads -- it renders them thickly, in bright colours, and with large labels, which tends to drown out the details. Mapnik is more subtle. The map just looks more interesting.
  • It uses more of the browser window and doesn't waste as much space on sidebars. It's almost a trivial point, but it makes a difference.

Friday, 16 January 2009

Testing using golden files in Python

This is the third post in a series about automated testing in Python (earlier posts: 1, 2). This post is about testing using golden files.

A golden test is a fancy way of doing assertEquals() on a string or a directory tree, where the expected output is kept in a separate file or files -- the golden files. If the actual output does not match the expected output, the test runner can optionally run an interactive file comparison tool such as Meld to display the differences and allow you to selectively merge the differences into the golden file.

This is useful when

  • the data being checked is large - too large to embed into the Python source; or
  • the data contains relatively inconsequential details, such as boilerplate text or formatting, which might be changed frequently.
As an example, suppose you have code to format some data as HTML. You can have a test which creates some example data, formats this as HTML, and compares the result to a golden file. Something like this:
import os

import golden_test

class ExampleTest(golden_test.GoldenTestCase):

    def test_formatting_html(self):
        obj = make_some_example_object()
        temp_dir = self.make_temp_dir()
        format_as_html(obj, temp_dir)
        self.assert_golden(temp_dir, os.path.join(os.path.dirname(__file__),
                                                  "golden-files"))

if __name__ == "__main__":
    golden_test.main()
By default, the test runs non-interactively, which is what you want on a continuous integration machine, and it will print a diff if it fails. To switch on the semi-interactive mode which runs Meld, you run the test with the option --meld.
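For example, if the test case above were saved as example_test.py (a name chosen just for illustration):

$ python example_test.py          # non-interactive: prints a diff on failure
$ python example_test.py --meld   # runs Meld so you can merge the differences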

Here is a simple version of the test helper (taken from here):

import os
import subprocess
import sys
import unittest

class GoldenTestCase(unittest.TestCase):

    run_meld = False

    def assert_golden(self, dir_got, dir_expect):
        assert os.path.exists(dir_expect), dir_expect
        proc = subprocess.Popen(["diff", "--recursive", "-u", "-N",
                                 "--exclude=.*", dir_expect, dir_got],
                                stdout=subprocess.PIPE)
        stdout, stderr = proc.communicate()
        if len(stdout) > 0:
            if self.run_meld:
                # Put expected output on the right because that is the
                # side we usually edit.
                subprocess.call(["meld", dir_got, dir_expect])
            raise AssertionError(
                "Differences from golden files found.\n"
                "Try running with --meld to update golden files.\n"
                "%s" % stdout)
        self.assertEquals(proc.wait(), 0)

def main():
    if sys.argv[1:2] == ["--meld"]:
        GoldenTestCase.run_meld = True
        sys.argv.pop(1)
    unittest.main()
(It's a bit cheesy to modify global state on startup to enable melding, but because unittest doesn't make it easy to pass parameters into tests this is the simplest way of doing it.)

Golden tests have the same sort of advantages that are associated with test-driven development in general.

  • Golden files are checked into version control and help to make changesets self-documenting. A changeset that affects the program's output will include patches that demonstrate how the output is affected. You can see the history of the program's output in version control. (This assumes that everyone runs the tests before committing!)
  • Sometimes you can point people at the golden files if they want to see example output. For HTML, sometimes you can contrive the CSS links to work so that the HTML looks right when viewed in a browser.
  • And of course, this can catch cases where you didn't intend to change the program's output.
My typical workflow is to add a test with some example input that I want the program to handle, and run the test with --meld until the output I want comes up on the left-hand side in Meld. I mark the output as OK by copying it over to the right-hand side. This is not quite test-first, because I am letting the test suggest what its expected output should be. But it is possible to do this in an even more test-first manner by typing the output you expect into the right-hand side.

Other times, one change will affect many locations in the golden files, and adding a new test is not necessary. It's usually not too difficult to quickly eyeball the differences with Meld.

Here are some of the things that I have used golden files to test:

  • formatting of automatically-generated e-mails
  • automatically generated configuration files
  • HTML formatting logs (build_log_test.py and its golden files)
  • pretty-printed output of X Windows messages in xjack-xcb (golden_test.py and golden_test_data). This ended up testing several components in one go:
    • the XCB protocol definitions for X11
    • the encoders and decoders that work off of the XCB protocol definitions
    • the pretty printer for the decoder
On the other hand, sometimes the overhead of having a separate file isn't worth it, and if the expected output is small (maybe 5 lines or so) I might instead paste it into the Python source as a multiline string.

It can be tempting to overuse golden tests. As with any test suite, avoid creating one big example that tries to cover all cases. (This is particularly tempting if the test is slow to run.) Try to create smaller, separate examples. The test helper above is not so good at this, because if you are not careful it can end up running Meld several times. In the past I have put several examples into (from unittest's point of view) one test case, so that it runs Meld only once.

Golden tests might not work so well if components can be changed independently. For example, if your XML library changes its whitespace pretty-printing, the tests' output could change. This is less of a problem if your code is deployed with tightly-controlled versions of libraries, because you can just update the golden files when you upgrade libraries.

A note on terminology: I think I got the term "golden file" from my previous workplace, where other people were using them, and the term seems to enjoy some limited use judging from Google. "Golden test", however, may have been a term that I have made up and that no-one else outside my workplace is using for this meaning.