Monday, 7 December 2009

Modular vs. monolithic build systems

I have spent some time packaging software as Debian packages. While the Debian packaging system has its faults, it has the nice property that it is modular. This post is an attempt to articulate what aspects of Debian packaging -- and other modular build systems -- are worth replicating, and why it is worth co-operating with these systems rather than ignoring them or working around them.

What is a modular build?

  1. A modular build consists of a set of modules,
  2. each of which can be built separately.
  3. Each module's build produces some output (a directory tree).
  4. A module may depend on the outputs of other modules, but it can't reach inside the others' build trees.
  5. There is a common interface that each module provides for building itself.
  6. The build tool can be replaced with another. The description of the module set is separate from the modules themselves.

What is a non-modular, monolithic build?

  1. A monolithic build consists of one big build tree.
  2. Any part of the build can reference any other part via relative filenames.
  3. It might consist of multiple checkouts from version control, but they have to be checked out to specific directory tree locations (as in the Chromium build).

Some examples of modular builds:

  • Build systems/tools:
    • Debian packages (and presumably RPMs too)
    • JHBuild
    • Zero-Install
    • Nix
  • Module interfaces:
    • GNU autotools (./configure && make && make install)
    • Python distutils (setup.py)
  • Software collections:
    • GNOME
    • Xorg (7.0 onwards)
    • Sugar

Examples of monolithic builds:

  • XFree86 (and Xorg 6.9): Before Xorg was modularised, there was a big makefile that built everything, from Xlib to the X server to example X clients.
  • Chromium web browser: This uses a tool called "gyp" to generate a big makefile which compiles individual source files from several libraries, including WebKit, V8 and the Native Client IPC library. It ignores WebKit's own build system.
  • Native Client: One SCons build builds the core code as well as the NPAPI browser plugin and example code; it needs to know how to cross-compile NaCl code as well as compile host system code. Another makefile builds the compiler toolchain from tarballs and patch files that are checked into SVN.
  • CPython: The standard library builds many Python C extensions.

Modular build systems offer a number of advantages:

  • You can download and build only the parts you need. This can be a big help if some modules are huge but seldom change while the modules you work on are small and fast to build.
  • Some systems (such as Debian packages) give you binary packages so you don't need to build the dependencies of the modules that you want to work on. JHBuild doesn't provide this but it could be achieved with a little work.
  • Dependencies are clearer.
  • External interfaces are clearer too.
  • It is possible to change one module's version independently of other modules (to the extent that differing versions are compatible).
  • They are relatively easy to use in a decentralised way. It is easy to create a new version of a module set which adds or removes modules.
  • You don't have to check huge dependencies into your version control system. Some projects check in monster tarballs or source trees, which dwarf the project's own code. If you avoid this practice you will make it easier for distributions to package your software.

The two categories can coexist: Each module may internally be a monolithic build which can be arbitrarily complex. Autotools is an example of that. This is not too bad because at least we have contained the complexity within the module. The layer on top, which connects modules together, can be relatively simple.

Despite its faults, autotools is very amenable to being part of a modular build:

  • The build tree does not need to be kept around after doing "make install".
  • Output can be directed using "--prefix=foo" and "make install DESTDIR=foo".
  • Inputs can be specified via --prefix and PATH and other environment variables.
  • The build tree can be separate from the source tree. It's easy to have multiple build trees with different build options.
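
To make this concrete, here is a minimal sketch (my own, in the spirit of tools like JHBuild rather than code taken from any of them; the directory and prefix arguments are placeholders) of what driving one autotools module through this common interface might look like from Python:

import os
import subprocess

def build_module(source_dir, build_dir, prefix, dest_dir):
    # Every module goes through the same configure/make/install steps;
    # its output is captured under dest_dir rather than being installed
    # system-wide.
    if not os.path.exists(build_dir):
        os.makedirs(build_dir)
    configure = os.path.join(os.path.abspath(source_dir), "configure")
    subprocess.check_call([configure, "--prefix=%s" % prefix], cwd=build_dir)
    subprocess.check_call(["make"], cwd=build_dir)
    subprocess.check_call(["make", "install",
                           "DESTDIR=%s" % os.path.abspath(dest_dir)],
                          cwd=build_dir)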

The systems I listed as modular all have their own problems. The main problem with Debian packages is that they are installed system-wide, which requires root access and makes it difficult to install multiple versions of a package. It is possible to work around this problem using chroots. JHBuild, Zero-Install and Nix avoid this problem. JHBuild and Zero-Install are not so good at capturing immutable snapshots of package sets. Nix is good at capturing snapshots, but Nix makes it difficult to change a library without rebuilding everything that uses it.

Despite these problems, these systems have a nice property: they are layered. It is possible to mix and match modules and replace the build layer. Hence it is possible to build Xorg and GNOME either with JHBuild or as Debian packages. In turn, there is a choice of tools for building Debian source packages. There is even a tool for making sets of Debian packages from JHBuild module descriptions.

These systems do not interoperate perfectly, but they do work and scale.

There are some arguments for having a monolithic system. In some situations it is difficult to split pieces of software into separately-built modules. For example, Plash-glibc is currently built by symlinking the source for the Plash IPC library into the glibc source tree, so that glibc builds it with the correct compiler flags and with the glibc internal header files. Ideally the IPC library would be built as a separate module, but for now it is better not to.

Still, if you can find good module boundaries, it is a good idea to take advantage of them.

Tuesday, 1 December 2009

read_file() and write_file() in Python

Here are two functions I would like to see in the Python standard library:
def read_file(filename):
    fh = open(filename, "r")
    try:
        return fh.read()
    finally:
        fh.close()

def write_file(filename, data):
    fh = open(filename, "w")
    try:
        fh.write(data)
    finally:
        fh.close()

In the first case, it is possible to do open(filename).read() instead of read_file(filename). However, this is not good practice because if you are using a Python implementation that uses a garbage collection implementation other than reference counting, there will be a delay before the Python file object, and the file descriptor it wraps, are freed. So you will temporarily leak a scarce resource.

I have sometimes argued that GC should really know about file descriptors: If the open() syscall fails with EMFILE or ENFILE, the Python standard library should run GC and retry. (However this is probably not going to work for reliably receiving FDs in messages using recvmsg().) But in the case of read_file() it is really easy to inform the system of the lifetime of the FD using close(), so there is no excuse not to do so!
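
As a rough sketch of that retry idea (my illustration, not something the standard library actually does), a wrapper around open() might look like this:

import errno
import gc

def open_retrying_gc(*args):
    # If we have run out of file descriptors, collect garbage -- which
    # closes any unreferenced file objects -- and try once more.
    try:
        return open(*args)
    except (IOError, OSError) as exc:
        if exc.errno not in (errno.EMFILE, errno.ENFILE):
            raise
        gc.collect()
        return open(*args)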

In the second case, again it is possible to do open(filename, "w").write(data) instead of write_file(filename, data), but in the presence of non-refcounting GC it will not only temporarily leak an FD, it will also do the wrong thing! The danger is that the file's contents will be only partially written, because Python file objects are buffered, and the buffer is not flushed until you do close() or flush(). If you use open() this way and test only on CPython, you won't find out that your program breaks on Jython, IronPython or PyPy (all of which normally use non-refcounting GC, as far as I know).

Maybe CPython should be changed so that the file object's destructor does not flush the buffer. That would break some programs but expose the problem. Maybe it could be optional.

read_file() and write_file() calls also look nicer. I don't want the ugliness of open-coding these functions.

Don't write code like this:

fh = open(os.path.join(temp_dir, "foo"), "w")
fh.write(blah)
fh.close()
fh = open(os.path.join(temp_dir, "bar"), "w")
fh.write(stuff)
fh.close()

Instead, write it like this:

write_file(os.path.join(temp_dir, "foo"), blah)
write_file(os.path.join(temp_dir, "bar"), stuff)

Until Python includes these functions in the standard library, I will copy and paste them into every Python codebase I work on in protest! They are too trivial, in my opinion, to be worth the trouble of depending on an external library.

I'd like to see these functions as builtins, because they should be as readily available as open(), for which they are alternatives. However if they have to be imported from a module I won't complain!

Wednesday, 18 November 2009

CapPython revisited

[I've had a draft of this post floating around since June, and it's about time I posted it.]

Guido van Rossum's criticisms of CapPython from earlier in the year are largely right, though I would put the emphasis differently. A lot of existing Python code will not pass CapPython's static verifier. But I was not expecting large amounts of code to pass the verifier.

CapPython was an experiment to see if Python could be tamed using only a static verifier - similar to Joe-E, a subset of Java - without resorting to automated rewriting of code. I had a hunch that it was possible. I also wanted a concrete system to show why I preferred to avoid some of the Python constructs that CapPython prohibits. Some of Joe-E's restrictions are also quite severe, such as prohibiting try...finally in order to prevent non-deterministic execution (something, incidentally, that CapPython does not try to do). It turned out that it was better to abandon the goal of being static-only when it came to loading Python modules.

My goal was that CapPython code should look like fairly idiomatic Python code (and be able to run directly under Python), not that all or most idiomatic Python code would be valid CapPython code. As a consequence, I didn't want CapPython to require special declarations, such as decorators on classes, for making objects encapsulated. Instead I wanted objects to be encapsulated by default. Maybe that was an arbitrary goal.

CapPython managed to meet that goal, but only partially, and only by depending on Python 2.x's unbound methods feature, which was removed in Python 3.0.

The reason my goal was met only partially is that in some circumstances it is necessary to wrap class objects (using a decorator such as @sealed in my earlier post) so that the class's constructor is exposed but inheritance is blocked. Whether this is necessary depends on whether the class object is authorityless. A class object is authorityless if it does not reference any authority-carrying objects (directly, or indirectly through objects captured by method function closures). If a class object is authorityless, it should be safe to grant other, untrusted modules the ability to create derived classes. Otherwise, the class object ought to be wrapped. The problem is that it is not trivial to make this determination. Furthermore, permitting inheritance is the default, and this is not a safe default.
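
To illustrate the distinction (an example of mine, not from CapPython): a class whose methods touch only self and their arguments is authorityless, while a class whose methods close over an authority-carrying object is not, and ought to be wrapped before untrusted code is allowed to subclass it.

class Adder(object):
    # Authorityless: letting untrusted code subclass this grants it
    # nothing it did not already have.
    def add(self, a, b):
        return a + b

def make_logger_class(log_file):
    class Logger(object):
        # Authority-carrying: the method's closure captures log_file,
        # so the class object should be wrapped (e.g. with @sealed)
        # rather than handed out for inheritance.
        def log(self, message):
            log_file.write(message + "\n")
    return Logger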

If we have to go around putting @sealed decorators on all class definitions because the non-@sealed default is not-quite-encapsulated, we may as well consider alternatives where the default is not-at-all-encapsulated.

The main alternative is to resurrect CPython's restricted mode and add a wrapper type for making encapsulated objects (i.e. a non-broken version of Bastion along the lines of my earlier post).

If modifying CPython is considered acceptable, and one's goal is to add object-capabilities to Python in the most practical way (rather than to investigate a specific approach), resurrecting restricted mode is probably a better way to do it than CapPython.

Monday, 28 September 2009

A combined shell and terminal

Last year I wrote about a couple of features that I'd like to see in a hypothetical integrated shell and terminal.

Since then I have implemented my idea! The result is Coconut Shell. It is a combined shell and terminal. It is intended to look and behave much the same as Bash + GNOME Terminal. I am now using it as my main shell/terminal.

I find the most useful feature is finish notifications: When a long-running command finishes, it flashes the window's task bar icon. A trivial example:

Now I no longer have to keep checking minimised windows to see whether a build has finished or whether packages have finished downloading and installing.

It works with tabs too:

This had an unforeseen benefit. It helps when I'm testing GUI applications. When I close down the GUI application, the tab that I ran the application from gets highlighted, which makes it easier to find the tab to re-run the app from. I don't search through windows as much as I used to. (I tend to have a lot of terminal windows and tabs open at a time!)

How it works:

Coconut Shell is implemented in Python. The terminal and shell run in the same process. It uses libvte to provide the terminal emulator widget -- the same as GNOME Terminal. It uses Pyrepl as a readline-alike to read commands from the terminal.

Normal shells rely on Unix process groups to stop backgrounded jobs from reading from the terminal. Coconut Shell doesn't rely on process groups in the same way. Instead, it creates a new TTY FD pair for every command it launches, and it forwards input and output to and from the terminal. This is partly out of necessity (process groups get in the way of reading from multiple TTY FDs at the same time), and partly because it is more robust. It stops jobs from interfering with each other and making themselves unbackgroundable. It also demonstrates that Unix never needed the whole sorry mess of process groups and session IDs and the weirdness they inflict on TTY file descriptors, because job control can be done by forwarding IO instead.
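
Here is a rough sketch of that approach (simplified and hypothetical, not Coconut Shell's actual code): give the command a freshly allocated TTY pair and forward bytes between it and the real terminal. Error handling, terminal modes, window-size propagation and backgrounding are all omitted.

import os
import pty
import select

def run_on_private_tty(argv):
    master_fd, slave_fd = pty.openpty()
    pid = os.fork()
    if pid == 0:
        # Child: use the new TTY as stdin/stdout/stderr and run the command.
        os.close(master_fd)
        os.dup2(slave_fd, 0)
        os.dup2(slave_fd, 1)
        os.dup2(slave_fd, 2)
        os.close(slave_fd)
        os.execvp(argv[0], argv)
    os.close(slave_fd)
    while True:
        ready, _, _ = select.select([0, master_fd], [], [])
        if master_fd in ready:
            try:
                data = os.read(master_fd, 4096)
            except OSError:
                break  # the command closed its side of the TTY
            if not data:
                break
            os.write(1, data)  # command's output -> real terminal
        if 0 in ready:
            os.write(master_fd, os.read(0, 4096))  # keyboard -> command
    os.waitpid(pid, 0)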

There's more information on the home page.

Friday, 18 September 2009

Logging with stack traces

I find that postmortem debugging is useful (e.g. examining process state in gdb or pdb after a crash or traceback), but trying to debug using breakpoints is an exercise in tedium. I prefer debugging using logging. However, logs obtained just by adding printf statements tend to lack context information, and if you add lots of printfs the logs require more effort to read.

You can get context information in Python by doing traceback.print_stack() after every message, but the result tends to be unreadable. Below is a tool which prints stack context in a more readable tree format. I found it very useful a couple of months ago when investigating a bug in which a recursive invocation of a callback function caused failures later on. Had I not seen the stack context, I might not have realised that the callback was being called recursively.

Here is some example output. It comes from running the Tahoe client, while monkey-patching some socket functions to show which sockets the process is binding and connecting to.

0:/usr/bin/twistd:21:<module>
 1:/usr/lib/python2.5/site-packages/twisted/scripts/twistd.py:27:run
  2:/usr/lib/python2.5/site-packages/twisted/application/app.py:379:run
   3:/usr/lib/python2.5/site-packages/twisted/scripts/twistd.py:23:runApp
    4:/usr/lib/python2.5/site-packages/twisted/application/app.py:158:run
     5:/usr/lib/python2.5/site-packages/twisted/scripts/_twistd_unix.py:213:postApplication
      6:/usr/lib/python2.5/site-packages/twisted/scripts/_twistd_unix.py:174:startApplication
       7:/usr/lib/python2.5/site-packages/twisted/application/service.py:228:privilegedStartService
        8:/usr/lib/python2.5/site-packages/twisted/application/service.py:228:privilegedStartService
         9:/usr/lib/python2.5/site-packages/twisted/application/service.py:228:privilegedStartService
          10:/usr/lib/python2.5/site-packages/twisted/application/internet.py:68:privilegedStartService
           11:/usr/lib/python2.5/site-packages/twisted/application/internet.py:86:_getPort
            12:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:467:listenTCP
             13:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:730:startListening
              14:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:718:createInternetSocket
               15:/usr/lib/python2.5/site-packages/twisted/internet/base.py:724:createInternetSocket
                16:tahoe-client.tac:55:__init__
                 create socket <Socket object at 0x8eb580c> (2, 1)
              14:tahoe-client.tac:64:bind
               bind <Socket object at 0x8eb580c> (('', 56530),)
foolscap.pb.Listener starting on 56530
         9:/usr/lib/python2.5/site-packages/twisted/application/service.py:228:privilegedStartService
          10:/usr/lib/python2.5/site-packages/twisted/application/internet.py:68:privilegedStartService
           11:/usr/lib/python2.5/site-packages/twisted/application/internet.py:86:_getPort
            12:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:467:listenTCP
             13:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:730:startListening
              14:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:718:createInternetSocket
               15:/usr/lib/python2.5/site-packages/twisted/internet/base.py:724:createInternetSocket
                16:tahoe-client.tac:55:__init__
                 create socket <Socket object at 0x8eb5844> (2, 1)
              14:tahoe-client.tac:64:bind
               bind <Socket object at 0x8eb5844> (('0.0.0.0', 3456),)
nevow.appserver.NevowSite starting on 3456
Starting factory <nevow.appserver.NevowSite instance at 0x8eb07ac>
My pid: 27115
      6:/usr/lib/python2.5/site-packages/twisted/application/app.py:113:runReactorWithLogging
       7:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:220:run
        8:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:228:mainLoop
         9:/usr/lib/python2.5/site-packages/twisted/internet/base.py:561:runUntilCurrent
          10:/usr/lib/python2.5/site-packages/foolscap/eventual.py:26:_turn
           11:/usr/lib/python2.5/site-packages/allmydata/node.py:259:_startService
            12:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:191:addCallback
             13:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:182:addCallbacks
              14:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:317:_runCallbacks
               15:/usr/lib/python2.5/site-packages/allmydata/node.py:259:<lambda>
                16:/usr/lib/python2.5/site-packages/allmydata/util/iputil.py:93:get_local_addresses_async
                 17:/usr/lib/python2.5/site-packages/allmydata/util/iputil.py:137:get_local_ip_for
                  18:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:388:listenUDP
                   19:/usr/lib/python2.5/site-packages/twisted/internet/udp.py:84:startListening
                    20:/usr/lib/python2.5/site-packages/twisted/internet/udp.py:89:_bindSocket
                     21:/usr/lib/python2.5/site-packages/twisted/internet/base.py:724:createInternetSocket
                      22:tahoe-client.tac:55:__init__
                       create socket <Socket object at 0x8eb58ec> (2, 2)
                     21:tahoe-client.tac:64:bind
                      bind <Socket object at 0x8eb58ec> (('', 0),)
twisted.internet.protocol.DatagramProtocol starting on 53017
Starting protocol <twisted.internet.protocol.DatagramProtocol instance at 0x8eb0dcc>
                  18:/usr/lib/python2.5/site-packages/twisted/internet/udp.py:181:connect
                   19:tahoe-client.tac:58:connect
                    connect <Socket object at 0x8eb58ec> (('198.41.0.4', 7),)
(Port 53017 Closed)
Stopping protocol <twisted.internet.protocol.DatagramProtocol instance at 0x8eb0dcc>
         9:/usr/lib/python2.5/site-packages/twisted/internet/base.py:561:runUntilCurrent
          10:/usr/lib/python2.5/site-packages/foolscap/eventual.py:26:_turn
           11:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:239:callback
            12:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:304:_startRunCallbacks
             13:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:317:_runCallbacks
              14:/usr/lib/python2.5/site-packages/allmydata/client.py:136:_start_introducer_client
               15:/usr/lib/python2.5/site-packages/twisted/application/service.py:148:setServiceParent
                16:/usr/lib/python2.5/site-packages/twisted/application/service.py:260:addService
                 17:/usr/lib/python2.5/site-packages/allmydata/introducer/client.py:58:startService
                  18:/usr/lib/python2.5/site-packages/foolscap/pb.py:901:connectTo
                   19:/usr/lib/python2.5/site-packages/foolscap/reconnector.py:60:startConnecting
                    20:/usr/lib/python2.5/site-packages/foolscap/reconnector.py:87:_connect
                     21:/usr/lib/python2.5/site-packages/foolscap/pb.py:830:getReference
                      22:/usr/lib/python2.5/site-packages/twisted/internet/defer.py:107:maybeDeferred
                       23:/usr/lib/python2.5/site-packages/foolscap/pb.py:856:_getReference
                        24:/usr/lib/python2.5/site-packages/foolscap/pb.py:947:getBrokerForTubRef
                         25:/usr/lib/python2.5/site-packages/foolscap/negotiate.py:1332:connect
                          26:/usr/lib/python2.5/site-packages/foolscap/negotiate.py:1359:connectToAll
                           27:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:474:connectTCP
                            28:/usr/lib/python2.5/site-packages/twisted/internet/base.py:665:connect
                             29:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:885:_makeTransport
                              30:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:593:__init__
                               31:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:497:createInternetSocket
                                32:tahoe-client.tac:55:__init__
                                 create socket <Socket object at 0x8eb5a74> (2, 1)
                           27:/usr/lib/python2.5/site-packages/twisted/internet/posixbase.py:474:connectTCP
                            28:/usr/lib/python2.5/site-packages/twisted/internet/base.py:665:connect
                             29:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:885:_makeTransport
                              30:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:593:__init__
                               31:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:497:createInternetSocket
                                32:tahoe-client.tac:55:__init__
                                 create socket <Socket object at 0x8c59e64> (2, 1)
         9:/usr/lib/python2.5/site-packages/twisted/internet/base.py:561:runUntilCurrent
          10:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:506:resolveAddress
           11:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:513:_setRealAddress
            12:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:539:doConnect
             13:tahoe-client.tac:61:connect_ex
              connect_ex <Socket object at 0x8eb5a74> (('127.0.0.1', 45651),)
          10:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:506:resolveAddress
           11:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:513:_setRealAddress
            12:/usr/lib/python2.5/site-packages/twisted/internet/tcp.py:539:doConnect
             13:tahoe-client.tac:61:connect_ex
              connect_ex <Socket object at 0x8c59e64> (('192.168.20.244', 45651),)
Here is the code for the logging tool:
import sys

laststack = []

def tlog(*msgs):
    """Outputs a log message with stack trace context."""
    global laststack
    try:
        raise Exception()
    except Exception:
        frame = sys.exc_info()[2].tb_frame.f_back
    stack = extract(frame)
    for i, fr in enumerate(stack):
        if not (i < len(laststack) and laststack[i] is fr):
            lineno = fr.f_lineno
            co = fr.f_code
            filename = co.co_filename
            name = co.co_name
            print " " * i + ("%i:%s:%s:%s" % (i, filename, lineno, name))
    print " " * len(stack) + " ".join(map(str, msgs))
    laststack = stack

def extract(frame):
    got = []
    while frame is not None:
        got.append(frame)
        frame = frame.f_back
    got.reverse()
    return got

This makes use of the fact that Python's stack frames are heap-allocated, so they have identity, and it is possible to hold on to a stack frame object after the frame has exited. This means it is possible to distinguish between two different invocations of the same function, even when the tracebacks are textually identical. This would not be possible in C -- gdb would have to approximate.

For completeness, here is the monkey-patching code that I used:

import socket

class Socket(socket.socket):
    def __init__(self, *args, **kwargs):
        tlog("create socket", self, args)
        super(Socket, self).__init__(*args, **kwargs)
    def connect(self, *args, **kwargs):
        tlog("connect", self, args)
        super(Socket, self).connect(*args, **kwargs)
    def connect_ex(self, *args, **kwargs):
        tlog("connect_ex", self, args)
        return super(Socket, self).connect_ex(*args, **kwargs)
    def bind(self, *args, **kwargs):
        tlog("bind", self, args)
        super(Socket, self).bind(*args, **kwargs)

socket.socket = Socket

Monday, 31 August 2009

How to do model checking of Python code

I read about eXplode in early 2007. The paper describes a neat technique, inspired by model checking, for exhaustively exploring all execution paths of a program. The difference from model checking is that it doesn't require a model of the program to be written in a special language; it can operate on a program written in a normal language. The authors of the paper use the technique to check the correctness of Linux filesystems, and version control systems layered on top of them, in the face of crashes.

The basic idea is to have a special function, choose(), that performs a non-deterministic choice, e.g.

   choose([1, 2, 3])

Conceptually, this forks execution into three branches, returning 1, 2 or 3 in each branch.

The program can call choose() multiple times, so you end up with a choice tree.

The job of the eXplode check runner is to explore that choice tree, and to try to find sequences of choices that make the program fail.

eXplode does not try to snapshot the state of the program when choose() is called. (That's what other model checkers have tried to do, and it's what would happen if choose() were implemented using the Unix fork() call.)

Instead, eXplode restores the state of the program by running it from the start and replaying a previously-recorded sequence of choices by returning them one-by-one from choose(). A node in the choice tree is represented by the sequence of choices required to get to it, and so to explore that node's subtree the check runner will repeatedly run the program from the start, replaying the choices for that subtree. The check runner does a breadth-first traversal of the choice tree, so it has a queue of nodes to visit, and each visit will add zero or more subnodes to the queue. (See the code below for a concrete implementation.)

This replay technique means that the program must be deterministic, otherwise the choice tree won't get explored properly. All non-determinism must occur through choose().

There seem to be two situations in which you might call choose():

  1. You can call choose() from your test code to generate input.

    In eXplode, the test program writes files and tries to read them back, and it uses choose() to pick what kind of writes it does.

    So far I have used choose() in tests to generate input in two different ways:

    In one case, I started with valid input and mangled it in different ways, to check the error handling of the software-under-test. The test code used choose() to pick a file to delete or line to remove. The idea was that the software-under-test should give a nicely-formatted error message, specifying the faulty file, rather than a Python traceback. There was only a bounded number of faults to introduce so the choice tree was finite.

    In another case, I built up XML trees to check that the software under test could process them. In this case the choice tree is infinite. I decided to prune it arbitrarily at a certain depth so that the test could run as part of a fairly quickly-running test suite.

    In both these cases, using the eXplode technique wasn't strictly necessary. It would have been possible to write the tests without choose(), perhaps by copying the inputs before modifying them. But I think choose() makes the test more concise, and not all objects have interfaces for making copies.

  2. You can also call choose() from the software-under-test to introduce deliberate faults, or from a fake object that the software-under-test is instantiated with.

    In eXplode, the filesystem is tested with a fake block device which tries to simulate all the possible failure modes of a real hard disc (note that Flash drives have different failure modes!). The fake device uses choose() to decide when to simulate a crash and restart, and which blocks should be successfully written to the virtual device when the crash happens.

    You can also wrap malloc() so that it non-deterministically returns NULL in order to test that software handles out-of-memory conditions gracefully. Testing error paths is difficult to do and this looks like a good way of doing it. (A sketch of this kind of fault injection appears after the example implementation below.)

    In these cases, the calls to choose() can be buried deeply in the call stack, and the eXplode check runner is performing an inversion of control that would be hard to do in a conventional test case.

Here's an example of how to implement the eXplode technique in Python:


import unittest


# If anyone catches and handles this, it will break the checking model.
class ModelCheckEscape(Exception):

    pass


class Chooser(object):

    def __init__(self, chosen, queue):
        self._so_far = chosen
        self._index = 0
        self._queue = queue

    def choose(self, choices):
        if self._index < len(self._so_far):
            choice = self._so_far[self._index]
            if choice not in choices:
                raise Exception("Program is not deterministic")
            self._index += 1
            return choice
        else:
            for choice in choices:
                self._queue.append(self._so_far + [choice])
            raise ModelCheckEscape()


def check(func):
    queue = [[]]
    while len(queue) > 0:
        chosen = queue.pop(0)
        try:
            func(Chooser(chosen, queue))
        except ModelCheckEscape:
            pass
        # Can catch other exceptions here and report the failure
        # - along with the choices that caused it - and then carry on.


class ModelCheckTest(unittest.TestCase):

    def test(self):
        got = []

        def func(chooser):
            # Example of how to generate a Cartesian product using choose():
            v1 = chooser.choose(["a", "b", "c"])
            v2 = chooser.choose(["x", "y", "z"])
            # We don't normally expect the function being tested to
            # capture state - in this case the "got" list - but we
            # do it here for testing purposes.
            got.append(v1 + v2)

        check(func)
        self.assertEquals(sorted(got),
                          sorted(["ax", "ay", "az",
                                  "bx", "by", "bz",
                                  "cx", "cy", "cz"]))


if __name__ == "__main__":
    unittest.main()
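
As a follow-on to the fault-injection use described earlier, here is a small sketch (mine, not from eXplode, and reusing the check() and Chooser code above): a fake store drops writes when choose() tells it to, and the check runner explores both the successful and the failed branch.

class UnreliableStore(object):

    def __init__(self, chooser):
        self._chooser = chooser
        self.data = {}

    def put(self, key, value):
        # Non-deterministically lose the write, a bit like a crash
        # between issuing a block write and it reaching the disc.
        if self._chooser.choose(["ok", "lost"]) == "ok":
            self.data[key] = value


outcomes = set()

def run(chooser):
    store = UnreliableStore(chooser)
    store.put("a", 1)
    outcomes.add(frozenset(store.data.items()))

check(run)
# Both branches get explored: one where the write survived, one where
# it was lost.
assert outcomes == set([frozenset([("a", 1)]), frozenset()])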

Sunday, 28 June 2009

Encapsulation in Python: two approaches

Time for another post on how object-capability security might be done in Python.

Suppose function closures in Python were encapsulated (i.e. the attributes func_closure, func_globals etc. were removed) and were the only route for getting encapsulation for non-builtin objects, as is the case in Python's old restricted mode (a.k.a. rexec). There would be two ways of defining new encapsulated objects. Let's call them the Cajita approach and the Bastion approach.

The Cajita-style approach (named after the Cajita subset of Javascript) is to define an object using a record of function closures, one closure per method. Here's an example, from a 2003 post by Ka-Ping Yee:

def Counter():
    self = Namespace()
    self.i = 0

    def next():
        self.i += 1
        return self.i

    return ImmutableNamespace(next)
(There are some more examples - a read-only FileReader object, and Mint and Purse objects based on the example from E - in Tav's post on object-capabilities in Python.)

This is instead of the far more idiomatic, but unencapsulated class definition:

class Counter(object):

    def __init__(self):
        self._i = 0

    def next(self):
        self._i += 1
        return self._i
(Ignoring that the more idiomatic way to do this is to use a generator or itertools.count().)

I am calling this the Cajita approach because it is analogous to how objects are defined in the Cajita subset of Javascript (part of the Caja project), where this code would be written as:

function Counter() {
    var i = 0;
    return Obj.freeze({
        next: function () {
            i += 1;
            return i;
        },
    });
}
The Python version of the Cajita style is a lot more awkward because Python is syntactically stricter than Javascript:
  • Expressions cannot contain statements, so function definitions cannot be embedded in an object creation expression. As a result, method names have to be specified twice: once in the method definitions and again when passed to ImmutableNamespace. (It would be three times if ImmutableNamespace did not use __name__.)
  • A function can only assign to a variable in an outer scope by using the nonlocal declaration (recently introduced), and that is awkward. The example works around this by doing "self = Namespace()" and assigning to the attribute self.i.
The Cajita style also has a memory usage cost. The size of each object will be O(n) in the number of methods, because each method is created as a separate function closure with its own pointer to the object's state. This is not trivial to optimise away because doing so would change the object's garbage collection behaviour.

For these reasons it would be better not to use the Cajita style in Python.

The alternative is the approach taken by the Bastion module, which the Python standard library used to provide for use with Python's restricted mode. (Restricted mode and Bastion.py are actually still present in Python 2.6, but restricted mode has holes and is unsupported, and Bastion.py is disabled.)

Bastion provided wrapper objects that exposed only the public attributes of the object being wrapped (where non-public attribute names are those that begin with an underscore). This approach means we can define objects using class definitions as usual, and wrap the resulting objects. To make a class encapsulated and uninheritable, we just have to add a decorator:

@sealed
class Counter(object):

    def __init__(self):
        self._i = 0

    def next(self):
        self._i += 1
        return self._i
where "sealed" can be defined as follows:
# Minimal version of Bastion.BastionClass.
# Converts a function closure to an object.
class BastionClass:

    def __init__(self, get):
        self._get = get

    def __getattr__(self, attr):
        return self._get(attr)

# Minimal version of Bastion.Bastion.
def Bastion(object):
    # Create a function closure wrapping the object.
    def get(attr):
        if type(attr) is str and not attr.startswith("_"):
            return getattr(object, attr)
        raise AttributeError(attr)
    return BastionClass(get)

def sealed(klass):
    def constructor(*args, **kwargs):
        return Bastion(klass(*args, **kwargs))
    return constructor
>>> c = Counter()
>>> c.next()
1
>>> c.next()
2
>>> c._i
AttributeError: _i

Mutability problem

One problem with the Bastion approach is that Bastion's wrapper objects are mutable. Although you can't use the Bastion object to change the wrapped object's attributes, you can change the Bastion object's attributes. This means that these wrapper objects cannot be safely shared between mutually distrusting components. Multiple holders of the object could use it as a communications channel or violate each other's expectations.

There isn't an obvious way to fix this in pure Python. Overriding __setattr__ isn't enough. I expect the simplest way to deal with this is to implement Bastion in a C extension module instead of in Python. The same goes for ImmutableNamespace (referred to in the first example) - the Cajita-style approach faces the same issue.

Wrapping hazard

There is a potential hazard in defining encapsulated objects via wrappers that is not present in CapPython. In methods, "self" will be bound to the private view of the object. There is a risk of accidentally writing code that passes the unwrapped self object to untrusted code.

An example:

class LoggerMixin(object):

    def sub_log(self, prefix):
        return PrefixLog(self, prefix)

@sealed
class PrefixLog(LoggerMixin):

    def __init__(self, log, prefix):
        self._log = log
        self._prefix = prefix

    def log(self, message):
        self._log.log("%s: %s" % (self._prefix, message))
(This isn't a great example because of the circular relationship between the two classes.)

This hazard is something we could lint for. It would be easy to warn about cases where self is used outside of an attribute access.

The method could be changed to:

    def sub_log(self, prefix):
        return PrefixLog(seal_object(self), prefix)
This creates a new wrapper object each time which is not always desirable. To avoid this we could store the wrapper object in an attribute of the wrapped object. Note that that would create a reference cycle, and while CPython's cycle collector is usually fine for collecting cycles, creating a cycle for every object might not be a good idea. An alternative would be to memoize seal_object() using a weak dictionary.
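
For what it's worth, here is one possible shape of that memoization (my sketch, assuming the Bastion() helper defined earlier; the policy questions above still apply):

import weakref

# Maps id(obj) -> wrapper.  The wrapper holds a strong reference to obj,
# and when the wrapper itself becomes unreferenced the entry disappears,
# so a recycled id never returns a stale wrapper.
_wrappers = weakref.WeakValueDictionary()

def seal_object(obj):
    wrapper = _wrappers.get(id(obj))
    if wrapper is None:
        wrapper = Bastion(obj)
        _wrappers[id(obj)] = wrapper
    return wrapper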

Sunday, 14 June 2009

Python standard library in Native Client

The Python standard library now works under Native Client in the web browser, including the Sqlite extension module.

By that I mean that it is possible to import modules from the standard library, but a lot of system calls won't be available. Sqlite works for in-memory use.

Here's a screenshot of the browser-based Python REPL:

The changes needed to make this work were quite minor:

  • Implementing stat() in glibc so that it does an RPC to the Javascript running on the web page. Python uses stat() when searching for modules in sys.path.
  • Changing fstat() in NaCl so that it doesn't return a fixed inode number. Python uses st_ino to determine whether an extension module is the same as a previously-loaded module, which doesn't work if st_ino is the same for different files.
  • Some tweaks to nacl-glibc-gcc, a wrapper script for nacl-gcc.
  • Ensuring that NaCl-glibc's libpthread.so works.
I didn't have to modify Python or Sqlite at all. I didn't even have to work out what arguments to pass to Sqlite's configure script - the Debian package for Sqlite builds without modification. It's just a matter of setting PATH to override gcc to run nacl-glibc-gcc.

Thursday, 28 May 2009

A really simple tracing debugger: part 2

In my last post I showed how to get a readable execution trace of a process using a simple ptrace()-based tracer (itrace) and binutils' addr2line. By doing a bit more work we can extract the function call tree from the trace. In order to distinguish callers and callees, we can use the stack pointer (%esp), which itrace outputs.

Here's the call tree for a "hello world" program using glibc 2.9 on i386, statically linked:

$ ./itrace ./hellow-static | python treeify.py ./hellow-static
_start ??:0 8048130
  __libc_start_main glibc/csu/libc-start.c:93 8048220
    _dl_aux_init glibc/elf/dl-support.c:165 80518e0
    _dl_discover_osversion glibc/elf/../sysdeps/unix/sysv/linux/dl-sysdep.c:67 80521e0
      __uname ??:0 806e160
    __pthread_initialize_minimal glibc/csu/libc-tls.c:247 8048750
      __libc_setup_tls glibc/csu/libc-tls.c:111 8048550
        __sbrk glibc/misc/sbrk.c:34 80505d0
          __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
          __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
        memcpy glibc/string/memcpy.c:33 804fc10
      init_slotinfo glibc/csu/libc-tls.c:83 80486cb
        init_static_tls glibc/csu/libc-tls.c:100 8048708
  _dl_setup_stack_chk_guard glibc/csu/../sysdeps/unix/sysv/linux/dl-osinfo.h:76 8048283
    __libc_init_first glibc/csu/../sysdeps/unix/sysv/linux/init-first.c:44 80522f0
      __setfpucw glibc/math/../sysdeps/i386/setfpucw.c:29 8056270
      __libc_init_secure glibc/elf/enbl-secure.c:34 8052190
      _dl_non_dynamic_init glibc/elf/dl-support.c:235 8051b50
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        _dl_init_paths glibc/elf/dl-load.c:630 8072ae0
          _dl_important_hwcaps glibc/elf/dl-support.c:306 8051b20
          __libc_malloc glibc/malloc/malloc.c:3540 804eb60
            malloc_hook_ini glibc/malloc/hooks.c:35 804f360
              ptmalloc_init glibc/malloc/arena.c:426 804b370
                ptmalloc_init_minimal glibc/malloc/arena.c:374 804b39f
                  __getpagesize glibc/misc/../sysdeps/unix/sysv/linux/getpagesize.c:29 8050650
                  __linkin_atfork glibc/nptl/../nptl/sysdeps/unix/sysv/linux/register-atfork.c:115 80513a0
                next_env_entry glibc/malloc/arena.c:344 804b45d
            __libc_malloc glibc/malloc/malloc.c:3540 804eb60
              _int_malloc glibc/malloc/malloc.c:4130 804cae0
                malloc_consolidate glibc/malloc/malloc.c:4835 804afd0
                malloc_init_state glibc/malloc/malloc.c:2412 804a8e0
              sYSMALLOc glibc/malloc/malloc.c:2929 804d200
                __default_morecore glibc/malloc/morecore.c:48 804f9c0
                  __sbrk glibc/misc/sbrk.c:34 80505d0
                    __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
                __default_morecore glibc/malloc/morecore.c:48 804f9c0
                  __sbrk glibc/misc/sbrk.c:34 80505d0
                    __brk glibc/misc/../sysdeps/unix/sysv/linux/i386/brk.c:36 806f9c0
          __libc_malloc glibc/malloc/malloc.c:3540 804eb60
            _int_malloc glibc/malloc/malloc.c:4130 804cae0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
      dl_platform_init glibc/elf/../sysdeps/i386/dl-machine.h:273 8051c38
        getenv glibc/stdlib/getenv.c:36 8056ea0
          strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
    __init_misc glibc/misc/init-misc.c:31 8051120
      rindex glibc/string/../sysdeps/i386/strrchr.S:37 8067a30
    __cxa_atexit glibc/stdlib/cxa_atexit.c:34 8048ab0
      __new_exitfn glibc/stdlib/cxa_atexit.c:63 8048960
    __libc_csu_init glibc/csu/elf-init.c:65 80487b0
      _init /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crti.S:15 80480f4
        _init ??:0 8048116
          frame_dummy crtstuff.c:0 80481b0
            __register_frame_info ??:0 80a2b50
              __register_frame_info_bases ??:0 80a2ab0
                __i686.get_pc_thunk.bx ??:0 80a29f9
          __do_global_ctors_aux crtstuff.c:0 80a4b10
        _init /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crtn.S:10 8048120
    _setjmp glibc/setjmp/../sysdeps/i386/bsd-_setjmp.S:36 8048830
    main ??:0 80481f0
      _IO_puts glibc/libio/ioputs.c:34 8048b10
        strlen glibc/string/../sysdeps/i386/i486/strlen.S:34 804f9f0
        _IO_new_file_xsputn glibc/libio/fileops.c:1288 8065870
          _IO_new_file_overflow glibc/libio/fileops.c:829 80661a0
            _IO_doallocbuf glibc/libio/genops.c:419 8049780
              _IO_file_doallocate glibc/libio/filedoalloc.c:88 808bd50
                _IO_file_stat glibc/libio/fileops.c:1225 8065b90
                  ___fxstat64 glibc/io/../sysdeps/unix/sysv/linux/fxstat64.c:46 804fe20
                mmap glibc/misc/../sysdeps/unix/sysv/linux/i386/mmap.S:81 8051080
                _IO_setb glibc/libio/genops.c:404 8049670
            _IO_new_do_write glibc/libio/fileops.c:493 8065a50
          _IO_default_xsputn glibc/libio/genops.c:452 8049850
      _IO_acquire_lock_fct glibc/libio/libioP.h:968 8048b8d
    exit glibc/stdlib/exit.c:34 8048870
      __libc_csu_fini glibc/csu/elf-init.c:91 8048770
      _fini /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crti.S:41 80a5574
        _fini ??:0 80a5587
          __do_global_dtors_aux crtstuff.c:0 8048160
            fini glibc/dlfcn/dlerror.c:207 80975e0
              check_free glibc/dlfcn/dlerror.c:188 8097560
            __deregister_frame_info ??:0 80a3c30
            __deregister_frame_info_bases ??:0 80a3b20
              __i686.get_pc_thunk.bx ??:0 80a29f9
        _fini /build/buildd/glibc-2.7/build-tree/amd64-i386/csu/crtn.S:21 80a558c
      _IO_cleanup glibc/libio/genops.c:1007 8049fd0
        _IO_flush_all_lockp glibc/libio/genops.c:823 8049db0
          _IO_new_file_overflow glibc/libio/fileops.c:829 80661a0
            _IO_new_do_write glibc/libio/fileops.c:493 8065a50
              new_do_write glibc/libio/fileops.c:505 8065760
                _IO_new_file_write glibc/libio/fileops.c:1261 8065a80
                  ?? ??:0 8
                __libc_write ??:0 804ffa0
                  __write_nocancel ??:0 804ffaa
      _IO_unbuffer_write glibc/libio/genops.c:951 8049fe8
        _IO_new_file_setbuf glibc/libio/fileops.c:445 8066380
          _IO_default_setbuf glibc/libio/genops.c:562 80496d0
            _IO_new_file_sync glibc/libio/fileops.c:891 80660d0
            _IO_setb glibc/libio/genops.c:404 8049670
      _exit glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:25 804fdc0
Believe it or not, this trace is quite illuminating. Working out that these calls will happen by reading the glibc source could take a long time!

The Python script below (treeify.py) does two things:

  • Firstly, it filters the trace to include only function entry points. We can just about infer function entry points from addr2line's output. We assume that when a function name (plus filename and line number) appears for the first time in the trace, the current address is the function's entry point. This doesn't work fully when inline functions are instantiated multiple times though. We could use nm to find symbol addresses, but addr2line gives us source filenames.
  • Secondly, it works out the nesting of the call tree by looking at the stack pointer.
import subprocess
import sys

def read():
    for line in sys.stdin:
        try:
            regs = [int(x, 16) for x in line.split(" ")]
            yield {"eip": regs[0], "esp": regs[1]}
        # Ignore lines interspersed with other output!
        except (ValueError, IndexError):
            pass

def addr2line(iterable):
    proc = subprocess.Popen(["addr2line", "-e", sys.argv[1], "-f"],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    for regs in iterable:
        proc.stdin.write("%x\n" % regs["eip"])
        a = proc.stdout.readline().rstrip("\n")
        b = proc.stdout.readline().rstrip("\n")
        regs["func"] = "%s %s" % (a, b)
        yield regs

def entry_points(iterable):
    funcs = {}
    # We treat the first address we see for the function as its entry
    # point, and only report those entries from this point on.
    for regs in iterable:
        func = regs["func"].split(":")[0]
        if funcs.setdefault(func, regs["eip"]) == regs["eip"]:
            yield regs

def add_nesting(iterable):
    stack = [2 ** 64]
    for regs in iterable:
        stack_pos = regs["esp"]
        if stack_pos < stack[-1]:
            stack.append(stack_pos)
        while stack_pos > stack[-1]:
            stack.pop()
        regs["indent"] = "  " * len(stack)
        yield regs

for x in add_nesting(entry_points(addr2line(read()))):
    print x["indent"], x["func"], "%x" % x["eip"]

Saturday, 16 May 2009

A really simple tracing debugger

gdb can be really useful for debugging when it prints the information you want, but sometimes it will fail to give a useful backtrace, and it can be hard to find out what happened just before your process crashed.

I faced that problem when porting glibc to Native Client. gdb (or strace -i) could tell me the value of the instruction pointer when the process crashed, but it couldn't produce a backtrace because it didn't know how to read memory with NaCl's x86 segmentation setup. (I should point out that gdb has since been ported.) Often I could look up the address with addr2line or objdump -d to find the cause of the problem. However, that didn't help when the process died by jumping to address 0, or when it jumped to _start when running atexit handlers.

I found another way to do debugging, using a trace of the program's execution.

So, here's how to create a debugger with a single shell pipeline (plus a small C program) that works on statically-linked executables.

The first part is a tool called itrace, which starts a subprocess and single-steps its execution using Linux's ptrace() system call and prints the process's eip and esp registers (instruction pointer and stack pointer) at every step. (The code is below.)

Here's what we get if we run itrace on a simple statically-linked executable:

$ ./itrace ./hellow-static | less
8048130 ffce4a50
8048132 ffce4a50
8048133 ffce4a54
8048135 ffce4a54
8048138 ffce4a50
...
To make this intelligible, all we have to do is pipe the output of itrace through addr2line. This tool is part of binutils and translates addresses to source code filenames and line numbers using the debugging information in the executable. Also, pipe through uniq to remove duplicate lines:
$ ./itrace ./hellow-static | addr2line -e ./hellow-static | uniq | less
??:0
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:93
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:106
/work/glibc/csu/libc-start.c:103
/work/glibc/csu/libc-start.c:106
...
/work/glibc/stdlib/exit.c:88
/work/glibc/stdlib/exit.c:90
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:25
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:29
/work/glibc/posix/../sysdeps/unix/sysv/linux/i386/_exit.S:30
addr2line prints "??:0" for addresses that it can't find in debugging information. If your executable hasn't been built with debugging information - the static libc.a in Debian/Ubuntu isn't built with debugging enabled - you can pass the -f option to addr2line to print function names as well, and then filter out the useless "??:0" lines:
$ cat >hellow.c <<END
#include <stdio.h>
int main() { printf("Hello, world!\n"); return 0; }
END
$ gcc -static hellow.c -o hellow-static
$ ./itrace ./hellow-static | addr2line -e ./hellow-static -f | grep -v "??:0" | uniq | less
_start
__libc_start_main
_dl_aux_init
__libc_start_main
_dl_discover_osversion
__uname
_dl_discover_osversion
__libc_start_main
__pthread_initialize_minimal
__libc_setup_tls
...
_IO_default_setbuf
_IO_new_file_setbuf
_IO_unbuffer_write
_IO_cleanup
exit
_exit
This tells us what happened before the program exited. In this case, it was executing exit() and flushing stdio buffers.

Tracing debugging can often be more helpful than postmortem debugging. Debugging with printf is still popular, after all! However, single-stepping using ptrace() has quite a high overhead so this is not really suitable for longer-running processes. Potentially Valgrind could give us an execution trace more efficiently using its dynamic translation approach, but that would still be quite slow. Modern x86 CPUs have a "Branch Trace Store" feature for recording the addresses of basic blocks as they are executed, which has the potential to be a faster method, and there is a proposal to make this available in Linux through the ptrace() interface.

itrace.c

This assumes i386, so if you're on an x86-64 system you can build it with:
gcc -m32 itrace.c -o itrace
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#include <linux/user.h>
#include <sys/ptrace.h>

int main(int argc, char **argv)
{
  int pid = fork();
  if(pid == 0) {
    if(ptrace(PTRACE_TRACEME) < 0) {
      perror("ptrace");
      _exit(1);
    }
    execvp(argv[1], argv + 1);
    perror("exec");
    _exit(1);
  }
  while(1) {
    int status;
    struct user_regs_struct regs;
    if(waitpid(pid, &status, 0) < 0)
      perror("waitpid");
    if(!WIFSTOPPED(status))
      break;
    if(ptrace(PTRACE_GETREGS, pid, 0, &regs) < 0)
      perror("ptrace/GETREGS");
    printf("%lx %lx\n", regs.eip, regs.esp);
    if(ptrace(PTRACE_SINGLESTEP, pid, 0, 0) < 0)
      perror("ptrace/SINGLESTEP");
  }
  return 0;
}

Monday, 4 May 2009

Progress on Native Client

Back in January I wrote that I was porting glibc to Native Client. I have made some good progress since then.

The port is at the stage where it can run simple statically-linked and dynamically-linked executables, from the command line and from the web browser.

In particular, Python works. I have put together a simple interactive top-level that demonstrates running Python from the web browser:

Upstream NaCl doesn't support any filename-based calls such as open(), but we do. In this setup, open() does not of course access the real filesystem on the host system. open() in NaCl-glibc sends a message to the Javascript code on the web page. The Javascript code can fetch the requested file, create a file descriptor for the file using the plugin's API, and send the file descriptor to the NaCl subprocess in reply.

This has not involved modifying Python at all, although I have added an extension module to wrap a couple of NaCl-specific system calls (imc_sendmsg() and imc_recvmsg()). The Python build runs to completion, including the parts where it runs the executable it builds. A simple instruction-rewriting trick means that dynamically-linked NaCl executables can run outside of NaCl, directly under Linux, linked against the regular Linux glibc. This means we can avoid some of the problems associated with cross-compilation when building for NaCl.

This work has involved extending Native Client:

  • Adding support for dynamic loading of code. Initially I have focused on just making it work. Now I'll focus on ensuring this is secure.
  • Writing a custom browser plugin for NaCl which allows Javascript code to handle asynchronous callbacks from NaCl subprocesses.
  • Making various changes to the NaCl versions of gcc and binutils to support dynamically linked code.
Hopefully these changes can get merged upstream eventually. Some of the toolchain changes have already gone in.

See the web page for instructions on how to get the code and try it out.

Saturday, 18 April 2009

Python variable binding semantics, part 2

Time for me to follow up my previous blog post and explain the code snippets.

Snippet 1

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda: x)
for f in funcs:
    print f()
Although you might expect this Python code snippet to print "1 2 3", it actually prints "3 3 3". The reason for this is that the same mutable variable binding for x is used across all iterations of the loop. The for loop does not create new variable bindings.

The function closures that are created inside the loop do not capture the value of x; they capture the cell object that x is bound to under the hood (or the globals dictionary if this code snippet is run at the top level), and each iteration mutates the same cell object.
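
You can see the shared cell directly in CPython 2.x (a small demonstration of mine, with the loop placed inside a function so that x is a closed-over cell rather than a global):

def make_funcs():
    funcs = []
    for x in (1, 2, 3):
        funcs.append(lambda: x)
    return funcs

funcs = make_funcs()
print funcs[0].func_closure[0] is funcs[2].func_closure[0]  # prints True
print funcs[0].func_closure[0].cell_contents  # prints 3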

Interestingly, Python does not restrict the target of the for loop to be a variable or tuple of variables. It can be any lvalue expression that can appear on the left hand side of an assignment, including indexing expressions. For example:

x = [10, 20]
for x[0] in (1,2,3):
    pass
print x # Prints [3, 20]

Snippet 2

This problem with Python's variable binding semantics is not specific to lambdas; it also occurs if you use def.
funcs = []
for x in (1, 2, 3):
    def myfunc():
        return x
    funcs.append(myfunc)
for f in funcs:
    print f()
This also prints "3 3 3".

Snippet 3

The remaining snippets are examples of ways to work around this problem. They print the desired "1 2 3".

One way is to use default arguments:

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda x=x: x)
for f in funcs:
    print f()
This is actually the trick that was used to get the effect of lexical scoping before lexical scoping was added to Python.

The default argument captures the value of x at the point where the function closure is created, and this value gets rebound to x when the function is called.

Although it is concise, I'm not too keen on this workaround. It is an abuse of default arguments. There is a risk that you accidentally pass too many arguments to the function and thereby override the captured value of x.
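
For example:
funcs = []
for x in (1, 2, 3):
    funcs.append(lambda x=x: x)
print funcs[0]()    # prints 1, as intended
print funcs[0](42)  # prints 42: the caller's argument overrides the captured value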

Snippet 4

funcs = []
for x in (1, 2, 3):
    def g(x):
        funcs.append(lambda: x)
    g(x)
for f in funcs:
    print f()
This is quite an ugly workaround, but it's perhaps the most general one. The loop body is wrapped in a function. The x inside function g and the x outside are statically different variable bindings. A new mutable binding of x is created on every call to g, but this binding is never modified.

Snippets 5 and 6

The remaining two snippets are attempts to make the code less ugly, by giving names to the intermediate functions that are introduced to rebind x and thereby give the desired behaviour.
funcs = []
def append_a_func(x):
    funcs.append(lambda: x)
for x in (1, 2, 3):
    append_a_func(x)
for f in funcs:
    print f()

def make_func(x):
    return lambda: x
funcs = []
for x in (1, 2, 3):
    funcs.append(make_func(x))
for f in funcs:
    print f()
I suppose there is a downside to de-uglifying the code. If the code looks too normal, there's a risk that someone will refactor it to the incorrect version without testing that it works properly when looping over lists containing multiple items.

Can the language be fixed?

The root cause is that Python does away with explicit variable binding declarations. If a variable is assigned within a function -- including being assigned as the target of a for loop -- it becomes local to the function, but variables are never local to smaller statement blocks such as a loop body. Python doesn't have block scoping.

There are a couple of naive ways in which you might try to fix the language so that the code does what you would expect, both of which have problems. We could change the semantics of function closure creation so that closures take a snapshot of variables' values. But this would break mutually recursive function definitions, and cases where functions are referred to before they are defined, such as:
def f(): # This "def" would raise an UnboundLocalError or a NameError
    g()
def g():
    pass
We could change the semantics of for loops so that the target variable is given a variable binding that is local to the loop body. But this wouldn't change the status of variable bindings created by assignment, so this code would still have the unwanted behaviour:
for x in (1, 2, 3):
    y = x
    funcs.append(lambda: y)

Linting

It should be possible to create a lint tool to detect this hazard. It could look for closures that capture variables which are assigned as a loop target or assigned inside a loop body. I wonder how many false positives it would give on a typical codebase.
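
As a rough sketch of what such a check might look like (in Python 2, using the standard ast module; the function name and the heuristic are mine, not an existing tool):
import ast

def find_loop_closure_hazards(source):
    # Report (line, variable) pairs where a closure defined inside a
    # "for" loop refers to the loop variable without rebinding it.
    hazards = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.For) and isinstance(node.target, ast.Name):
            loop_var = node.target.id
            for child in ast.walk(node):
                if isinstance(child, (ast.Lambda, ast.FunctionDef)):
                    params = set(arg.id for arg in child.args.args
                                 if isinstance(arg, ast.Name))
                    names = set(n.id for n in ast.walk(child)
                                if isinstance(n, ast.Name))
                    if loop_var in names and loop_var not in params:
                        hazards.append((child.lineno, loop_var))
    return hazards

print find_loop_closure_hazards(
    "funcs = []\n"
    "for x in (1, 2, 3):\n"
    "    funcs.append(lambda: x)\n")    # prints [(3, 'x')]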

Javascript

Javascript suffers from the same problem:
funcs = [];
for(var x = 0; x < 3; x++) {
    funcs.push(function() { return x; })
}
for(var i = 0; i < funcs.length; i++) {
    print(funcs[i]());
}
This code prints "3 3 3" when run using rhino.

In Javascript's case the affliction is entirely gratuitous: var declarations are implicitly hoisted up to the top of the enclosing function for no apparent reason, and the behaviour is not the result of any trade-off.

The good news, though, is that the problem is recognised and a fix to Javascript is planned. I think the plan is for ECMAScript 3.1 to introduce a let declaration as an alternative to var without the hoisting behaviour.

List comprehensions

Back to Python. The same problem also applies to list comprehensions and generator expressions.
funcs = [(lambda: x) for x in (1, 2, 3)]
for f in funcs:
    print f()

funcs = list((lambda: x) for x in (1, 2, 3))
for f in funcs:
    print f()

These both print "3 3 3".

(Note that I added brackets around the lambdas for clarity but the syntax does not require them.)

This is forgivable for list comprehensions, at least in Python 2.x, because the assignment to x escapes into the surrounding function. (See my earlier post.)

But for generators (and for list comprehensions in Python 3.0), the scope of x is limited to the comprehension. Semantically, it would be easy to limit the scope of x to within a loop iteration, so that each iteration introduces a new variable binding.
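
The workarounds from before carry over here; for example, routing the comprehension through a helper function prints the expected "1 2 3":
def make_func(x):
    return lambda: x

funcs = [make_func(x) for x in (1, 2, 3)]
for f in funcs:
    print f()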

Tuesday, 31 March 2009

Python variable binding semantics

What do the six following chunks of code do, and what would you like them to do? Which do you prefer?
funcs = []
for x in (1, 2, 3):
    funcs.append(lambda: x)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    def myfunc():
        return x
    funcs.append(myfunc)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    funcs.append(lambda x=x: x)
for f in funcs:
    print f()

funcs = []
for x in (1, 2, 3):
    def g(x):
        funcs.append(lambda: x)
    g(x)
for f in funcs:
    print f()

funcs = []
def append_a_func(x):
    funcs.append(lambda: x)
for x in (1, 2, 3):
    append_a_func(x)
for f in funcs:
    print f()

def make_func(x):
    return lambda: x
funcs = []
for x in (1, 2, 3):
    funcs.append(make_func(x))
for f in funcs:
    print f()

Sunday, 22 February 2009

OpenStreetMap

I tried out OpenStreetMap the other day after reading the recent article about it on LWN. Amazingly it looks pretty complete for the parts of the UK and US that I looked at. I don't know how much comes from people gathering data by travelling around or by entering data from satellite photos or out-of-copyright maps, but whichever of these it is, it's very impressive.

It is also better than Google Maps in some ways:

  • It has more details: it shows footpaths, rivers and streams, wooded areas, and paths across parks. It has outlines of interesting buildings where people have entered data for them.
  • I think the default renderer (Mapnik) looks better than the Google Maps equivalent, especially when zoomed out to a level where streets are one pixel thick but still distinguishable. Google Maps gives too much prominence to the major roads -- it renders them thickly, in bright colours, and with large labels, which tends to drown out the details. Mapnik is more subtle. The map just looks more interesting.
  • It uses more of the browser window and doesn't waste as much space on sidebars. It's almost a trivial point, but it makes a difference.

Friday, 16 January 2009

Testing using golden files in Python

This is the third post in a series about automated testing in Python (earlier posts: 1, 2). This post is about testing using golden files.

A golden test is a fancy way of doing assertEquals() on a string or a directory tree, where the expected output is kept in a separate file or files -- the golden files. If the actual output does not match the expected output, the test runner can optionally run an interactive file comparison tool such as Meld to display the differences and allow you to selectively merge the differences into the golden file.

This is useful when

  • the data being checked is large - too large to embed into the Python source; or
  • the data contains relatively inconsequential details, such as boilerplate text or formatting, which might be changed frequently.
As an example, suppose you have code to format some data as HTML. You can have a test which creates some example data, formats this as HTML, and compares the result to a golden file. Something like this:
import os

import golden_test

class ExampleTest(golden_test.GoldenTestCase):

    def test_formatting_html(self):
        obj = make_some_example_object()
        temp_dir = self.make_temp_dir()
        format_as_html(obj, temp_dir)
        self.assert_golden(temp_dir, os.path.join(os.path.dirname(__file__),
                                                  "golden-files"))

if __name__ == "__main__":
    golden_test.main()
By default, the test runs non-interactively, which is what you want on a continuous integration machine, and it will print a diff if it fails. To switch on the semi-interactive mode which runs Meld, you run the test with the option --meld.

Here is a simple version of the test helper (taken from here):

import os
import subprocess
import sys
import unittest

class GoldenTestCase(unittest.TestCase):

    run_meld = False

    def assert_golden(self, dir_got, dir_expect):
        assert os.path.exists(dir_expect), dir_expect
        proc = subprocess.Popen(["diff", "--recursive", "-u", "-N",
                                 "--exclude=.*", dir_expect, dir_got],
                                stdout=subprocess.PIPE)
        stdout, stderr = proc.communicate()
        if len(stdout) > 0:
            if self.run_meld:
                # Put expected output on the right because that is the
                # side we usually edit.
                subprocess.call(["meld", dir_got, dir_expect])
            raise AssertionError(
                "Differences from golden files found.\n"
                "Try running with --meld to update golden files.\n"
                "%s" % stdout)
        self.assertEquals(proc.wait(), 0)

def main():
    if sys.argv[1:2] == ["--meld"]:
        GoldenTestCase.run_meld = True
        sys.argv.pop(1)
    unittest.main()
(It's a bit cheesy to modify global state on startup to enable melding, but because unittest doesn't make it easy to pass parameters into tests this is the simplest way of doing it.)

Golden tests have the same sort of advantages that are associated with test-driven development in general.

  • Golden files are checked into version control and help to make changesets self-documenting. A changeset that affects the program's output will include patches that demonstrate how the output is affected. You can see the history of the program's output in version control. (This assumes that everyone runs the tests before committing!)
  • Sometimes you can point people at the golden files if they want to see example output. For HTML, sometimes you can contrive the CSS links to work so that the HTML looks right when viewed in a browser.
  • And of course, this can catch cases where you didn't intend to change the program's output.
My typical workflow is to add a test with some example input that I want the program to handle, and run the test with --meld until the output I want comes up on the left-hand side in Meld. I mark the output as OK by copying it over to the right-hand side. This is not quite test-first, because I am letting the test suggest what its expected output should be. But it is possible to do this in an even more test-first manner by typing the output you expect into the right-hand side.

Other times, one change will affect many locations in the golden files, and adding a new test is not necessary. It's usually not too difficult to quickly eyeball the differences with Meld.

Here are some of the things that I have used golden files to test:

  • formatting of automatically-generated e-mails
  • automatically generated configuration files
  • HTML formatting logs (build_log_test.py and its golden files)
  • pretty-printed output of X Windows messages in xjack-xcb (golden_test.py and golden_test_data). This ended up testing several components in one go:
    • the XCB protocol definitions for X11
    • the encoders and decoders that work off of the XCB protocol definitions
    • the pretty printer for the decoder
On the other hand, sometimes the overhead of having a separate file isn't worth it, and if the expected output is small (maybe 5 lines or so) I might instead paste it into the Python source as a multiline string.

It can be tempting to overuse golden tests. As with any test suite, avoid creating one big example that tries to cover all cases. (This is particularly tempting if the test is slow to run.) Try to create smaller, separate examples. The test helper above is not so good at this, because if you are not careful it can end up running Meld several times. In the past I have put several examples into (from unittest's point of view) one test case, so that it runs Meld only once.

Golden tests might not work so well if components can be changed independently. For example, if your XML library changes its whitespace pretty-printing, the tests' output could change. This is less of a problem if your code is deployed with tightly-controlled versions of libraries, because you can just update the golden files when you upgrade libraries.

A note on terminology: I think I got the term "golden file" from my previous workplace, where other people were using them, and the term seems to enjoy some limited use judging from Google. "Golden test", however, may have been a term that I have made up and that no-one else outside my workplace is using for this meaning.

Sunday, 11 January 2009

On ABI and API compatibility

If you are going to create a new execution environment, such as a new OS, it can make things simpler if it presents the same interfaces as an existing system. It makes porting easier. So,
  • If you can, keep the ABI the same.
  • If you can't keep the ABI the same, at least keep the API the same.

Don't be tempted to say "we're changing X; we may as well take this opportunity to change Y, which has always bugged me". Only change things if there is a good reason.

For an example, let's look at the case of GNU Hurd.

  • In principle, the Hurd's glibc could present the same ABI as Linux's glibc (they share the same codebase, after all), but partly because of a difference in threading libraries, they were made incompatible. Unifying the ABIs was planned, but it appears that 10 years later it has not happened (Hurd has a libc0.3 package instead of libc6).

    Using the same ABI would have meant that the same executables would work on Linux and the Hurd. Debian would not have needed to rebuild all its packages for a separate "hurd-i386" architecture. It would have saved a lot of effort.

    I suspect that if glibc were ported to the Hurd today, it would not be hard to make the ABIs the same. The threading code has changed a lot in the intervening time. I think it is cleaner now.

  • The Hurd's glibc also changed the API: they decided not to define PATH_MAX. The idea was that if there was a program that used fixed-length buffers for storing filenames, you'd be forced to fix it. Well, that wasn't a good idea. It just created unnecessary work. Hurd developers and users had enough on their plates without having to fix unrelated implementation quality issues in programs they wanted to use.
Similarly, it would help if glibc for NaCl (Native Client) could have the same ABI as glibc for Linux. Programs have to be recompiled for NaCl with nacl-gcc to meet the requirements of NaCl's validator, but could the resulting code still run directly on Linux, linked with the regular i386 Linux glibc and other libraries?

The problem here is that code compiled for NaCl will expect all code addresses to be 32-byte-aligned. If the NaCl code is given a function address or a return address which is not 32-byte-aligned, things will go badly wrong when it tries to jump to it.

Some background: In normal x86 code, to return from a function you write this:
        ret
This pops an address off the stack and jumps to it. In NaCl, this becomes:
        popl %ecx
        and $0xffffffe0, %ecx
        jmp *%ecx
This pops an address off the stack, rounds it down to the nearest 32 byte boundary and jumps to it. If the calling function's call instruction was not placed at the end of a 32 byte block (which NaCl's assembler will arrange), the return address will not be aligned and this code will jump to the wrong location.

However, there is a way around this. We can get the NaCl assembler and linker to keep a list of all the places where a forcible alignment instruction (the and $0xffffffe0, %ecx above) was inserted, and put this list into the executable or library in a special section or segment. Then when we want to run the executable or library directly on Linux, we can rewrite all these locations so that the sequence above becomes

        popl %ecx
        nop
        nop
        jmp *%ecx
or maybe even just
        ret
        nop
        nop
        nop
        nop
        nop
We can reuse the relocations mechanism to store these rewrites. The crafty old linker already does something similar for thread-local variable accesses. When it knows that a thread-local variable is being accessed from the library where it is defined, it can rewrite the general-purpose-but-slow instruction sequence for TLS variable access into a faster instruction sequence. The general purpose instruction sequence even contains nops to allow for rewriting to the slightly-longer fast sequence.
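
As a minimal sketch of the rewriting step -- assuming, purely for illustration, that the recorded locations are read out of the special section as (offset, length) pairs, one per inserted and instruction -- the patching itself is simple:
def neutralise_alignment_masks(text_segment, records):
    # text_segment: bytearray containing the code to patch in place.
    # records: [(offset, length), ...] giving the location of each
    # forcible-alignment instruction (the on-disk format here is hypothetical).
    NOP = 0x90
    for offset, length in records:
        for i in range(length):
            text_segment[offset + i] = NOP
    return text_segment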

This arrangement for running NaCl-compiled code could significantly simplify the process of building and testing code when porting it to NaCl. It can help us avoid the difficulties associated with cross-compiling.

Sunday, 4 January 2009

What does NaCl mean for Plash?

Google's Native Client (NaCl), announced last month, is an ingenious hack to get around the problem that existing OSes don't provide adequate security mechanisms for sandboxing native code.

You can look at NaCl as an interesting combination of OS-based and language-based security mechanisms:

  • NaCl uses a code verifier to prevent use of unsafe instructions such as those that perform system calls. This is not a million miles away from programming language subsets like Cajita and Joe-E, except that it operates at the level of x86 instructions rather than source code.

    Since x86 instructions are variable-length and unaligned, NaCl has to stop you from jumping into an unsafe instruction hidden in the middle of a safe instruction. It does that by requiring that all indirect jumps are jumps to the start of 32-byte-aligned blocks; instructions are not allowed to straddle these blocks.

  • It uses the x86 architecture's little-used segmentation feature to limit memory accesses to a range of address space. So the processor is doing bounds checking for free.

    Actually, segmentation has been used before - in L4 and EROS's "small spaces" facility for switching between processes with small address spaces without flushing the TLB. NaCl gets the same benefit: switching between trusted and untrusted code should be fast; faster than trapping system calls with ptrace(), for example.

Plash is also a hack to get sandboxing, specifically on Linux, but it has some limitations:
  1. it doesn't block network access;
  2. it doesn't limit CPU and memory resource usage, so sandboxed programs can still cause denial of service;
  3. it requires a custom glibc, which can be a pain to build;
  4. it changes the API/ABI that sandboxed programs see in some small but significant ways:
    • some syscalls are effectively disabled; programs must go through libc for these calls, which stops statically linked programs from working;
    • /proc/self doesn't work, and Plash's architecture makes it hard to emulate /proc.
Interface changes mean some programs require patching, e.g. to not depend on /proc. These problems can be addressed with work, and if there were more people behind Plash they wouldn't be a big deal. But Plash hasn't really caught on, so the manpower isn't there.

NaCl also breaks the ABI -- it breaks it totally: code must be recompiled. However, NaCl provides bigger benefits in return. It allows programs to be deployed in new contexts: on Windows; in a web browser. It is more secure than Plash, because it can block network access and limit the amount of memory a process can allocate. Also, because NaCl mediates access more completely, it would be easier to emulate interfaces like /proc.

NaCl isn't only useful as a browser plugin. We could use it as a general purpose OS security mechanism. We could have GNU/Linux programs running on Windows (without the Linux bit).

Currently NaCl does not support all the features you'd need in a modern OS. In particular, dynamic linking. NaCl doesn't yet support loading code beyond an initial statically linked ELF executable. But we can add this. I am making a start at porting glibc, along with its dynamic linker. After all, I have ported glibc once before!