Saturday, 23 October 2010

Process descriptors in FreeBSD-Capsicum

Capsicum is a set of new features for FreeBSD that adds better support for sandboxing. It introduces a capability mode in which capabilities are Unix file descriptors (FDs). The features Capsicum adds are orthogonal, which is nice. One of the new features is process descriptors.

Capsicum adds a replacement for fork() called pdfork(), which returns a process descriptor (a new type of FD) rather than a PID. Similarly, there are replacements for wait() and kill() -- pdwait() and pdkill() -- which take FDs as arguments instead of PIDs.

The reason for the new interface is that kill() is not safe to allow in Capsicum's sandbox, because it provides ambient authority: it looks up its PID argument in a global namespace.

But even if you ignore sandboxing issues, this new interface is a significant improvement on POSIX process management:

  1. It allows the right to wait on a process to be delegated to another process. In contrast, with wait()/waitpid(), a process's exit status can only be read by the process's parent.
  2. Process descriptors can be used with poll(). This avoids the awkwardness of having to use SIGCHLD, which doesn't work well if multiple libraries within the same process want to wait() for child processes. (See the sketch after this list.)
  3. It gets rid of the race condition associated with kill(). Sending a signal to a PID is dodgy because the original process with this PID could have exited, and the kernel could have recycled the PID for an unrelated process, especially on a system where processes are spawned and exit frequently.

    kill() is only really safe when used by a parent process on its child, and only when the parent makes sure to use it before wait() has returned the child's exit status. pdkill() gets rid of this problem.

  4. In future, process descriptors can be extended to provide access to the process's internal state for debugging purposes, e.g. for reading registers and memory, or modifying memory mappings or the FD table. This would be an improvement on Linux's ptrace() interface.
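
To make this concrete, here is a minimal, untested sketch of waiting for a child via poll() on its process descriptor instead of SIGCHLD and wait(). It assumes the pdfork()/pdkill() prototypes described in the Capsicum papers (pid_t pdfork(int *fdp, int flags), int pdkill(int fd, int signum)) and a <sys/procdesc.h> header; the details may differ in what finally ships in FreeBSD:

#include <sys/types.h>
#include <sys/procdesc.h>   /* assumed header for pdfork()/pdkill() */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pd;
    pid_t pid = pdfork(&pd, 0);   /* like fork(), but also yields a process descriptor */
    if (pid == 0) {
        /* Child: do some work, then exit. */
        sleep(1);
        _exit(42);
    }
    /* Parent: wait for the child by polling the descriptor -- no SIGCHLD needed. */
    struct pollfd pfd = { .fd = pd, .events = POLLHUP };
    poll(&pfd, 1, -1);            /* POLLHUP is reported when the child exits */
    printf("child exited\n");
    /* pdkill(pd, SIGTERM) would signal the child via the descriptor, with no PID race. */
    close(pd);                    /* with kill-on-close, this terminates a still-running child */
    return 0;
}

Because the child is named by a descriptor rather than a PID, the descriptor can be handed to another process (or to a library within the same process) to delegate the wait.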

However, there is one aspect of Capsicum's process descriptors that I think was a mistake: Dropping the process descriptor for a process causes the kernel to kill it. (By this I mean that if there are no more references to the process descriptor, because they have been close()'d or because the processes holding them have exited, the kernel will terminate the process.)

The usual principle of garbage collection in programming languages is that GC should not affect the observable behaviour of the program (except for resource usage). Capsicum's kill-on-close() behaviour violates that principle.

kill-on-close() is based on the assumption that the launcher of a subprocess always wants to check the subprocess's exit status. But this is not always the case. Exit status is just one communications channel, and not one we always care about. It's common to launch a process to communicate with it via IPC without wanting to have to wait() for it. (Double fork()ing is a way to do this in POSIX without leaving a zombie process.) Many processes are fire-and-forget.

The situation is analogous for threads. pthreads provides pthread_detach() and PTHREAD_CREATE_DETACHED as a way to launch a thread without having to pthread_join() it.
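
For comparison, here is the detached-thread version in pthreads (a minimal example with error handling omitted): the thread is launched, nobody ever joins it, and its resources are reclaimed automatically when it finishes.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* Fire-and-forget work; no one will pthread_join() this thread. */
    printf("worker running\n");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&tid, &attr, worker, NULL);   /* joining a detached thread is not allowed */
    pthread_attr_destroy(&attr);
    pthread_exit(NULL);   /* ends main() but lets the process run until all threads finish */
}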

In Python, when you create a thread, you get a Thread object back, but if the Thread object is GC'd the thread won't be killed. In fact, finalisation of a thread object is implemented using pthread_detach() on Unix.

I know of one language implementation, Mozart/Oz, that (as I recall) will GC threads, but only when a thread can have no further observable effects. In Oz, if a thread blocks waiting for the resolution of a logic variable that can never be resolved (because no other thread holds a reference to it), then the thread can be GC'd. So deadlocked threads can be GC'd just fine. Similarly, if a thread holds no communications channels to the outside world, it can be GC'd safely.

Admittedly, Unix already violates this GC principle by the finalisation of pipe FDs and socket FDs, because dropping one endpoint of the socket/pipe pair is visible as an EOF condition on the other endpoint. However, this is usually used to free up resources or to unblock a process that would otherwise fail to make progress. Dropping a process descriptor would halt progress -- the opposite. Socket/pipe EOF can be used to do this too, but it is less common, and I'm not sure we should encourage this.

However, aside from this complaint, process descriptors are a good idea. It would be good to see them implemented in Linux.

Wednesday, 11 August 2010

My workflow with git-cl + Rietveld

Git's model of changes (which is shared by Mercurial, Bazaar and Monotone) makes it awkward to revise earlier patches. This can make things difficult when you are sending out multiple, dependent changes for code review.

Suppose I create changes A and B. B depends functionally on A, i.e. tests will not pass for B without A also being applied. There might or might not be a textual dependency (B might or might not modify lines of code modified by A).

Because code review is slow (high latency), I need to be able to send out changes A and B for review and still be able to continue working on further changes. But I also need to be able to revisit A to make changes to it based on review feedback, and then make sure B works with the revised A.

What I do is create separate branches for A and B, where B branches off of A. To revise change A, I "git checkout" its branch and add further commits. Later I can update B by checking it out and rebasing it onto the current tip of A. Uploading A or B to the review system or committing A or B upstream (to SVN) involves squashing their branch's commits into one commit. (This squashing means the branches contain micro-history that reviewers don't see and which is not kept after changes are pushed upstream.)

The review system in question is Rietveld, the code review web app used for Chromium and Native Client development. Rietveld does not have any special support for patch series -- it is only designed to handle one patch at a time, so it does not know about dependencies between changes. The tool for uploading changes from Git to Rietveld and later committing them to SVN is "git-cl" (part of depot_tools).

git-cl is intended to be used with one branch per change-under-review. However, it does not have much support for handling changes which depend on each other.

This workflow has a lot of problems:

  • When using git-cl on its own, I have to manually keep track of the fact that B is to be rebased onto A. When uploading B to Rietveld, I must do "git cl upload A". When updating B, I must first do "git rebase A". When diffing B, I have to do "git diff A". (I have written a tool to do this. It's not very good, but it's better than doing it manually.)
  • Rebasing B often produces conflicts if A has been squash-committed to SVN. That's because if branch A contained multiple patches, Git doesn't know how to skip over patches from A that are in branch B.
  • Rebasing loses history. Undoing a rebase is not easy.
  • In the case where B doesn't depend on A, rebasing branch B so that it doesn't include the contents of branch A is a pain. (Sometimes I will stack B on top of A even when it doesn't depend on A, so that I can test the changes together. An alternative is to create a temporary branch and "git merge" A and B into it, but creating further branches adds to the complexity.)
  • If there is a conflict, I don't find out about it until I check out and update the affected branch.
  • This gets even more painful if I want to maintain changes that are not yet ready for committing or posting for review, and apply them alongside changes that are ready for review.

These are all reasons why I would not recommend this workflow to someone who is not already very familiar with Git.

The social solution to this problem would be for code reviews to happen faster, which would reduce the need to stack up changes. If all code reviews reached a conclusion within 24 hours, that would be an improvement. But I don't think that is going to happen.

The technical solution would be better patch management tools. I am increasingly thinking that Darcs' set-of-patches model would work better for this than Git's DAG-of-commits model. If I could set individual patches to be temporarily applied or unapplied to the working copy, and reorder and group patches, I think it would be easier to revisit changes that I have posted for review.

Friday, 6 August 2010

CVS's problems resurface in Git

Although modern version control systems have improved a lot on CVS, I get the feeling that there is a fundamental version control problem that the modern VCSes (Git, Mercurial, Bazaar, and I'll include Subversion too!) haven't solved. The curious thing is that CVS had sort of taken some steps towards addressing it.

In CVS, history is stored per file. If you commit a change that crosses multiple files, CVS updates each file's history separately. This causes a bunch of problems:

  • CVS does not represent changesets or snapshots as first class objects. As a result, many operations involve visiting every file's history.

    Reconstructing a changeset involves searching all files' histories to match up the individual file changes. (This was just about possible, though I hear there are tricky corner cases. Later CVS added a commit ID field that presumably helped with this.)

    Creating a tag at the latest revision involves adding a tag to every file's history. Reconstructing a tag, or a time-based snapshot, involves visiting every file's history again.

  • CVS does not represent file renamings, so the standard history tools like "cvs log" and "cvs annotate" are not able to follow a file's history from before it was renamed.

In the DAG-based decentralised VCSes (Git, Mercurial, Monotone, Bazaar), history is stored per repository. The fundamental data structure for history is a Directed Acyclic Graph of commit objects. Each commit points to a snapshot of the entire file tree plus zero or more parent commits. This addresses CVS's problems:

  • Extracting changesets is easy because they are the same thing as commit objects.
  • Creating a tag is cheap and easy. Recording any change creates a commit object (a snapshot-with-history), so creating a tag is as simple as pointing to an already-existing commit object.

However, often it is not practical to put all the code that you're interested in into a single Git repository! (I pick on Git here because, of the DAG-based systems, it is the one I am most familiar with.) While it can be practical to do this with Subversion or CVS, it is less practical with the DAG-based decentralised VCSes:

  • In the DAG-based systems, branching is done at the level of a repository. You cannot branch and merge subdirectories of a repository independently: you cannot create a commit that only partially merges two parent commits.
  • Checking out a Git repository involves downloading not only the entire current revision, but the entire history. So this creates pressure against putting two partially-related projects together in the same repository, especially if one of the projects is huge.
  • Existing projects might already use separate repositories. It is usually not practical to combine those repositories into a single repository, because that would create a repo that is incompatible with the original repos. That would make it difficult to merge upstream changes. Patch sharing would become awkward because the filenames in patches would need fixing.

This all means that when you start projects, you have to decide how to split your code among repositories. Changing these decisions later is not at all straightforward.

The result of this is that CVS's problems have not really been solved: they have just been pushed up a level. The problems that occurred at the level of individual files now occur at the level of repositories:

  • The DAG-based systems don't represent changesets that cross repositories. They don't have a type of object for representing a snapshot across repositories.
  • Creating a tag across repositories would involve visiting every repository to add a tag to it.
  • There is no support for moving files between repositories while tracking the history of the file.

The funny thing is that since CVS hit this problem all the time, the CVS tools were better at dealing with multiple histories than Git's tools are.

To compare the two, imagine that instead of putting your project in a single Git repository, you put each one of the project's files in a separate Git repository. This would result in a history representation that is roughly equivalent to CVS's history representation. i.e. Every file has its own separate history graph.

  • To check in changes to multiple files, you have to "cd" to each file's repository directory, and "git commit" and "git push" the file change.
  • To update to a new upstream version, or to switch branch, you have to "cd" to each file's repository directory again to do "git pull/fetch/rebase/checkout" or whatever.
  • Correlating history across files must be done manually. You could run "git log" or "gitk" on two repositories and match up the timelines or commit messages by hand. I don't know of any tools for doing this.

In contrast, for CVS, "cvs commit" works across multiple files and (if I remember rightly) even across multiple working directories. "cvs update" works across multiple files.

While "cvs log" doesn't work across multiple files, there is a tool called "CVS Monitor" which reconstructs history and changesets across files.

Experience with CVS suggests that Git could be changed to handle the multiple-repository case better. "git commit", "git checkout" etc. could be changed to operate across multiple Git working copies. Maybe "git log" and "gitk" could gain options to interleave histories by timestamp.

Of course, that would lead to cross-repo support that is only as good as CVS's cross-file support. We might be able to apply a textual tag name across multiple Git repos with a single command, just as a tag name can be applied across files with "cvs tag". But that doesn't give us an immutable tag object that spans repos.

My point is that the fundamental data structure used in the DAG-based systems doesn't solve CVS's problem, it just postpones it to a larger level of granularity. Some possible solutions to the problem are DEPS files (as used by Chromium), Git submodules, or Darcs-style set-of-patches repos. These all introduce new data structures. Do any of these solve the original problem? I am undecided -- this question will have to wait for another post. :-)

Wednesday, 5 May 2010

The trouble with Buildbot

The trouble with Buildbot is that it encourages you to put rules into a Buildbot-specific build configuration that is separate from the normal configuration files that you might use to build a project (configure scripts, makefiles, etc.).

This is not a big problem if your Buildbot configuration is simple and just consists of, say, "svn up", "./configure", "make", "make test", and never changes.

But it is a problem if your Buildbot configuration becomes non-trivial and ever has to be updated, because the Buildbot configuration cannot be tested outside of Buildbot.

The last time I had to maintain a Buildbot setup, it was necessary to try out configuration changes directly on the Buildbot master. This doesn't work out well if multiple people are responsible for maintaining the setup! Whoever makes a change has to remember to check it in to version control after they've got it working, which of course doesn't always happen. It's a bit ironic that Buildbot is supposed to support automated testing but doesn't follow best practices for testing itself.

There is a simple way around this though: Instead of putting those separate steps -- "./configure", "make", "make test" -- into the Buildbot config, put them into a script, check the script into version control, and have the Buildbot config run that script. Then the Buildbot config just consists of doing "svn up" and running the script. It is then possible to test changes to the script before checking it in. I've written scripts like this that go as far as debootstrapping a fresh Ubuntu chroot to run tests in, which ensures your package dependency list is up to date.

Unfortunately, Buildbot's logging facilities don't encourage having a minimal Buildbot config.

If you use a complicated Buildbot configuration with many Buildbot steps, Buildbot can display each step separately in its HTML-formatted logs. This means:

  • you can see progress;
  • you can see which steps have failed;
  • you'd be able to see how long the steps take if Buildbot actually displayed that.

Whereas if you have one big build script in a single Buildbot step, all the output goes into one big, flat, plain text log file.

I think the solution is to decouple the structured-logging functionality from the glorified-cron functionality that Buildbot provides. My implementation of this is build_log.py (used in Plash's build scripts), which I'll write more about later.

Saturday, 1 May 2010

Breakpoints in gdb using int3

Here is a useful trick I discovered recently while debugging some changes to the seccomp sandbox. To trigger a breakpoint on x86, just do:
__asm__("int3");

Then it is possible to inspect registers, memory, the call stack, etc. in gdb, without having to get gdb to set a breakpoint. The instruction triggers a SIGTRAP, so if the process is not running under gdb and has no signal handler for SIGTRAP, the process will die.

This technique seems to be fairly well known, although it's not mentioned in the gdb documentation. int3 is the instruction that gdb uses internally for setting breakpoints.

Sometimes it's easier to insert an int3 and rebuild than get gdb to set a breakpoint. For example, setting a gdb breakpoint on a line won't work in the middle of a chunk of inline assembly.
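
If you do this a lot, it is worth wrapping the instruction in a macro so that breakpoints can be compiled out. A small sketch, assuming GCC-style inline assembly on x86/x86-64 (the macro name and the DEBUG_BREAKPOINTS flag are my own invention):

/* Hypothetical helper: compile with -DDEBUG_BREAKPOINTS to enable. */
#if defined(DEBUG_BREAKPOINTS) && (defined(__i386__) || defined(__x86_64__))
# define BREAKPOINT() __asm__ __volatile__("int3")
#else
# define BREAKPOINT() ((void) 0)   /* compiles to nothing in normal builds */
#endif

Without the flag, BREAKPOINT() expands to nothing, so a stray breakpoint can't make it into a release build and kill the process with SIGTRAP.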

My expectations of gdb are pretty low these days. When I try to use it to debug something low level, it often doesn't work, which is why I have been motivated to hack together my own debugging tools in the past. For example, if I run gdb on the glibc dynamic linker (ld.so) on Ubuntu Hardy or Karmic, it gives:

$ gdb /lib/ld-linux-x86-64.so.2 
...
(gdb) run
Starting program: /lib/ld-linux-x86-64.so.2 
Cannot access memory at address 0x21ec88
(gdb) 

So it's nice to find a case where I can get some useful information out of gdb.

Monday, 1 February 2010

How to build adb, the Android debugger

adb is the Android debugger (officially the "Android debug bridge" I think). It is a tool for getting shell access to an Android phone across a USB connection. It can also be used to copy files to and from the Android device and do port-forwarding. In short, it is similar to ssh, but is not ssh. (Why couldn't they have just used ssh?)

I have not been able to find any Debian/Ubuntu packages for adb. The reason why it has not been packaged becomes apparent if you try to build the thing. Android has a monolithic build system which wants to download a long list of Git repositories and build everything. If you follow the instructions, it will download 1.1Gb from Git and leave you with a source+build directory of 6Gb. It isn't really designed for building subsets of components, unlike, say, JHBuild. It's basically a huge makefile. It doesn't know about dependencies between components. However, it does have some idea about dependencies between output files.

Based on a build of all of Android, I figured out how to build a much smaller subset containing adb. This downloads a more manageable 11Mb and finishes with a source+build directory of 40Mb. This is also preferable to downloading the pre-built Android SDK, which has a non-free licence.

Instructions:

$ sudo apt-get install build-essential libncurses5-dev
$ git clone git://android.git.kernel.org/platform/system/core.git system/core
$ git clone git://android.git.kernel.org/platform/build.git build
$ git clone git://android.git.kernel.org/platform/external/zlib.git external/zlib
$ git clone git://android.git.kernel.org/platform/bionic.git bionic
$ echo "include build/core/main.mk" >Makefile

Now edit build/core/main.mk and comment out the parts labelled

 # Check for the correct version of java
and
 # Check for the correct version of javac
Since adb doesn't need Java, these checks are unnecessary.

Also edit build/target/product/sdk.mk and comment out the "include" lines after

 # include available languages for TTS in the system image
I don't know exactly what this is about but it avoids having to download language files that aren't needed for adb. Then building the adb target should work:
make out/host/linux-x86/bin/adb
If you try running "adb shell" you might get this:
ubuntu$ ./out/host/linux-x86/bin/adb shell
* daemon not running. starting it now *
* daemon started successfully *
error: insufficient permissions for device
So you probably need to do "adb start-server" as root first:
ubuntu$ sudo ./out/host/linux-x86/bin/adb kill-server
ubuntu$ sudo ./out/host/linux-x86/bin/adb start-server
* daemon not running. starting it now *
* daemon started successfully *
ubuntu$ ./out/host/linux-x86/bin/adb shell
$
For the record, here are the errors I got that motivated each step:
  • make: *** No rule to make target `external/svox/pico/lang/PicoLangItItInSystem.mk'.  Stop.
    
    - hence commenting out the picolang includes.
  • system/core/libzipfile/zipfile.c:6:18: error: zlib.h: No such file or directory
    
    - I'm guessing adb needs libzipfile which needs zlib.
  • system/core/libcutils/mspace.c:59:50: error: ../../../bionic/libc/bionic/dlmalloc.c: No such file or directory
    
    - This is why we need to download bionic (the C library used on Android), even though we aren't building any code to run on an Android device. This is the ugliest part and it illustrates why this is not a modular build system. The code does
    #include "../../../bionic/libc/bionic/dlmalloc.c"
    
    to #include a file from another module. It seems any part of the build can refer to any other part, via relative pathnames, so the modules cannot be built separately. I don't know whether this is an isolated case, but it makes it difficult to put adb into a Debian package.
  • host Executable: adb (out/host/linux-x86/obj/EXECUTABLES/adb_intermediates/adb)
    /usr/bin/ld: cannot find -lncurses
    
    - hence the ncurses-dev dependency above. However, this error is a little odd because if adb really depended on ncurses, it would fail when it tried to #include a header file. Linking with "-lncurses" is probably superfluous.

The instructions above will probably stop working as new versions are pushed to the public Git branches. (However, this happens infrequently because Android development is not done in the open.) For reproducibility, here are the Git commit IDs:

$ find -name "*.git" -exec sh -c 'echo "`git --git-dir={} rev-parse HEAD` {}"' ';'
91a54c11cbfbe3adc1df2f523c75ad76affb0ae9 ./system/core/.git
95604529ec25fe7923ba88312c590f38aa5e3d9e ./bionic/.git
890bf930c34d855a6fbb4d09463c1541e90381d0 ./external/zlib/.git
b7c844e7cf05b4cea629178bfa793321391d21de ./build/.git
It looks like the current version is Android 1.6 (Donut):
$ find -name "*.git" -exec sh -c 'echo "`git --git-dir={} describe` {}"' ';'
android-1.6_r1-80-g91a54c1 ./system/core/.git
android-1.6_r1-43-g9560452 ./bionic/.git
android-1.6_r1-7-g890bf93 ./external/zlib/.git
android-sdk-1.6-docs_r1-65-gb7c844e ./build/.git

Monday, 25 January 2010

Why system calls should be message-passing

Message-passing syscalls on Linux include read(), write(), sendmsg() and recvmsg(). These are message-passing because:
  1. They take a file descriptor as an explicit argument. This specifies the object to send a message to or receive a message from.
  2. The message to send (or receive) consists of an array of bytes, and maybe an array of file descriptors too (via SCM_RIGHTS). The syscall interacts with the process's address space (or file descriptor table) in a well-defined, uniform way. The caller specifies which locations are read or written. The syscall acts as if it takes a copy of the message.
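
As a concrete illustration of point 2, this is roughly what sending a message consisting of some bytes plus one file descriptor looks like with sendmsg() over a Unix domain socket (a minimal sketch; error handling omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send a byte buffer plus one file descriptor to whatever object "sock" refers to.
   The caller states exactly which memory is read (the iovec) and which FDs are
   passed (the SCM_RIGHTS control message); the kernel copies the message. */
static ssize_t send_fd(int sock, const void *buf, size_t len, int fd_to_send)
{
    struct iovec iov = { (void *) buf, len };
    union {
        char bytes[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;              /* ensures correct alignment */
    } control;
    struct msghdr msg;
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control.bytes;
    msg.msg_controllen = sizeof(control.bytes);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;          /* this part of the message carries FDs */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0);         /* explicit object (sock), bytes and FDs */
}

The receiver uses recvmsg() with a similar msghdr. The point is that everything the syscall reads from or writes to the process is named explicitly by the caller.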

Linux has a lot of syscalls that are not message-passing because the object they operate on is not specified explicitly through a reference that authorises use of the object (such as a file descriptor). Instead they operate using the process's ambient authority. Examples:

  • open(), stat(), etc.: These operate on the file namespace (a combination of the process's root directory, current directory, and mount table; and, for /proc, the contents of the file namespace are also influenced by the process's identity).
  • kill(), ptrace(): These operate on the process ID namespace. (Unlike file descriptors, process IDs are not strong references. The mapping from process IDs to processes is ambiently available.)
  • mmap(), mprotect(): These operate on the process's address space, which is not a first class object.
Here are some advantages of implementing syscalls on top of a message-passing construct:
  1. It allows syscalls to be intercepted.

    Suppose that open() were just a library call implemented using sendmsg()/recvmsg() (as in Plash). It would send a message to a file namespace object (named via a file descriptor). This object can be replaced in order to tame the huge amount of authority that open() usually provides. (A hypothetical sketch of such a library call appears at the end of this post.)

  2. It allows syscalls to be disabled.

    open() could be disabled by providing a file namespace object that doesn't implement an open() method, or by not providing a file namespace object.

  3. It can avoid race conditions in filtering syscalls.

    In the past, people have attempted to use ptrace() to sandbox processes and give them limited access to the filesystem, by checking syscalls such as open() and allowing them through selectively (Subterfugue is an example). This is difficult or impossible to do securely because of a TOCTTOU race condition. open() doesn't take a filename; it takes an address, in the current process's address space, of a filename. It is not enough to catch the start of the open() syscall, check the filename, and allow the syscall through. Another thread might change the filename in the meantime. (This is aside from the race conditions involved in interpreting symlinks.)

    Systrace went to some trouble to copy filenames in the kernel to allow a tracing process to see and provide a consistent snapshot. This would have been less ad-hoc if the kernel had a uniform message-passing system.

    See "Exploiting Concurrency Vulnerabilities in System Call Wrappers".

  4. It aids logging of syscalls.

    On Linux, strace needs to have logic for interpreting every single syscall, because each syscall passes arguments in different ways, including how it reads and writes memory and the file descriptor table.

    If all syscalls went through a common message-passing interface, strace would only need one piece of logic for recording what was read or written. Furthermore, logging could be separated from decoding and formatting (such as turning syscall numbers into names).

  5. It allows consistency of code paths in the kernel, avoiding bugs.

    Mac OS X had a vulnerability in the TIOCGWINSZ ioctl(), which reads the width and height of a terminal window. The bug was that it would write directly to the address provided by the process, without checking whether the address was valid. This allowed any process to take over the kernel by writing to kernel memory.

    This wouldn't happen if ioctl() were message-passing, because all writing to the process's address space would be done in one place, in the syscall's return path. Forgetting the check would be much less likely.

    This bug demonstrates why ioctl() is dangerous. ioctl() should really be considered as a (huge) family of syscalls, not a single syscall, because each ioctl number (such as TIOCGWINSZ) can read or write address space, and sometimes the file descriptor table, in a different way.

  6. It enables implementations of interfaces to be moved between the kernel and userland.

    If the mechanism used to talk to the kernel is the same as the mechanism used to talk to other userland processes, processes can be agnostic as to whether the interfaces they use are implemented by the kernel or by another userland process.

    For example, NLTSS allowed the filesystem to be in-kernel (faster) or in a userland process (more robust and secure). So it was possible to flip a switch to trade off speed and robustness.

  7. It allows implementations of interfaces to be in-process too.

    This allows further performance tradeoffs. The pathname lookup logic of open() can be moved between the calling process and a separate process. For speed, pathname lookup can be placed in the process that implements the filesystem (as in Plash currently) in order to avoid doing a cross-process call for each pathname element. Alternatively, pathname lookup can be done in libc (as in the Hurd).

  8. It can help with versioning of system interfaces.

    Stable interfaces are nice, but the ability to evolve interfaces is nice too.

    Using object-based message-passing interfaces instead of raw syscalls can help with that. You can introduce new objects, or add new methods to existing objects. Old, obsolete interfaces can be defined in terms of new interfaces, and transparently implemented outside the kernel. New interfaces can be exposed selectively rather than system-wide.

  9. It does not have to hurt performance.

    Objects can still be implemented in the kernel. For example, in EROS (and KeyKOS/CapROS/Coyotos), various object types are implemented by the kernel, but are invoked through the same capability invocation mechanism as userland-implemented objects.

    Object invocations can be synchronous (call-return). They do not have to go via an asynchronous message queue. The kernel can provide a send-message-and-wait-for-reply syscall that is equivalent to a sendmsg()+recvmsg() combo but faster. L4 and EROS provide syscalls like this.
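
Finally, to illustrate point 1 from the list above, here is a purely hypothetical sketch of open() as a library call that sends a message to a file namespace object and receives the opened FD in the reply. The method number, message layout and fs_server_fd variable are all made up for illustration; Plash's real protocol differs, but the shape is the same: marshal the arguments into bytes, sendmsg() them to an object named by an FD, and recvmsg() a reply that carries the new FD via SCM_RIGHTS.

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define METHOD_OPEN 1        /* made-up method number */

extern int fs_server_fd;     /* hypothetical socket to the file namespace object */

int my_open(const char *pathname, int flags)
{
    /* Request: method number and flags, followed by the pathname bytes. */
    uint32_t header[2] = { METHOD_OPEN, (uint32_t) flags };
    struct iovec iov[2] = {
        { header, sizeof(header) },
        { (void *) pathname, strlen(pathname) + 1 },
    };
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = iov;
    msg.msg_iovlen = 2;
    if (sendmsg(fs_server_fd, &msg, 0) < 0)
        return -1;

    /* Reply: a result code, with the opened FD attached via SCM_RIGHTS. */
    int32_t result;
    struct iovec riov = { &result, sizeof(result) };
    union {
        char bytes[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } control;
    struct msghdr reply;
    memset(&reply, 0, sizeof(reply));
    reply.msg_iov = &riov;
    reply.msg_iovlen = 1;
    reply.msg_control = control.bytes;
    reply.msg_controllen = sizeof(control.bytes);
    if (recvmsg(fs_server_fd, &reply, 0) < (ssize_t) sizeof(result) || result < 0)
        return -1;

    int fd = -1;
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&reply);
    if (cmsg != NULL && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;               /* behaves like open(): a new FD, or -1 on error */
}

Nothing here is specific to open(): the kernel's only job is to copy a well-described message, and everything open()-specific lives in the object at the other end. That is what makes the call easy to intercept, disable, or log.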