Thursday, 4 November 2010

An introduction to FreeBSD-Capsicum

In my last blog post, I described one of the features in FreeBSD-Capsicum: process descriptors. Now it's time for an overview of Capsicum.

Capsicum is a set of new features for FreeBSD that adds better support for sandboxing, using a capability model in which the capabilities are Unix file descriptors (FDs).

Capsicum takes a fairly conservative approach, in that it does not make operations on file descriptors virtualisable. This approach has some limitations -- we do not get the advantages of having purely message-passing syscalls. However, it does mean that the new features are orthogonal.

The main new features are:

  • A per-process "capability mode", which is turned on via a new cap_enter() syscall.

    This mode disables any system call that provides ambient authority. So it disables system calls that use global namespaces, including the file namespace (e.g. open()), the PID namespace (e.g. kill()) and the network address namespace (e.g. connect()).

    This is not just a syscall filter, though. Some system calls optionally use a global namespace. For example, sendmsg() and sendto() optionally take a socket address. For openat(), the directory FD can effectively be bypassed by passing an absolute pathname (or AT_FDCWD). Capability mode disables those cases.

    Furthermore, capability mode disallows the use of ".." (parent directory) in filenames for openat() and the other *at() calls. This changes directory FDs to be limited-authority objects that convey access to a specific directory and not the whole filesystem. (It is interesting that this appears to be a property of the process, via capability mode, rather than of the directory FD itself.)

    Capability mode is inherited across fork and exec.

  • Finer-grained permissions for file descriptors. Each FD gets a large set of permission bits. A less-permissive copy of an FD can be created with cap_new(). For example, you can have read-only directory FDs, or non-seekable FDs for files. (A sketch combining cap_enter() and cap_new() appears after this list.)
  • Process descriptors. Capsicum doesn't allow kill() inside the sandbox because kill() uses a global namespace (the PID namespace). So Capsicum introduces process descriptors (a new FD type) as a replacement for process IDs, and adds pdfork(), pdwait() and pdkill() as replacements for fork(), wait() and kill().
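
To make the first two features concrete, here is a minimal C sketch of the intended pattern: acquire resources using ambient authority, derive a less-permissive capability, then enter capability mode and work only through the FDs you already hold. The header name, the CAP_* rights constants and the example path are assumptions for illustration and may not match a particular FreeBSD release.

/* Minimal sketch, not copied from a specific FreeBSD release: pre-open
 * resources, derive a restricted capability, then enter capability mode.
 * The header name and the CAP_* rights constants are assumptions. */
#include <sys/capability.h>   /* assumed location of cap_enter()/cap_new() */
#include <fcntl.h>
#include <unistd.h>
#include <err.h>

int main(void) {
    /* Acquire ambient-authority resources *before* sandboxing. */
    int dir_fd = open("/var/myapp/data", O_RDONLY);   /* placeholder path */
    if (dir_fd < 0)
        err(1, "open");

    /* Make a less-permissive copy: read and name lookup only. */
    int ro_dir = cap_new(dir_fd, CAP_READ | CAP_LOOKUP);
    if (ro_dir < 0)
        err(1, "cap_new");

    /* From here on, global namespaces are off limits. */
    if (cap_enter() < 0)
        err(1, "cap_enter");

    /* open("/etc/passwd", ...) would now fail, but openat() relative to
     * a directory FD we already hold still works (and ".." is refused). */
    int fd = openat(ro_dir, "config.txt", O_RDONLY);
    if (fd < 0)
        err(1, "openat");

    char buf[128];
    (void)read(fd, buf, sizeof(buf));
    return 0;
}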

Plus there are a couple of smaller features:

  • Message-based sockets. The Capsicum guys implemented Linux's SOCK_SEQPACKET interface for FreeBSD.
  • An fexecve() system call which takes a file descriptor for an executable. This replaces execve(), which is disabled in capability mode because execve() takes a filename.

    Capsicum's fexecve() ignores the implicit filename that is embedded in the executable's PT_INTERP field, so it is only good for loading the dynamic linker directly or for loading other statically linked executables. (A small usage sketch follows this list.)
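
Here is a hedged usage sketch of the fexecve() call: the executable FD must be obtained before entering capability mode, and because PT_INTERP is ignored it should refer to a statically linked binary or to the dynamic linker itself. The function and argument names below are illustrative.

/* Sketch: exec a pre-opened, statically linked helper from inside the
 * sandbox. A real program would open the FD before cap_enter() or
 * receive it over a socket. */
#include <fcntl.h>
#include <unistd.h>
#include <err.h>

static void run_helper(int exe_fd) {
    char *const argv[] = { "helper", NULL };
    char *const envp[] = { NULL };
    fexecve(exe_fd, argv, envp);
    err(1, "fexecve");   /* fexecve() only returns on error */
}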

Currently, the only programs that run under Capsicum are those that have been ported specially:

  • The Capsicum guys ported Chromium, and it works much the same way as on Linux. On both systems, Chromium's renderer process runs sandboxed, but the browser process does not. On both systems, Chromium needs to be able to turn on sandboxing after the process has started up, because it relies on legacy libraries that use open() during startup.
  • Some Unix utilities, including gzip and dhclient, have been extended to use sandboxing internally (privilege separation). Like Chromium, gzip can open files and then switch to capability mode.

However, it should be possible to run legacy Unix programs under Capsicum by porting Plash.

At first glance, it looks like Plash would have to do the same tricks under FreeBSD-Capsicum as it does under Linux to run legacy programs. Under Linux, Plash uses a modified version of glibc in order to intercept its system calls and convert them to system calls that work in the sandbox. That's because the Linux kernel doesn't provide any help with intercepting the system calls. The situation is similar under FreeBSD -- Capsicum does not add any extensions for bouncing syscalls back to a user space handler.

However, there are two aspects of FreeBSD that should make Plash easier to implement there than on Linux:

  • FreeBSD's libc is friendlier towards overriding its functions. On both systems, it is possible to override (for example) open() via an LD_PRELOAD library that defines its own "open" symbol. But with glibc on Linux, this doesn't work for libc's internal calls to open(), such as from fopen(). For a small gain in efficiency, these calls don't go through PLT entries and so cannot be intercepted. (A minimal sketch of such a wrapper appears after this list.)

    FreeBSD's libc doesn't use this optimisation and so it allows the internal calls to be intercepted too.

  • FreeBSD's dynamic linker and libc are not tightly coupled, so it is possible to change the dynamic linker to open its libraries via IPC calls without having to rebuild libc in lockstep.

    In contrast, Linux glibc's ld.so and libc.so are built together, share some data structures (such as TLS), and cannot be replaced independently.
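
To illustrate the first point, here is a minimal sketch of an LD_PRELOAD wrapper for open(). The file name, build command and logging are purely illustrative; a Plash-style implementation would translate the pathname into an IPC request to a broker process rather than falling through to the real open() via dlsym(RTLD_NEXT, ...).

/* wrap_open.c -- illustrative LD_PRELOAD interception of open().
 * Build (roughly): cc -fPIC -shared -o wrap_open.so wrap_open.c
 * Run:             LD_PRELOAD=./wrap_open.so some_program
 * On FreeBSD's libc this also catches libc-internal callers such as
 * fopen(); on Linux/glibc it only catches calls made through the PLT. */
#define _GNU_SOURCE          /* for RTLD_NEXT on glibc */
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

int open(const char *path, int flags, ...) {
    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    /* A Plash-like wrapper would turn this into an IPC request here. */
    fprintf(stderr, "intercepted open(\"%s\")\n", path);

    /* For this sketch, just fall through to the real open(). */
    int (*real_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    return real_open(path, flags, mode);
}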

Saturday, 23 October 2010

Process descriptors in FreeBSD-Capsicum

Capsicum is a set of new features for FreeBSD that adds better support for sandboxing, using a capability model in which the capabilities are Unix file descriptors (FDs). The features Capsicum adds are orthogonal, which is nice. One of the new features is process descriptors.

Capsicum adds a replacement for fork() called pdfork(), which returns a process descriptor (a new type of FD) rather than a PID. Similarly, there are replacements for wait() and kill() -- pdwait() and pdkill() -- which take FDs as arguments instead of PIDs.
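
Here is a rough sketch of how this might be used, with poll() spotting the child's exit. The exact prototypes, the header name and the poll() semantics are assumptions based on the Capsicum description rather than a specific FreeBSD release.

/* Sketch: process descriptors instead of PIDs. Prototypes and header
 * name are assumptions; the assumption here is that the descriptor
 * reports POLLHUP when the child exits. */
#include <sys/procdesc.h>   /* assumed header for pdfork()/pdkill() */
#include <poll.h>
#include <signal.h>
#include <unistd.h>
#include <err.h>

int main(void) {
    int pd;
    pid_t pid = pdfork(&pd, 0);   /* like fork(), but also returns an FD */
    if (pid < 0)
        err(1, "pdfork");

    if (pid == 0) {
        /* Child: do some work, then exit. */
        sleep(1);
        _exit(0);
    }

    /* Parent: wait up to 5 seconds for the child via poll(). */
    struct pollfd pfd = { .fd = pd, .events = POLLHUP };
    if (poll(&pfd, 1, 5000) == 0) {
        /* Timed out: kill the child via its descriptor -- no PID-reuse
         * race is possible, because the descriptor pins the process. */
        pdkill(pd, SIGTERM);
    }

    close(pd);   /* note: under Capsicum's kill-on-close behaviour
                    (discussed below) this also terminates the child
                    if it is still running */
    return 0;
}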

The reason for the new interface is that kill() is not safe to allow in Capsicum's sandbox, because it provides ambient authority: it looks up its PID argument in a global namespace.

But even if you ignore sandboxing issues, this new interface is a significant improvement on POSIX process management:

  1. It allows the right to wait on a process to be delegated to another process. In contrast, with wait()/waitpid(), a process's exit status can only be read by the process's parent.
  2. Process descriptors can be used with poll(). This avoids the awkwardness of having to use SIGCHLD, which doesn't work well if multiple libraries within the same process want to wait() for child processes.
  3. It gets rid of the race condition associated with kill(). Sending a signal to a PID is dodgy because the original process with this PID could have exited, and the kernel could have recycled the PID for an unrelated process, especially on a system where processes are spawned and exit frequently.

    kill() is only really safe when used by a parent process on its child, and only when the parent makes sure to use it before wait() has returned the child's exit status. pdkill() gets rid of this problem.

  4. In future, process descriptors can be extended to provide access to the process's internal state for debugging purposes, e.g. for reading registers and memory, or modifying memory mappings or the FD table. This would be an improvement on Linux's ptrace() interface.

However, there is one aspect of Capsicum's process descriptors that I think was a mistake: Dropping the process descriptor for a process causes the kernel to kill it. (By this I mean that if there are no more references to the process descriptor, because they have been close()'d or because the processes holding them have exited, the kernel will terminate the process.)

The usual principle of garbage collection in programming languages is that GC should not affect the observable behaviour of the program (except for resource usage). Capsicum's kill-on-close() behaviour violates that principle.

kill-on-close() is based on the assumption that the launcher of a subprocess always wants to check the subprocess's exit status. But this is not always the case. Exit status is just one communications channel, but not one we always care about. It's common to launch a process to communicate with it via IPC without wanting to have to wait() for it. (Double fork()ing is a way to do this in POSIX without leaving a zombie process.) Many processes are fire-and-forget.

The situation is analogous for threads. pthreads provides pthread_detach() and PTHREAD_CREATE_DETACHED as a way to launch a thread without having to pthread_join() it.

In Python, when you create a thread, you get a Thread object back, but if the Thread object is GC'd the thread won't be killed. In fact, finalisation of a thread object is implemented using pthread_detach() on Unix.

I know of one language implementation that (as I recall) will GC threads, Mozart/Oz, but this only happens when a thread can have no further observable effects. In Oz, if a thread blocks waiting for the resolution of a logic variable that can never be resolved (because no other thread holds a reference to it), then the thread can be GC'd. So deadlocked threads can be GC'd just fine. Similarly, if a thread holds no communications channels to the outside world, it can be GC'd safely.

Admittedly, Unix already violates this GC principle by the finalisation of pipe FDs and socket FDs, because dropping one endpoint of the socket/pipe pair is visible as an EOF condition on the other endpoint. However, this is usually used to free up resources or to unblock a process that would otherwise fail to make progress. Dropping a process descriptor would halt progress -- the opposite. Socket/pipe EOF can be used to do this too, but it is less common, and I'm not sure we should encourage this.

However, aside from this complaint, process descriptors are a good idea. It would be good to see them implemented in Linux.

Wednesday, 11 August 2010

My workflow with git-cl + Rietveld

Git's model of changes (which is shared by Mercurial, Bazaar and Monotone) makes it awkward to revise earlier patches. This can make things difficult when you are sending out multiple, dependent changes for code review.

Suppose I create changes A and B. B depends functionally on A, i.e. tests will not pass for B without A also being applied. There might or might not be a textual dependency (B might or might not modify lines of code modified by A).

Because code review is slow (high latency), I need to be able to send out changes A and B for review and still be able to continue working on further changes. But I also need to be able to revisit A to make changes to it based on review feedback, and then make sure B works with the revised A.

What I do is create separate branches for A and B, where B branches off of A. To revise change A, I "git checkout" its branch and add further commits. Later I can update B by checking it out and rebasing it onto the current tip of A. Uploading A or B to the review system or committing A or B upstream (to SVN) involves squashing their branch's commits into one commit. (This squashing means the branches contain micro-history that reviewers don't see and which is not kept after changes are pushed upstream.)

The review system in question is Rietveld, the code review web app used for Chromium and Native Client development. Rietveld does not have any special support for patch series -- it is only designed to handle one patch at a time, so it does not know about dependencies between changes. The tool for uploading changes from Git to Rietveld and later committing them to SVN is "git-cl" (part of depot_tools).

git-cl is intended to be used with one branch per change-under-review. However, it does not have much support for handling changes which depend on each other.

This workflow has a lot of problems:

  • When using git-cl on its own, I have to manually keep track that B is to be rebased on to A. When uploading B to Rietveld, I must do "git cl upload A". When updating B, I must first do "git rebase A". When diffing B, I have to do "git diff A". (I have written a tool to do this. It's not very good, but it's better than doing it manually.)
  • Rebasing B often produces conflicts if A has been squash-committed to SVN. That's because branch B's history still contains A's individual commits, and when A contained multiple patches, Git doesn't know how to skip over them once they reappear upstream as a single squashed commit.
  • Rebasing loses history. Undoing a rebase is not easy.
  • In the case where B doesn't depend on A, rebasing branch B so that it doesn't include the contents of branch A is a pain. (Sometimes I will stack B on top of A even when it doesn't depend on A, so that I can test the changes together. An alternative is to create a temporary branch and "git merge" A and B into it, but creating further branches adds to the complexity.)
  • If there is a conflict, I don't find out about it until I check out and update the affected branch.
  • This gets even more painful if I want to maintain changes that are not yet ready for committing or posting for review, and apply them alongside changes that are ready for review.

These are all reasons why I would not recommend this workflow to someone who is not already very familiar with Git.

The social solution to this problem would be for code reviews to happen faster, which would reduce the need to stack up changes. If all code reviews reached a conclusion within 24 hours, that would be an improvement. But I don't think that is going to happen.

The technical solution would be better patch management tools. I am increasingly thinking that Darcs' set-of-patches model would work better for this than Git's DAG-of-commits model. If I could set individual patches to be temporarily applied or unapplied to the working copy, and reorder and group patches, I think it would be easier to revisit changes that I have posted for review.

Friday, 6 August 2010

CVS's problems resurface in Git

Although modern version control systems have improved a lot on CVS, I get the feeling that there is a fundamental version control problem that the modern VCSes (Git, Mercurial, Bazaar, and I'll include Subversion too!) haven't solved. The curious thing is that CVS had sort of made some steps towards addressing it.

In CVS, history is stored per file. If you commit a change that crosses multiple files, CVS updates each file's history separately. This causes a bunch of problems:

  • CVS does not represent changesets or snapshots as first class objects. As a result, many operations involve visiting every file's history.

    Reconstructing a changeset involves searching all files' histories to match up the individual file changes. (This was just about possible, though I hear there are tricky corner cases. Later CVS added a commit ID field that presumably helped with this.)

    Creating a tag at the latest revision involves adding a tag to every file's history. Reconstructing a tag, or a time-based snapshot, involves visiting every file's history again.

  • CVS does not represent file renamings, so the standard history tools like "cvs log" and "cvs annotate" are not able to follow a file's history from before it was renamed.

In the DAG-based decentralised VCSes (Git, Mercurial, Monotone, Bazaar), history is stored per repository. The fundamental data structure for history is a Directed Acyclic Graph of commit objects. Each commit points to a snapshot of the entire file tree plus zero or more parent commits. This addresses CVS's problems:

  • Extracting changesets is easy because they are the same thing as commit objects.
  • Creating a tag is cheap and easy. Recording any change creates a commit object (a snapshot-with-history), so creating a tag is as simple as pointing to an already-existing commit object.

However, often it is not practical to put all the code that you're interested in into a single Git repository! (I pick on Git here because, of the DAG-based systems, it is the one I am most familiar with.) While it can be practical to do this with Subversion or CVS, it is less practical with the DAG-based decentralised VCSes:

  • In the DAG-based systems, branching is done at the level of a repository. You cannot branch and merge subdirectories of a repository independently: you cannot create a commit that only partially merges two parent commits.
  • Checking out a Git repository involves downloading not only the entire current revision, but the entire history. So this creates pressure against putting two partially-related projects together in the same repository, especially if one of the projects is huge.
  • Existing projects might already use separate repositories. It is usually not practical to combine those repositories into a single repository, because that would create a repo that is incompatible with the original repos. That would make it difficult to merge upstream changes. Patch sharing would become awkward because the filenames in patches would need fixing.

This all means that when you start projects, you have to decide how to split your code among repositories. Changing these decisions later is not at all straightforward.

The result of this is that CVS's problems have not really been solved: they have just been pushed up a level. The problems that occurred at the level of individual files now occur at the level of repositories:

  • The DAG-based systems don't represent changesets that cross repositories. They don't have a type of object for representing a snapshot across repositories.
  • Creating a tag across repositories would involve visiting every repository to add a tag to it.
  • There is no support for moving files between repositories while tracking the history of the file.

The funny thing is that since CVS hit this problem all the time, the CVS tools were better at dealing with multiple histories than Git.

To compare the two, imagine that instead of putting your project in a single Git repository, you put each one of the project's files in a separate Git repository. This would result in a history representation that is roughly equivalent to CVS's history representation. i.e. Every file has its own separate history graph.

  • To check in changes to multiple files, you have to "cd" to each file's repository directory, and "git commit" and "git push" the file change.
  • To update to a new upstream version, or to switch branch, you have to "cd" to each file's repository directory again to do "git pull/fetch/rebase/checkout" or whatever.
  • Correlating history across files must be done manually. You could run "git log" or "gitk" on two repositories and match up the timelines or commit messages by hand. I don't know of any tools for doing this.

In contrast, for CVS, "cvs commit" works across multiple files and (if I remember rightly) even across multiple working directories. "cvs update" works across multiple files.

While "cvs log" doesn't work across multiple files, there is a tool called "CVS Monitor" which reconstructs history and changesets across files.

Experience with CVS suggests that Git could be changed to handle the multiple-repository case better. "git commit", "git checkout" etc. could be changed to operate across multiple Git working copies. Maybe "git log" and "gitk" could gain options to interleave histories by timestamp.

Of course, that would lead to cross-repo support that is only as good as CVS's cross-file support. We might be able to apply a textual tag name across multiple Git repos with a single command just as a tag name can be applied across files with "cvs tag". But that doesn't give us an immutable tag object that spans repos.

My point is that the fundamental data structure used in the DAG-based systems doesn't solve CVS's problem, it just postpones it to a larger level of granularity. Some possible solutions to the problem are DEPS files (as used by Chromium), Git submodules, or Darcs-style set-of-patches repos. These all introduce new data structures. Do any of these solve the original problem? I am undecided -- this question will have to wait for another post. :-)

Wednesday, 5 May 2010

The trouble with Buildbot

The trouble with Buildbot is that it encourages you to put rules into a Buildbot-specific build configuration that is separate from the normal configuration files that you might use to build a project (configure scripts, makefiles, etc.).

This is not a big problem if your Buildbot configuration is simple and just consists of, say, "svn up", "./configure", "make", "make test", and never changes.

But it is a problem if your Buildbot configuration becomes non-trivial and ever has to be updated, because the Buildbot configuration cannot be tested outside of Buildbot.

The last time I had to maintain a Buildbot setup, it was necessary to try out configuration changes directly on the Buildbot master. This doesn't work out well if multiple people are responsible for maintaining the setup! Whoever makes a change has to remember to check it in to version control after they've got it working, which of course doesn't always happen. It's a bit ironic that Buildbot is supposed to support automated testing but doesn't follow best practices for testing itself.

There is a simple way around this though: Instead of putting those separate steps -- "./configure", "make", "make test" -- into the Buildbot config, put them into a script, check the script into version control, and have the Buildbot config run that script. Then the Buildbot config just consists of doing "svn up" and running the script. It is then possible to test changes to the script before checking it in. I've written scripts like this that go as far as debootstrapping a fresh Ubuntu chroot to run tests in, which ensures your package dependency list is up to date.

Unfortunately, Buildbot's logging facilities don't encourage having a minimal Buildbot config.

If you use a complicated Buildbot configuration with many Buildbot steps, Buildbot can display each step separately in its HTML-formatted logs. This means:

  • you can see progress;
  • you can see which steps have failed;
  • you'd be able to see how long the steps take if Buildbot actually displayed that.

Whereas if you have one big build script in a single Buildbot step, all the output goes into one big, flat, plain text log file.

I think the solution is to decouple the structured-logging functionality from the glorified-cron functionality that Buildbot provides. My implementation of this is build_log.py (used in Plash's build scripts), which I'll write more about later.

Saturday, 1 May 2010

Breakpoints in gdb using int3

Here is a useful trick I discovered recently while debugging some changes to the seccomp sandbox. To trigger a breakpoint on x86, just do:
__asm__("int3");

Then it is possible to inspect registers, memory, the call stack, etc. in gdb, without having to get gdb to set a breakpoint. The instruction triggers a SIGTRAP, so if the process is not running under gdb and has no signal handler for SIGTRAP, the process will die.

This technique seems to be fairly well known, although it's not mentioned in the gdb documentation. int3 is the instruction that gdb uses internally for setting breakpoints.

Sometimes it's easier to insert an int3 and rebuild than get gdb to set a breakpoint. For example, setting a gdb breakpoint on a line won't work in the middle of a chunk of inline assembly.
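
Related to that: as noted above, outside gdb the int3 will kill the process unless a SIGTRAP handler is installed. If you want the same binary to be runnable without a debugger attached, something like the following sketch works (the handler name and message are illustrative); under gdb the trap still stops at the breakpoint as usual, because gdb intercepts SIGTRAP before the handler runs.

/* Sketch: make an embedded int3 harmless when no debugger is attached.
 * On x86 the trap is reported with the instruction pointer already past
 * the int3, so simply returning from the handler continues execution. */
#include <signal.h>
#include <stdio.h>

static void trap_handler(int sig) {
    (void)sig;
    /* fprintf() is not async-signal-safe; acceptable for a debug hack. */
    fprintf(stderr, "hit int3 breakpoint (no debugger attached)\n");
}

int main(void) {
    signal(SIGTRAP, trap_handler);

    /* ... code under test ... */
    __asm__("int3");
    /* execution resumes here when the handler returns */

    return 0;
}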

My expectations of gdb are pretty low these days. When I try to use it to debug something low level, it often doesn't work, which is why I have been motivated to hack together my own debugging tools in the past. For example, if I run gdb on the glibc dynamic linker (ld.so) on Ubuntu Hardy or Karmic, it gives:

$ gdb /lib/ld-linux-x86-64.so.2 
...
(gdb) run
Starting program: /lib/ld-linux-x86-64.so.2 
Cannot access memory at address 0x21ec88
(gdb) 

So it's nice to find a case where I can get some useful information out of gdb.

Monday, 1 February 2010

How to build adb, the Android debugger

adb is the Android debugger (officially the "Android debug bridge" I think). It is a tool for getting shell access to an Android phone across a USB connection. It can also be used to copy files to and from the Android device and do port-forwarding. In short, it is similar to ssh, but is not ssh. (Why couldn't they have just used ssh?)

I have not been able to find any Debian/Ubuntu packages for adb. The reason why it has not been packaged becomes apparent if you try to build the thing. Android has a monolithic build system which wants to download a long list of Git repositories and build everything. If you follow the instructions, it will download 1.1 GB from Git and leave you with a source+build directory of 6 GB. It isn't really designed for building subsets of components, unlike, say, JHBuild. It's basically a huge makefile. It doesn't know about dependencies between components. However, it does have some idea about dependencies between output files.

Based on a build-of-all-Android, I figured out how to build a much smaller subset containing adb. This downloads a more manageable 11 MB and finishes with a source+build directory containing 40 MB. This is also preferable to downloading the pre-built Android SDK, which has a non-free licence.

Instructions:

$ sudo apt-get install build-essential libncurses5-dev
$ git clone git://android.git.kernel.org/platform/system/core.git system/core
$ git clone git://android.git.kernel.org/platform/build.git build
$ git clone git://android.git.kernel.org/platform/external/zlib.git external/zlib
$ git clone git://android.git.kernel.org/platform/bionic.git bionic
$ echo "include build/core/main.mk" >Makefile

Now edit build/core/main.mk and comment out the parts labelled

 # Check for the correct version of java
and
 # Check for the correct version of javac
Since adb doesn't need Java, these checks are unnecessary.

Also edit build/target/product/sdk.mk and comment out the "include" lines after

 # include available languages for TTS in the system image
I don't know exactly what this is about but it avoids having to download language files that aren't needed for adb.

Then building the adb target should work:

make out/host/linux-x86/bin/adb

If you try running "adb shell" you might get this:

ubuntu$ ./out/host/linux-x86/bin/adb shell
* daemon not running. starting it now *
* daemon started successfully *
error: insufficient permissions for device

So you probably need to do "adb start-server" as root first:

ubuntu$ sudo ./out/host/linux-x86/bin/adb kill-server
ubuntu$ sudo ./out/host/linux-x86/bin/adb start-server
* daemon not running. starting it now *
* daemon started successfully *
ubuntu$ ./out/host/linux-x86/bin/adb shell
$

For the record, here are the errors I got that motivated each step:
  • make: *** No rule to make target `external/svox/pico/lang/PicoLangItItInSystem.mk'.  Stop.
    
    - hence commenting out the picolang includes.
  • system/core/libzipfile/zipfile.c:6:18: error: zlib.h: No such file or directory
    
    - I'm guessing adb needs libzipfile which needs zlib.
  • system/core/libcutils/mspace.c:59:50: error: ../../../bionic/libc/bionic/dlmalloc.c: No such file or directory
    
    - This is why we need to download bionic (the C library used on Android), even though we aren't building any code to run on an Android device. This is the ugliest part and it illustrates why this is not a modular build system. The code does
    #include "../../../bionic/libc/bionic/dlmalloc.c"
    
    to #include a file from another module. It seems any part of the build can refer to any other part, via relative pathnames, so the modules cannot be built separately. I don't know whether this is an isolated case, but it makes it difficult to put adb into a Debian package.
  • host Executable: adb (out/host/linux-x86/obj/EXECUTABLES/adb_intermediates/adb)
    /usr/bin/ld: cannot find -lncurses
    
    - hence the ncurses-dev dependency above. However, this error is a little odd because if adb really depended on ncurses, it would fail when it tried to #include a header file. Linking with "-lncurses" is probably superfluous.

The instructions above will probably stop working as new versions are pushed to the public Git branches. (However, this happens infrequently because Android development is not done in the open.) For reproducibility, here are the Git commit IDs:

$ find -name "*.git" -exec sh -c 'echo "`git --git-dir={} rev-parse HEAD` {}"' ';'
91a54c11cbfbe3adc1df2f523c75ad76affb0ae9 ./system/core/.git
95604529ec25fe7923ba88312c590f38aa5e3d9e ./bionic/.git
890bf930c34d855a6fbb4d09463c1541e90381d0 ./external/zlib/.git
b7c844e7cf05b4cea629178bfa793321391d21de ./build/.git

It looks like the current version is Android 1.6 (Donut):

$ find -name "*.git" -exec sh -c 'echo "`git --git-dir={} describe` {}"' ';'
android-1.6_r1-80-g91a54c1 ./system/core/.git
android-1.6_r1-43-g9560452 ./bionic/.git
android-1.6_r1-7-g890bf93 ./external/zlib/.git
android-sdk-1.6-docs_r1-65-gb7c844e ./build/.git