Thursday, 10 February 2011

Cookies versus the Chrome sandbox

Although Chrome's sandbox does not protect one web site from another in general, it can provide such protection in some cases. Those cases are ones in which HTTP cookies are either reduced in scope or not used at all. One lesson we could draw from this is that cookies reduce the usefulness of Chrome's sandbox.

The scenario we are exploring supposes that there is a vulnerability in Chrome's renderer process, and that the vulnerability lets a malicious site take control of the renderer process. This means that all the restrictions that are normally enforced on the malicious site by the renderer process are stripped away, and all we are left with are the restrictions enforced on the renderer process by the Chrome browser process and the Chrome sandbox.

In my previous blog post, I explained how an attacker site, evil.com, that manages to exploit the renderer process could steal the login cookies from another site, mail.com, and so gain access to the user's e-mail.

The attack is made possible by the combination of two features:

  1. cookies
  2. frames

Chrome currently runs a framed page in the same renderer process as the parent page. HTML standards allow framed pages to access cookies, so the browser process has to give the renderer process access to the cookies for both pages.

Because this problem arises from the interaction of these features, one site is not always vulnerable to other sites. There should be a couple of ways that users and sites can mitigate the problem, without changing Chrome. Firstly, the user can change how cookies are scoped within the browser by setting up multiple profiles. Secondly, a site can skirt around the problem by not using cookies at all. We discuss these possibilities below.

  • Use multiple profiles: As a user, you can create multiple browser profiles, and access mail.com and evil.com in separate profiles.

    Chrome does not make this very easy at the moment. It provides a command line option (--user-data-dir) for creating more profiles, but this feature is not available through the GUI. Chrome's GUI provides just two profiles: one profile (persistent) for normal windows, plus an extra profile (non-persistent) for windows in "incognito" mode.

    So, you could log in to mail.com in a normal window and view evil.com in an incognito window, or vice versa. This is safer because cookies registered by a web site in one profile are not accessible to sites visited in another profile. Each profile has a separate pool of cookies. This feature of browser profiles means you can log into one mail.com account in incognito mode and a different mail.com account in normal mode.

    It would be interesting to see if this profile separation could be automated.

  • Use web-keys instead of cookies: The developers of mail.com could defend against evil.com by not using cookies to track user logins. Instead mail.com could use web-keys. Logging in to mail.com would take you to a URL containing an unguessable token. Such a URL is known as a "web-key". (For technical reasons, the unguessable token should be in the "#" fragment part of the URL.) A minimal sketch of minting such a token appears after this list.

    This is safer because even if evil.com compromises the renderer process, the Chrome browser process generally does not give the renderer process a way to enumerate other tabs, discover other tabs' URLs, or enumerate the user's browsing history and bookmarks.

    Using web-keys has been proposed before to address a different (but related) web security problem, clickjacking. (See Tyler Close's essay, The Confused Deputy Rides Again.)

    Using web-keys would change the user interface for mail.com a little. Cookies are the mechanism by which entering "mail.com" in the URL bar can take you directly to the inbox of the e-mail account you are logged in to. Whenever the browser sees "mail.com" as the domain name it automatically adds the cookies for "mail.com" to the HTTP request. (This automatic attachment of credentials makes this a type of ambient authority.) This mechanism adds some convenience for the user, but it is also a means by which evil.com can attack mail.com, because whenever evil.com links to mail.com, the cookies get added to the request too. The browser does not distinguish between a URL entered by the user in the address bar and a URL provided by another site like evil.com.

    So if mail.com were changed to remove its use of cookies, you would lose the small convenience of being able to get to your inbox directly by typing "mail.com" without logging in again. Is there a way to get this convenience back? An alternative to using cookies to record logins is to use bookmarks. If mail.com uses web-key URLs, you could bookmark the address of the inbox page. To get to your inbox without logging in again, you would select the bookmark.

    These days the Chrome address bar accepts more than just URLs (which is why the address bar is actually called the "omnibar"), so you could imagine that entering "mail.com" would search your bookmarks and jump to your bookmarked inbox page rather than being treated as the URL "http://mail.com".
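
To make the web-key idea concrete, here is the minimal sketch mentioned above for minting such a token: 16 bytes from a strong random source, hex-encoded into the URL fragment. This is illustrative only -- the URL, token length and use of C are my assumptions (a real mail.com would mint tokens server-side); the essential property is just that the token is unguessable and sits after the "#":

#include <stdio.h>

int main(void) {
  unsigned char buf[16];
  FILE *f = fopen("/dev/urandom", "rb");
  if (f == NULL || fread(buf, 1, sizeof(buf), f) != sizeof(buf))
    return 1;
  fclose(f);
  /* The "#" fragment is not sent to the server in HTTP requests,
     which is the technical reason alluded to above. */
  printf("https://mail.com/inbox#");
  for (size_t i = 0; i < sizeof(buf); i++)
    printf("%02x", buf[i]);
  printf("\n");
  return 0;
}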

Conclusions

There are a couple of possible conclusions we could draw from this:

  • Cookies weaken the Chrome sandbox.
  • Frames weaken the Chrome sandbox.

If we blame cookies, we should look critically at other browser features that behave similarly to cookies by providing ambient authority. For example, the new LocalFileSystem feature (an extension of the File API) provides local storage. The file store it provides is per-origin and so is based on ambient authority. If mail.com uses this to cache e-mail, and evil.com exploits the renderer process, then evil.com will be able to read and modify the cached e-mail. There are other local storage APIs (IndexedDB and Web Storage), but they are based on ambient authority too. From this perspective, the situation is getting worse.

If we blame frames, this suggests that browsers should fix the problem by implementing site isolation. Site isolation means that the browser would put a framed page in a different renderer process from a parent page. Microsoft's experimental Gazelle browser implements site isolation but breaks compatibility with the web. It remains to be seen whether a web browser can implement site isolation while retaining compatibility and good performance.

Either way, concerned users and web app authors need to know how browser features are implemented if they are to judge how much security the browser can provide. That's not easy, because the web platform is so complicated!

Tuesday, 21 December 2010

A common misconception about the Chrome sandbox

A common misconception about the Chrome web browser is that its sandbox protects one web site from another.

For example, suppose you are logged into your e-mail account on mail.com in one tab, and have evil.com open in another tab. Suppose evil.com finds an exploit in the renderer process, such as a memory safety bug, that lets it run arbitrary code there. Can evil.com get hold of your HTTP cookies for mail.com, and thereby access your e-mail account?

Unfortunately, the answer is yes.

The reason is that mail.com and evil.com can be assigned to the same renderer process. The browser does not do this only to save memory: evil.com can force it to happen by opening an iframe on mail.com. With mail.com's code running in the same exploited renderer process, evil.com can take it over and read the cookies for your mail.com account and use them for its own ends.

There are a couple of reasons why the browser puts a framed site in the same renderer process as the parent site. Firstly, if the sites were handled by separate processes, the browser would have to do costly compositing across renderer processes to make the child frame appear inside the parent frame. Secondly, in some cases the DOM allows JavaScript objects in one frame to obtain references to DOM objects in other frames, even across origins, and it is easier for this to be managed within one renderer process.

I don't say this to pick on Chrome, of course. It is better to have the sandbox than not to have it.

Chrome has never claimed that the sandbox protects one site against another. In the tech report "The Security Architecture of the Chromium Browser" (Barth, Jackson, Reis and the Chrome Team; 2008), "Origin isolation" is specifically listed under "Out-of-scope goals". They state that "an attacker who compromises the rendering engine can act on behalf of any web site".

There are a couple of ways that web sites and users can mitigate this problem, which I'll discuss in another post. However, in the absence of those defences, what Chrome's multi-process architecture actually gives you is the following:

  • Robustness if a renderer crashes. Having multiple renderer processes means that a crash of one takes down only a limited number of tabs, and the browser and the other renderers will survive. It also helps memory management.

    But we can get this without sandboxing the renderers.

  • Protection of the rest of the user's system from vulnerabilities in the renderer process. For example, the sandboxed renderer cannot read any of the user's files, except for those the user has granted through a "File Upload" file chooser.

    But we can get this by sandboxing the whole browser (including any subprocesses), without needing to have the browser separated from the renderer.

    For example, since 2007 I have been running Firefox under Plash (a sandbox), on Linux.

    In principle, such a sandbox should be more effective at protecting applications and files outside the browser than the Chrome sandbox, because the sandbox covers all of the browser, including its network stack and the so-called browser "chrome" (this means the parts of the GUI outside of the DOM).

    In practice, Plash is not complete as a sandbox for GUI apps because it does not limit access to the X Window System, so apps can do things that X allows such as screen scraping other apps and sending them input.

The main reason Chrome was developed to sandbox its renderer processes but not the whole browser is that this is easier to implement with sandboxing technologies that are easily deployable today. Ideally, though, the whole browser would be sandboxed. One of the few components that would stay unsandboxed, with access to all the user's files, would be the "File Open" dialog box for choosing files to upload.

Saturday, 18 December 2010

When printf debugging is a luxury

Inserting printf() calls is often considered a primitive fallback for when better debugging tools, such as stack backtraces with source line numbers, are not available.

But there are some situations in low-level programming where most libc calls don't work and so even printf() and assert() are unavailable luxuries. This can happen:

  • when libc is not properly initialised yet;
  • when we are writing code that is called by libc and cannot re-enter libc code;
  • when we are in a signal handler;
  • when only limited stack space is available;
  • when we cannot allocate memory for some reason; or
  • when we are not even linked to libc.

Here's a fragment of code that has come in handy in these situations. It provides a simple assert() implementation:

#include <string.h>
#include <unistd.h>

static void debug(const char *msg) {
  write(2, msg, strlen(msg));
}

static void die(const char *msg) {
  debug(msg);
  _exit(1);
}

#define TO_STRING_1(x) #x
#define TO_STRING(x) TO_STRING_1(x)

/* do { ... } while (0) makes the macro a single statement, so it is
   safe even in unbraced if/else contexts. */
#define assert(expr)                                                          \
  do {                                                                        \
    if (!(expr))                                                              \
      die("assertion failed at " __FILE__ ":" TO_STRING(__LINE__)             \
          ": " #expr "\n");                                                   \
  } while (0)

By using preprocessor trickery to construct the assertion failure string at compile time, it avoids having to format the string at runtime. So it does not need to allocate memory, and it doesn't need to do multiple write() calls (which can become interleaved with other output in the multi-threaded case).
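
For example, a call site might look like this (the function and the mmap() use here are hypothetical, just to show the macro in context):

#include <sys/mman.h>

void map_scratch_page(void) {
  /* On failure, the assert() macro above writes
     "assertion failed at <file>:<line>: addr != MAP_FAILED"
     to stderr with a single write() call and exits. */
  void *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  assert(addr != MAP_FAILED);
}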

Sometimes even libc's write() is a luxury. In some builds of GNU libc on Linux, glibc's syscall wrappers use the TLS register (%gs on i386) to fetch the address of a routine for making syscalls.

However, if %gs is not set up properly for some reason, this will fail. For example, for Native Client's i386 sandbox, %gs is set to a different value whenever sandboxed code is running, and %gs stays in this state if sandboxed code faults and triggers a signal handler. In Chromium's seccomp-sandbox, %gs is set to zero in the trusted thread.

In those situations we have to bypass libc and do the system calls ourselves. The following snippet comes from reference_trusted_thread.cc. The sys_*() functions are defined by linux_syscall_support.h, which provides wrappers for many Linux syscalls:

#include <string.h>  /* for strlen() */
#include "linux_syscall_support.h"

void die(const char *msg) {
  sys_write(2, msg, strlen(msg));
  sys_exit_group(1);
}
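
And if linux_syscall_support.h is itself unavailable, the syscall instruction can be issued directly with inline assembly. Here is a minimal sketch of my own for x86-64 Linux (the i386 convention, with int $0x80 and different registers, differs); the syscall numbers are the standard x86-64 ones:

/* Raw x86-64 Linux syscalls: no libc, no %gs dependence. */
static long raw_syscall3(long nr, long a1, long a2, long a3) {
  long ret;
  /* Convention: number in %rax, args in %rdi, %rsi, %rdx;
     the syscall instruction clobbers %rcx and %r11. */
  asm volatile("syscall"
               : "=a"(ret)
               : "0"(nr), "D"(a1), "S"(a2), "d"(a3)
               : "rcx", "r11", "memory");
  return ret;
}

static void raw_die(const char *msg, unsigned long len) {
  raw_syscall3(1, 2, (long)msg, (long)len);  /* __NR_write == 1 */
  raw_syscall3(60, 1, 0, 0);                 /* __NR_exit == 60 */
}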

Thursday, 4 November 2010

An introduction to FreeBSD-Capsicum

In my last blog post, I described one of the features in FreeBSD-Capsicum: process descriptors. Now it's time for an overview of Capsicum.

Capsicum is a set of new features for FreeBSD that adds better support for sandboxing, using a capability model in which the capabilities are Unix file descriptors (FDs).

Capsicum takes a fairly conservative approach, in that it does not make operations on file descriptors virtualisable. This approach has some limitations -- we do not get the advantages of having purely message-passing syscalls. However, it does mean that the new features are orthogonal.

The main new features are:

  • A per-process "capability mode", which is turned on via a new cap_enter() syscall.

    This mode disables any system call that provides ambient authority. So it disables system calls that use global namespaces, including the file namespace (e.g. open()), the PID namespace (e.g. kill()) and the network address namespace (e.g. connect()).

    This is not just a syscall filter, though. Some system calls optionally use a global namespace. For example, sendmsg() and sendto() optionally take a socket address. For openat(), an absolute pathname can be passed, or AT_FDCWD can be given in place of a directory FD. Capability mode disables those cases.

    Furthermore, capability mode disallows the use of ".." (parent directory) in filenames for openat() and the other *at() calls. This changes directory FDs to be limited-authority objects that convey access to a specific directory and not the whole filesystem. (It is interesting that this appears to be a property of the process, via capability mode, rather than of the directory FD itself.)

    Capability mode is inherited across fork and exec. (A sketch of entering capability mode appears after this list.)

  • Finer-grained permissions for file descriptors. Each FD gets a large set of permission bits. A less-permissive copy of an FD can be created with cap_new(). For example, you can have read-only directory FDs, or non-seekable FDs for files.
  • Process descriptors. Capsicum doesn't allow kill() inside the sandbox because kill() uses a global namespace (the PID namespace). So Capsicum introduces process descriptors (a new FD type) as a replacement for process IDs, and adds pdfork(), pdwait() and pdkill() as replacements for fork(), wait() and kill().
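
Here is the sketch promised above: a process opens the directories it needs, enters capability mode, and from then on can use only the FDs it already holds. This is written against the interface as described in this post; the header name and the ECAPMODE error code follow the FreeBSD implementation, and the directory path is illustrative:

#include <sys/capsicum.h>  /* cap_enter(); <sys/capability.h> in the prototype */
#include <fcntl.h>
#include <stdio.h>

int main(void) {
  /* Acquire authority first: FDs for everything we will need. */
  int dir = open("/var/myapp/data", O_RDONLY | O_DIRECTORY);
  if (dir < 0) { perror("open"); return 1; }

  if (cap_enter() < 0) { perror("cap_enter"); return 1; }

  /* Still works: openat() relative to an FD we already hold. */
  int fd = openat(dir, "config", O_RDONLY);

  /* Disabled: open() uses the global file namespace, so this fails
     with ECAPMODE.  So would openat(dir, "../x", O_RDONLY). */
  int bad = open("/etc/passwd", O_RDONLY);

  (void)fd; (void)bad;
  return 0;
}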

Plus there are a couple of smaller features:

  • Message-based sockets. The Capsicum guys implemented Linux's SOCK_SEQPACKET interface for FreeBSD.
  • An fexecve() system call which takes a file descriptor for an executable. This replaces execve(), which is disabled in capability mode because execve() takes a filename.

    Capsicum's fexecve() ignores the implicit filename that is embedded in the executable's PT_INTERP field, so it is only good for loading the dynamic linker directly or for loading other statically linked executables.

Currently, the only programs that run under Capsicum are those that have been ported specially:

  • The Capsicum guys ported Chromium, and it works much the same way as on Linux. On both systems, Chromium's renderer process runs sandboxed, but the browser process does not. On both systems, Chromium needs to be able to turn on sandboxing after the process has started up, because it relies on legacy libraries that use open() during startup.
  • Some Unix utilities, including gzip and dhclient, have been extended to use sandboxing internally (privilege separation). Like Chromium, gzip can open files and then switch to capability mode.

However, it should be possible to run legacy Unix programs under Capsicum by porting Plash.

At first glance, it looks like Plash would have to do the same tricks under FreeBSD-Capsicum as it does under Linux to run legacy programs. Under Linux, Plash uses a modified version of glibc in order to intercept its system calls and convert them to system calls that work in the sandbox. That's because the Linux kernel doesn't provide any help with intercepting the system calls. The situation is similar under FreeBSD -- Capsicum does not add any extensions for bouncing syscalls back to a user space handler.

However, there are two aspects of FreeBSD that should make Plash easier to implement there than on Linux:

  • FreeBSD's libc is friendlier towards overriding its functions. On both systems, it is possible to override (for example) open() via an LD_PRELOAD library that defines its own "open" symbol. But with glibc on Linux, this doesn't work for libc's internal calls to open(), such as from fopen(). For a small gain in efficiency, these calls don't go through PLT entries and so cannot be intercepted.

    FreeBSD's libc doesn't use this optimisation and so it allows the internal calls to be intercepted too. (A sketch of such an interposer appears after this list.)

  • FreeBSD's dynamic linker and libc are not tightly coupled, so it is possible to change the dynamic linker to open its libraries via IPC calls without having to rebuild libc in lockstep.

    In contrast, Linux glibc's ld.so and libc.so are built together, share some data structures (such as TLS), and cannot be replaced independently.
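
Here is the interposer sketch promised above: a minimal LD_PRELOAD library that overrides open(). What a Plash-like sandbox would actually do with the path (rewrite it, or forward it over IPC) is elided to a comment, and -- per the first point -- on Linux/glibc this misses libc-internal calls such as the one inside fopen():

/* Build (illustrative): gcc -shared -fPIC -o interpose.so interpose.c -ldl
   Run:                  LD_PRELOAD=./interpose.so some-program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>

int open(const char *path, int flags, ...) {
  mode_t mode = 0;
  if (flags & O_CREAT) {
    /* open() is variadic: the mode argument is present only with O_CREAT. */
    va_list ap;
    va_start(ap, flags);
    mode = (mode_t)va_arg(ap, int);
    va_end(ap);
  }
  /* ...a sandbox would validate or rewrite 'path', or forward the
     request over IPC, here... */
  int (*real_open)(const char *, int, ...) =
      (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
  return real_open(path, flags, mode);
}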

Saturday, 23 October 2010

Process descriptors in FreeBSD-Capsicum

Capsicum is a set of new features for FreeBSD that adds better support for sandboxing, using a capability model in which the capabilities are Unix file descriptors (FDs). The features Capsicum adds are orthogonal, which is nice. One of the new features is process descriptors.

Capsicum adds a replacement for fork() called pdfork(), which returns a process descriptor (a new type of FD) rather than a PID. Similarly, there are replacements for wait() and kill() -- pdwait() and pdkill() -- which take FDs as arguments instead of PIDs.

The reason for the new interface is that kill() is not safe to allow in Capsicum's sandbox, because it provides ambient authority: it looks up its PID argument in a global namespace.

But even if you ignore sandboxing issues, this new interface is a significant improvement on POSIX process management:

  1. It allows waiting on a process to be delegated to another process. In contrast, with wait()/waitpid(), a process's exit status can only be read by the process's parent.
  2. Process descriptors can be used with poll() (as the sketch after this list shows). This avoids the awkwardness of having to use SIGCHLD, which doesn't work well if multiple libraries within the same process want to wait() for child processes.
  3. It gets rid of the race condition associated with kill(). Sending a signal to a PID is dodgy because the original process with this PID could have exited, and the kernel could have recycled the PID for an unrelated process, especially on a system where processes are spawned and exit frequently.

    kill() is only really safe when used by a parent process on its child, and only when the parent makes sure to use it before wait() has returned the child's exit status. pdkill() gets rid of this problem.

  4. In future, process descriptors can be extended to provide access to the process's internal state for debugging purposes, e.g. for reading registers and memory, or modifying memory mappings or the FD table. This would be an improvement on Linux's ptrace() interface.
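
Here is the sketch referred to in point 2: launching a child with pdfork() and waiting for it with poll(). The header name follows the FreeBSD implementation, and I am assuming its behaviour that a process descriptor reports POLLHUP when the process exits:

#include <sys/procdesc.h>  /* pdfork(), pdkill() */
#include <poll.h>
#include <unistd.h>

int main(void) {
  int pd;
  pid_t pid = pdfork(&pd, 0);
  if (pid < 0)
    return 1;
  if (pid == 0) {
    /* Child: do some work, then exit. */
    _exit(0);
  }
  /* Parent: the descriptor, not the PID, names the child, so there is
     no PID-reuse race -- pdkill(pd, SIGTERM) would signal exactly this
     child.  Wait for exit without SIGCHLD by polling the descriptor. */
  struct pollfd pfd = { .fd = pd, .events = POLLHUP };
  poll(&pfd, 1, -1);
  close(pd);  /* drop our reference to the (already-exited) child */
  return 0;
}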

However, there is one aspect of Capsicum's process descriptors that I think was a mistake: Dropping the process descriptor for a process causes the kernel to kill it. (By this I mean that if there are no more references to the process descriptor, because they have been close()'d or because the processes holding them have exited, the kernel will terminate the process.)

The usual principle of garbage collection in programming languages is that GC should not affect the observable behaviour of the program (except for resource usage). Capsicum's kill-on-close behaviour violates that principle.

Kill-on-close is based on the assumption that the launcher of a subprocess always wants to check the subprocess's exit status. But this is not always the case. Exit status is just one communications channel, and not one we always care about. It's common to launch a process to communicate with it via IPC without wanting to have to wait() for it. (Double fork()ing is a way to do this in POSIX without leaving a zombie process.) Many processes are fire-and-forget.

The situation is analogous for threads. pthreads provides pthread_detach() and PTHREAD_CREATE_DETACHED as a way to launch a thread without having to pthread_join() it.

In Python, when you create a thread, you get a Thread object back, but if the Thread object is GC'd the thread won't be killed. In fact, finalisation of a thread object is implemented using pthread_detach() on Unix.

I know of one language implementation that (as I recall) will GC threads: Mozart/Oz. However, this only happens when a thread can have no further observable effects. In Oz, if a thread blocks waiting for the resolution of a logic variable that can never be resolved (because no other thread holds a reference to it), then the thread can be GC'd. So deadlocked threads can be GC'd just fine. Similarly, if a thread holds no communications channels to the outside world, it can be GC'd safely.

Admittedly, Unix already violates this GC principle by the finalisation of pipe FDs and socket FDs, because dropping one endpoint of the socket/pipe pair is visible as an EOF condition on the other endpoint. However, this is usually used to free up resources or to unblock a process that would otherwise fail to make progress. Dropping a process descriptor would halt progress -- the opposite. Socket/pipe EOF can be used to do this too, but it is less common, and I'm not sure we should encourage this.

However, aside from this complaint, process descriptors are a good idea. It would be good to see them implemented in Linux.

Wednesday, 11 August 2010

My workflow with git-cl + Rietveld

Git's model of changes (which is shared by Mercurial, Bazaar and Monotone) makes it awkward to revise earlier patches. This can make things difficult when you are sending out multiple, dependent changes for code review.

Suppose I create changes A and B. B depends functionally on A, i.e. tests will not pass for B without A also being applied. There might or might not be a textual dependency (B might or might not modify lines of code modified by A).

Because code review is slow (high latency), I need to be able to send out changes A and B for review and still be able to continue working on further changes. But I also need to be able to revisit A to make changes to it based on review feedback, and then make sure B works with the revised A.

What I do is create separate branches for A and B, where B branches off of A. To revise change A, I "git checkout" its branch and add further commits. Later I can update B by checking it out and rebasing it onto the current tip of A. Uploading A or B to the review system or committing A or B upstream (to SVN) involves squashing their branch's commits into one commit. (This squashing means the branches contain micro-history that reviewers don't see and which is not kept after changes are pushed upstream.)

The review system in question is Rietveld, the code review web app used for Chromium and Native Client development. Rietveld does not have any special support for patch series -- it is only designed to handle one patch at a time, so it does not know about dependencies between changes. The tool for uploading changes from Git to Rietveld and later committing them to SVN is "git-cl" (part of depot_tools).

git-cl is intended to be used with one branch per change-under-review. However, it does not have much support for handling changes which depend on each other.

This workflow has a lot of problems:

  • When using git-cl on its own, I have to manually keep track that B is to be rebased on to A. When uploading B to Rietveld, I must do "git cl upload A". When updating B, I must first do "git rebase A". When diffing B, I have to do "git diff A". (I have written a tool to do this. It's not very good, but it's better than doing it manually.)
  • Rebasing B often produces conflicts if A has been squash-committed to SVN. That's because if branch A contained multiple patches, Git doesn't know how to skip over patches from A that are in branch B.
  • Rebasing loses history. Undoing a rebase is not easy.
  • In the case where B doesn't depend on A, rebasing branch B so that it doesn't include the contents of branch A is a pain. (Sometimes I will stack B on top of A even when it doesn't depend on A, so that I can test the changes together. An alternative is to create a temporary branch and "git merge" A and B into it, but creating further branches adds to the complexity.)
  • If there is a conflict, I don't find out about it until I check out and update the affected branch.
  • This gets even more painful if I want to maintain changes that are not yet ready for committing or posting for review, and apply them alongside changes that are ready for review.

These are all reasons why I would not recommend this workflow to someone who is not already very familiar with Git.

The social solution to this problem would be for code reviews to happen faster, which would reduce the need to stack up changes. If all code reviews reached a conclusion within 24 hours, that would be an improvement. But I don't think that is going to happen.

The technical solution would be better patch management tools. I am increasingly thinking that Darcs' set-of-patches model would work better for this than Git's DAG-of-commits model. If I could set individual patches to be temporarily applied or unapplied to the working copy, and reorder and group patches, I think it would be easier to revisit changes that I have posted for review.

Friday, 6 August 2010

CVS's problems resurface in Git

Although modern version control systems have improved a lot on CVS, I get the feeling that there is a fundamental version control problem that the modern VCSes (Git, Mercurial, Bazaar, and I'll include Subversion too!) haven't solved. The curious thing is that CVS had, in a way, taken some steps towards addressing it.

In CVS, history is stored per file. If you commit a change that crosses multiple files, CVS updates each file's history separately. This causes a bunch of problems:

  • CVS does not represent changesets or snapshots as first class objects. As a result, many operations involve visiting every file's history.

    Reconstructing a changeset involves searching all files' histories to match up the individual file changes. (This was just about possible, though I hear there are tricky corner cases. Later CVS added a commit ID field that presumably helped with this.)

    Creating a tag at the latest revision involves adding a tag to every file's history. Reconstructing a tag, or a time-based snapshot, involves visiting every file's history again.

  • CVS does not represent file renamings, so the standard history tools like "cvs log" and "cvs annotate" are not able to follow a file's history from before it was renamed.

In the DAG-based decentralised VCSes (Git, Mercurial, Monotone, Bazaar), history is stored per repository. The fundamental data structure for history is a Directed Acyclic Graph of commit objects. Each commit points to a snapshot of the entire file tree plus zero or more parent commits. This addresses CVS's problems:

  • Extracting changesets is easy because they are the same thing as commit objects.
  • Creating a tag is cheap and easy. Recording any change creates a commit object (a snapshot-with-history), so creating a tag is as simple as pointing to an already-existing commit object.

However, often it is not practical to put all the code that you're interested in into a single Git repository! (I pick on Git here because, of the DAG-based systems, it is the one I am most familiar with.) While it can be practical to do this with Subversion or CVS, it is less practical with the DAG-based decentralised VCSes:

  • In the DAG-based systems, branching is done at the level of a repository. You cannot branch and merge subdirectories of a repository independently: you cannot create a commit that only partially merges two parent commits.
  • Checking out a Git repository involves downloading not only the entire current revision, but the entire history. So this creates pressure against putting two partially-related projects together in the same repository, especially if one of the projects is huge.
  • Existing projects might already use separate repositories. It is usually not practical to combine those repositories into a single repository, because that would create a repo that is incompatible with the original repos. That would make it difficult to merge upstream changes. Patch sharing would become awkward because the filenames in patches would need fixing.

This all means that when you start projects, you have to decide how to split your code among repositories. Changing these decisions later is not at all straightforward.

The result of this is that CVS's problems have not really been solved: they have just been pushed up a level. The problems that occurred at the level of individual files now occur at the level of repositories:

  • The DAG-based systems don't represent changesets that cross repositories. They don't have a type of object for representing a snapshot across repositories.
  • Creating a tag across repositories would involve visiting every repository to add a tag to it.
  • There is no support for moving files between repositories while tracking the history of the file.

The funny thing is that since CVS hit this problem all the time, the CVS tools were better at dealing with multiple histories than Git.

To compare the two, imagine that instead of putting your project in a single Git repository, you put each one of the project's files in a separate Git repository. This would result in a history representation that is roughly equivalent to CVS's history representation, i.e. every file has its own separate history graph.

  • To check in changes to multiple files, you have to "cd" to each file's repository directory, and "git commit" and "git push" the file change.
  • To update to a new upstream version, or to switch branch, you have to "cd" to each file's repository directory again to do "git pull/fetch/rebase/checkout" or whatever.
  • Correlating history across files must be done manually. You could run "git log" or "gitk" on two repositories and match up the timelines or commit messages by hand. I don't know of any tools for doing this.

In contrast, for CVS, "cvs commit" works across multiple files and (if I remember rightly) even across multiple working directories. "cvs update" works across multiple files.

While "cvs log" doesn't work across multiple files, there is a tool called "CVS Monitor" which reconstructs history and changesets across files.

Experience with CVS suggests that Git could be changed to handle the multiple-repository case better. "git commit", "git checkout" etc. could be changed to operate across multiple Git working copies. Maybe "git log" and "gitk" could gain options to interleave histories by timestamp.

Of course, that would lead to cross-repo support that is only as good as CVS's cross-file support. We might be able to apply a textual tag name across multiple Git repos with a single command, just as a tag name can be applied across files with "cvs tag". But that doesn't give us an immutable tag object that spans repos.

My point is that the fundamental data structure used in the DAG-based systems doesn't solve CVS's problem, it just postpones it to a larger level of granularity. Some possible solutions to the problem are DEPS files (as used by Chromium), Git submodules, or Darcs-style set-of-patches repos. These all introduce new data structures. Do any of these solve the original problem? I am undecided -- this question will have to wait for another post. :-)