For example, suppose you are logged into your e-mail account on mail.com in one tab, and have evil.com open in another tab. Suppose evil.com finds an exploit in the renderer process, such as a memory safety bug, that lets it run arbitrary code there. Can evil.com get hold of your HTTP cookies for mail.com, and thereby access your e-mail account?
Unfortunately, the answer is yes.
The reason is that mail.com and evil.com can be assigned to the same renderer process. The browser does not do this only to save memory: evil.com can cause it to happen by opening an iframe on mail.com. With mail.com's code running in the same exploited renderer process, evil.com can take it over, read the cookies for your mail.com account, and use them for its own ends.
There are a couple of reasons why the browser puts a framed site in the same renderer process as the parent site. Firstly, if the sites were handled by separate processes, the browser would have to do costly compositing across renderer processes to make the child frame appear inside the parent frame. Secondly, in some cases the DOM allows JavaScript objects in one frame to obtain references to DOM objects in other frames, even across origins, and this is easier to manage within a single renderer process.
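To make this concrete, here is a rough sketch, in TypeScript-flavoured DOM code, of the cross-frame references involved. It is not an exploit, just the standard DOM API: a hypothetical page on evil.com embeds mail.com, which (under the process model described above) pulls mail.com into the same renderer process, and the DOM hands back a cross-origin window reference for the framed site.

```typescript
// Hypothetical page script on evil.com.  Embedding mail.com pulls it into
// this page's renderer process; the DOM then exposes a cross-origin
// window reference (a WindowProxy) for the framed site.
const frame = document.createElement("iframe");
frame.src = "https://mail.com/";
document.body.appendChild(frame);

frame.addEventListener("load", () => {
  const mailWindow = frame.contentWindow!;   // cross-origin WindowProxy

  // A few accesses are allowed even across origins:
  console.log(mailWindow.length);            // number of subframes
  mailWindow.postMessage("hello", "https://mail.com");

  // Reading the framed document (and hence its cookies) is exactly what
  // the same-origin policy forbids; in a correct renderer this throws.
  try {
    console.log(mailWindow.document.cookie);
  } catch (e) {
    console.log("blocked by the same-origin policy:", e);
  }
});
```

The crucial point is that the check in the last step is enforced in software by the renderer process itself. A memory safety exploit in that process removes the check, and the OS-level sandbox around the renderer cannot reinstate it, because both sites' objects already live in the same address space.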
I don't say this to pick on Chrome, of course. It is better to have the sandbox than not to have it.
Chrome has never claimed that the sandbox protects one site against another. In the tech report "The Security Architecture of the Chromium Browser" (Barth, Jackson, Reis and the Chrome Team; 2008), "Origin isolation" is specifically listed under "Out-of-scope goals". They state that "an attacker who compromises the rendering engine can act on behalf of any web site".
There are a couple of ways that web sites and users can mitigate this problem, which I'll discuss in another post. However, in the absence of those defences, what Chrome's multi-process architecture actually gives you is the following:
- Robustness if a renderer crashes. Having multiple renderer processes means that a crash of one takes down only a limited number of tabs; the browser and the other renderers survive. It also helps memory management, since a renderer's memory can be reclaimed wholesale by the OS when its tabs are closed.
But we can get this without sandboxing the renderers.
- Protection of the rest of the user's system from vulnerabilities in the renderer process. For example, the sandboxed renderer cannot read any of the user's files, except for those the user has granted it access to through a "File Upload" file chooser.
But we can get this by sandboxing the whole browser (including any subprocesses), without needing to have the browser separated from the renderer.
For example, since 2007 I have been running Firefox on Linux under Plash (a sandbox).
In principle, such a sandbox should be more effective than the Chrome sandbox at protecting applications and files outside the browser, because it covers all of the browser, including its network stack and the so-called browser "chrome" (the parts of the GUI outside of the DOM).
In practice, Plash is not a complete sandbox for GUI apps, because it does not limit access to the X Window System, so apps can do anything X allows, such as screen-scraping other apps and sending them input.
The main reason Chrome was developed to sandbox its renderer processes but not the whole browser is that this is easier to implement with the sandboxing technologies that are easily deployable today. Ideally, though, the whole browser would be sandboxed. One of the few components that would stay unsandboxed, and so have access to all the user's files, would be the "File Open" dialog box for choosing files to upload.
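That arrangement mirrors what the web platform already exposes to page script. As a minimal sketch, assuming a page that wants to read a user-selected file, this is the only route the sandboxed renderer has to the user's files: the chooser dialog is put up by the more privileged browser process, and the renderer only ever sees the single file the user picked.

```typescript
// Sketch: from the page's point of view, local file access is mediated by
// the browser's file chooser.  The sandboxed renderer never opens a path
// itself; it only receives the File object for whatever the user selected.
const input = document.createElement("input");
input.type = "file";
document.body.appendChild(input);

input.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;

  const reader = new FileReader();
  reader.onload = () => {
    // The renderer now holds the contents of this one file and nothing else.
    console.log(`Read ${file.name}, ${file.size} bytes`);
  };
  reader.readAsText(file);
});
```

The user's act of choosing a file in a trusted dialog is what grants access, so nothing else on the user's disk needs to be exposed to the rendering code at all.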