Randell Jesup, Mozilla Browser Architecture team
We’ve recently moved a number of parts of Firefox into separate processes (e10s, multi-e10s, the GPU process, the compositor, the WebExtensions process, GMP). This has produced serious improvements in both security (sandboxing content and codecs) and stability (restarting the GPU process when a GPU driver crashes, etc.). This project is to evaluate how much further we can push process isolation to improve security and stability further.
We have large processes running many unrelated components with widely varying security and stability properties. In many cases a single bug (including in OS drivers) will take down either a large fraction of your tabs, or the master process and by extension the entire browser.
A related concern is that a single exploitable bug gives access to a large part of the browser. Even if it’s in a Content process, it can give access to ¼ of your tabs, and because Content processes need broad access to data and wide channels of communication with the Master process, the possibilities for sandbox escape or information leakage are quite high.
Features and capabilities often have code strewn across various parts of the tree, increasing the maintenance cost and risk of unrelated changes breaking them.
There are some secondary benefits we hope to achieve by doing this, such as decoupling parts of the system and providing more-stable interfaces to them, as well as easing some aspects of unit testing. There may be some advantages in build times or cost of maintenance; we’ll see.
There are costs: development time, memory use, and performance all can or will be negatively impacted. Part of this project is to quantify these impacts (and hopefully reduce them) in order to guide the decisions on how far to push this process.
Chrome/Chromium has recently been doing similar work under the name “Servicification”. This is related to their gradual replacement of classic ipc/chromium-based IPC with “Mojo”, a new IPC implementation (and IPDL-like layer) with better performance than classic Chromium IPC. Note that some of the ways Mozilla uses IPC (multiple channels, PBackground and the like) may avoid some of the performance costs Chrome sees - but we haven’t yet assessed how much overlap there is between Mojo and our additions to Chromium IPC. It may be that with some smaller modifications our use of Chromium IPC will be as efficient (or nearly so) as Mojo.
Chrome is also working on Site Isolation, to prevent a compromised Renderer from breaking cross-origin isolation (more details below).
As part of this work, since IPC performance affects the cost of Process Isolation, we plan to explore adopting Mojo for Mozilla IPC (either wholesale, or progressively as Chrome has been doing).
Alternatively, and maybe more interestingly, we could look at using a Rust IPC crate and some added interface logic. <Need some more concrete suggestions/pointers here>
Note: we don’t want to just “follow” Chrome’s lead, but to do what’s smart for the users and Firefox, whether it’s similar to Chrome or not. Leapfrogging them in some manner would be great, but is not a requirement. If we can leverage work or research they’ve done to reduce our cost or risks, great.
Develop a browser-wide direction for Process Isolation:
How much Servicification should we do?
How many Content Processes should we use?
Should we consider a Chrome-like Process-Per-Origin or Process-Per-Iframe model? What are the implications of this?
Measure the overhead of using Process Isolation:
Memory cost for various scenarios (which largely depends on which parts of Firefox code need to be initialized/used in the process - XPCOM, etc.)
Performance cost - latency and throughput of calls through IPC (with classic IPC and Mojo, and perhaps a Rust IPC crate)
Evaluate if we can make Chromium IPC as efficient as Mojo
I suspect we can; the main performance advantage Mojo has over classic IPC is one fewer thread hop per message (two fewer per round trip - 4 vs. 6). With classic IPC a message hops from the sending thread to the sender’s IO thread, then across the process boundary to the receiver’s IO thread, and then to the destination thread; Mojo writes to the message pipe directly from the sending thread, eliminating the sender-side IO-thread hop. The overall code is probably a lot “cleaner”, but it would be a fair bit of work to convert over, though probably much of it could be hidden by IPDL.
We believe that Mojo’s shutdown code may be more robust/better engineered than IPC’s; shutdown has been a common source of crash/security bugs.
We think it might be possible, in some special(?) cases, to avoid the thread hop on the receiving side as well as the sending side. Mojo does not do this.
Startup cost - cost to browser startup time
Service startup time cost - cost on first use of a service which requires spawning a Process
Analysis of Process Isolation
Analysis of the code maintenance impact
Analysis of the stability impact
Analysis of the security impact
Analysis of embryo processes
Analysis of IPC options
Update/improve current Chromium IPC
Mojo
Rust IPC
Android analysis - Android will likely require different tradeoffs
Develop a preliminary list of potential subsystems/features to consider for Isolation
Necko is already planning to Isolate network socket access and protocol code (and some crypto code) after 57 or 58 – Bug 1322426
They expect to land code in 61 behind a pref, and enable it in release in 63.
Video camera capture code (and screensharing) is another prime target, as it’s already remoted over IPC even when running non-e10s. The way this works is very similar in principle to Chrome’s remote-services-over-Mojo approach.
Places in particular, eventually profile data access in general. This pushes storage asynchrony to the process boundary and decouples lifetimes (background storage and syncing, faster ‘startup’ and relaunch, with Places possibly living longer than a crashed browser). Future technology refactors could make this standalone process reusable outside of desktop Firefox. No bugs filed yet.
Font and/or Image code to avoid or reduce duplication of data between Content processes and the Compositor.
Printing
PDF display/PDFium
Look at the Content Process state and model (most of this has been done by the e10s team)
How far do we want to push the model towards Chrome’s 1-per-origin/iframe model?
Probably not as far… note however that Chrome has closed the gap we created with them on memory use. [reference?]
Even Chrome can’t get away with their stated goal (yet?)
How much does servicification help reduce Chrome’s process overhead (by avoiding N instances of things)?
How much can this work help sandbox hardening?
Very speculative: examine if the current Master Process could be moved to be a child process of a thin Master Process, allowing restarts on Master Process crash without reloading all the running Content Processes.
GMP
E10S
GPU process
Compositor
WebExtensions
Examination of the options for sandboxing Audio capture and playback, as well as other parts of the Media code: Media, WebRTC and Audio Sandboxing Plans
Background docs (need to be updated, but maybe contain some useful info)
https://wiki.mozilla.org/Security/Sandbox/Process_model
Chrome has been discussing Servicification since roughly early 2016, and major work on it has begun this year (2017). This is the primary document: Chrome Service Model, and this is the primary root of the Servicification work: Servicification.
An example of one item currently being moved to a Service is Video Capture: Design Doc and detailed plans and measurements. Another which I think has been completed is the Prefs Service.
Mojo consists of a set of common IPC primitives, a message IDL format, and a bindings library (with generation for a number of languages; undoubtedly we’d need to add Rust binding generation – C++ bindings are here). Mojo has been measured in Chrome as being about ⅓ faster than classic IPC, and produces ⅓ fewer context switches. (These two facts are probably related; their performance analysis indicates that not having to “hop to the IO thread” is part of why it’s faster, which makes sense.)
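To make the shape of those bindings concrete, here is a rough sketch of defining and calling a Mojo interface with the C++ bindings. It is based on the Logger example from Chromium’s Mojo documentation; the mojo::Remote/mojo::Receiver types are Chromium’s (newer Chromium uses these where older code used InterfacePtr/Binding), and the generated header path is hypothetical.

```cpp
// Sketch only: assumes Chromium's Mojo C++ bindings and roughly the sample
// interface from their documentation:
//   module sample.mojom;
//   interface Logger { Log(string message); };
// The generated header and exact binding types vary by Chromium version.
#include <string>
#include <utility>

#include "mojo/public/cpp/bindings/receiver.h"
#include "mojo/public/cpp/bindings/remote.h"
#include "sample/public/mojom/logger.mojom.h"  // hypothetical generated header

// Service side: implements the interface and owns the receiving end of the pipe.
class LoggerImpl : public sample::mojom::Logger {
 public:
  explicit LoggerImpl(mojo::PendingReceiver<sample::mojom::Logger> receiver)
      : receiver_(this, std::move(receiver)) {}

  // Messages arrive here after validation/deserialization, on the thread the
  // receiver is bound to.
  void Log(const std::string& message) override {
    // ... record the message ...
  }

 private:
  mojo::Receiver<sample::mojom::Logger> receiver_;
};

// Client side: create a pipe, hand one end to the service, call through the other.
void UseLogger() {
  mojo::Remote<sample::mojom::Logger> logger;
  LoggerImpl impl(logger.BindNewPipeAndPassReceiver());  // normally lives in another process
  logger->Log("hello from the client");  // no sender-side IO-thread hop
}
```

In Chromium the two ends would normally live in different processes, with the PendingReceiver passed across an existing interface or brokered by the service manager; binding both ends in one function here just keeps the sketch self-contained.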
One thing we plan to experiment with is seeing if (leveraging our IPDL compiler) we can fairly transparently replace (some?) existing Chromium IPC channels/messages with Mojo.
Chrome has docs on how they move legacy IPC code to Mojo. This is a (somewhat dated) cheat sheet on moving code from IPC to Mojo.
One interesting tidbit from that cheatsheet:
IPC
IPCs can be sent and received from any threads. If the sending/receiving thread is not the IO thread, there is always a hop and memory copy to/from the IO thread.
Mojo
A binding or interface pointer can only be used on one thread at a time since they’re not thread-safe. However the message pipe can be unbound using either Binding::Unbind or InterfacePtr::PassInterface and then it can be bound again on a different thread. Sending an IPC message in Mojo doesn’t involve a thread hop, but receiving it on a thread other than the IO thread does involve a thread hop.
Mojo has extensive support for message validation:
Regardless of target language, all interface messages are validated during deserialization before they are dispatched to a receiving implementation of the interface. This helps to ensure consistent validation across interfaces without leaving the burden to developers and security reviewers every time a new message is added.
1. Memory use. Content Process overhead is tracked in [Bug 1436250](https://bugzilla.mozilla.org/show_bug.cgi?id=1436250). It’s measured both with a small patch to dump the ASAN heap state on command, and using a DMD build with specific environment options.
   - Minimal process (with IPC)
   - With XPCOM: 7-10MB
   - Full Content process: 25-30MB (varies by OS, memory model (32 vs. 64 bit), and system details (fonts, etc.))
   - Content Process overhead is critical for Site Isolation
2. Performance -- measure each configuration with small messages, large messages, and shared-memory payloads (anything else?); see the benchmark sketch after this list.
   - Latency
   - Messages/second
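As a rough illustration of the latency and messages/second measurements above, here is a minimal ping-pong micro-benchmark sketch. It does not use Chromium IPC or Mojo; it just bounces a small fixed-size message between two processes over a Unix socketpair to get an OS-level baseline. The real measurements would run the same pattern through IPDL/Chromium IPC and Mojo endpoints.

```cpp
// Minimal cross-process ping-pong sketch (POSIX only): measures round-trip
// latency and messages/second for small payloads over a socketpair.
// This is an OS-level baseline, not a measurement of Chromium IPC or Mojo.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#include <chrono>
#include <cstdio>

int main() {
  int fds[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return 1;

  constexpr int kIterations = 100000;
  char buf[64] = {0};  // the "small message" payload
  // For 64-byte messages we assume reads/writes aren't split; that holds in
  // practice and keeps the sketch simple.

  pid_t child = fork();
  if (child == 0) {
    // Child: echo every message straight back.
    close(fds[0]);
    for (int i = 0; i < kIterations; ++i) {
      if (read(fds[1], buf, sizeof(buf)) <= 0) break;
      write(fds[1], buf, sizeof(buf));
    }
    _exit(0);
  }

  close(fds[1]);
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; ++i) {
    write(fds[0], buf, sizeof(buf));  // ping
    read(fds[0], buf, sizeof(buf));   // pong
  }
  double elapsed = std::chrono::duration<double>(
                       std::chrono::steady_clock::now() - start).count();
  waitpid(child, nullptr, 0);

  printf("round trips:            %d\n", kIterations);
  printf("avg round-trip latency: %.2f us\n", elapsed * 1e6 / kIterations);
  printf("messages/second:        %.0f\n", (2.0 * kIterations) / elapsed);
  return 0;
}
```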
Moving services and other code into sandboxed processes should generally increase the system’s resilience to security bugs. In particular, an attacker who compromises one sandboxed process will need a sandbox escape of some sort to turn that into full or increased control, and will generally only gain access to data already flowing through that Service - and for many sandboxed processes, exfiltrating the compromised data would be much harder.
How hard exfiltration would be depends on what sort of data flows through the Service, how locked-down we can make the process, and whether the output of the process is normally visible to content in some way. For example, if the process did image decoding, data could be exfiltrated by using it to break cross-domain restrictions: taking the content of image A in domain X and outputting it in place of image B from domain Y (the attacker’s domain), allowing the attacker to render it in a canvas.
Another way isolation helps security is by separating memory pools - memory errors such as use-after-frees (UAFs) become much harder to exploit if the attacker can’t put the memory system under load (forcing reallocation of the freed memory with content they control, for example). In a Content process with JS running (and tons of other stuff), this is often not too hard; in an isolated Service it might be very hard indeed.
Once a Process is compromised, leveraging that into an exploit requires escaping into other Processes (using further bugs), or leveraging an OS bug. How hard that is depends on the OS and how locked-down we can make these Processes. “Small” processes may have much smaller OS attack surfaces, though this might be tougher to do on (say) Windows due to granularity of permissions.
Chrome doesn’t actually use a Process-per-tab (or origin), though many people believe it does: see Peter Kasting’s post from June. (That was partially in response to some of our announcements around E10S.) The number of Render processes they use depends on the available memory - though it sounds like they have bugs there, and that may cause them to overrun and slow down the user’s system.
Chrome is working on Site Isolation. Part of this is putting iframes OutOfProcess from the main page renderer, but more generally it’s about not using a renderer (Content process) for more than one origin. This has some serious downsides if taken to an extreme, and currently they’re planning to do this only for “high value” sites. (It’s unclear what “high value” means here; one presumes banks, paypal, and other especially juicy targets.)
As mentioned above, Chrome is increasing the number of non-Render processes they use as part of Servicification.
The Chrome Process Model document is useful, but very out of date with current details - for example, the Site Isolation work they’ve done. Some of the code for all these decisions is here.
In “Browser security beyond sandboxing” Microsoft goes into detail on a Chrome vulnerability, but also highlights some of what they’ve done - in particular Arbitrary Code Guard: “ACG, which was introduced in Windows 10 Creators Update, enforces strict Data Execution Prevention (DEP) and moves the JIT compiler to an external process. This creates a strong guarantee that attackers cannot overwrite executable code without first somehow compromising the JIT process, which would require the discovery and exploitation of additional vulnerabilities.” Their post introducing ACG is here.
In more detail: “To support this, we moved the JIT functionality of Chakra into a separate process that runs in its own isolated sandbox. The JIT process is responsible for compiling JavaScript to native code and mapping it into the requesting content process. In this way, the content process itself is never allowed to directly map or modify its own JIT code pages.”
This suggests that we should consider moving the JIT itself out of the main process space, and just share the result of the JIT back with the Content process requesting it. Since we already do JIT compilation on a non-MainThread, the main issue here is probably shared-memory management. There will be some performance tests to run to validate whether this is feasible within our current overall architecture, but it seems possible. This is being tracked in Bug 1348341, and Tom Ritter has been investigating in this doc.
Note that if the JIT process is compromised, anything running through it is as well, and any content processes it provides code for. This would imply that JIT processes may need to be tied to a single requesting Content Process to avoid stepping backwards in security here (and this also increases the potential memory cost).
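To illustrate the shared-memory split described above (and not SpiderMonkey’s actual JIT or Firefox IPC code), here is a very rough POSIX sketch: a “JIT” child process writes machine code into a shared-memory region it maps read-write, and the “content” parent process maps the same region read+execute only, so the content side never holds a writable mapping of its own JIT code. The x86-64 code bytes and the shm name are purely illustrative.

```cpp
// Sketch (POSIX + x86-64 only): a "JIT" process writes code into shared
// memory that the "content" process maps read+execute (never writable).
// This illustrates the ACG-style split; it is not Firefox/SpiderMonkey code.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#include <cstdio>
#include <cstring>

int main() {
  const char* kShmName = "/jit-demo";  // illustrative name
  const size_t kSize = 4096;

  // x86-64 machine code for: mov eax, 42; ret
  const unsigned char kCode[] = {0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC6};

  int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
  if (fd < 0 || ftruncate(fd, kSize) != 0) return 1;

  pid_t child = fork();
  if (child == 0) {
    // "JIT process": the only place where the code pages are ever writable.
    void* rw = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    memcpy(rw, kCode, sizeof(kCode));
    munmap(rw, kSize);
    _exit(0);
  }

  // "Content process": wait for the JIT result, then map it read+execute only.
  waitpid(child, nullptr, 0);
  void* rx = mmap(nullptr, kSize, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
  if (rx == MAP_FAILED) return 1;

  auto fn = reinterpret_cast<int (*)()>(rx);
  printf("jitted function returned %d\n", fn());  // prints 42

  munmap(rx, kSize);
  close(fd);
  shm_unlink(kShmName);
  return 0;
}
```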