I want to write a little bit about multi-process browsers, and specifically highlight some of the drawbacks that nobody who promotes these things ever wants to talk about. Why? Because I'm tired of having the same discussions with people again and again about this, and about why I'm against multi-process browsers.
First things first: any moderately complex program will use a technique called "multi-threading"; this is an integral part of practically any modern program and allows many computing tasks to safely run in parallel. These threads within a program can run on a single CPU core or be spread across multiple cores, depending on how things are designed and compiled (so don't confuse "CPU threads" with "program threads", please).
To make all this run smoothly and without issues, any threads that operate on shared data will use so-called "mutexes" (mutual exclusion locks) that lock data objects so they can't be modified, released from memory, or have other disastrous actions performed on them while another thread is working on them.
So, there's a solid and reliable framework to use here that allows efficient processing of multiple tasks simultaneously, all within a single process.
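To make that concrete, here's a minimal sketch in C++ (purely illustrative, not taken from any browser's code) of several threads safely updating one shared structure through a mutex, all inside a single process:

[code]
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// A shared resource, e.g. an in-memory cache used by all threads.
std::vector<int> sharedCache;
std::mutex cacheMutex; // protects sharedCache

void worker(int id) {
    for (int i = 0; i < 1000; ++i) {
        // The lock guarantees no other thread can modify (or free)
        // the cache while we are working on it.
        std::lock_guard<std::mutex> lock(cacheMutex);
        sharedCache.push_back(id * 1000 + i);
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int id = 0; id < 4; ++id)
        threads.emplace_back(worker, id);
    for (auto& t : threads)
        t.join();

    // All 4000 entries arrive intact: one process, many threads,
    // no messages passed anywhere.
    std::cout << sharedCache.size() << " entries\n";
    return 0;
}
[/code]

No messages, no copies, no waiting on another process: the threads simply take turns on the shared data and run in parallel the rest of the time.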
Now, enter the realm of multi-process browsing, which, for the sake of not having to type as much, I'll also refer to by Mozilla's term "electrolysis", abbreviated to e10s.
When you use e10s, of course there are some great advantages that I'm sure everyone is already sick of hearing about, like the potential for more graceful recovery from crashes or keeping the UI more responsive regardless of what terrible blob of JavaScript is run on a website. So, I'm not going to talk about that, as you'll be able to find many articles praising it into heaven on the net already. No, what I want to talk about is how this actually works in practice, and why it is in many respects slower, more dangerous to use, and a lot more resource-intensive than using a single multi-threaded process.
Inter-process communication
To make anything possible in a multi-process browser, you need what is called inter-process communication (IPC), a method to get program instructions from one process to another through a messaging system. For example, if you press the "reload" button in an e10s browser, you are interacting with the "main process" (which runs the user interface). This click has to be converted to a message that is sent to the web content process. When the web content process receives this message, it will perform the reload of the page, sending status messages back to the main process as the page is loading: first to confirm that the command was received, then more as the status of the page load changes. Because it is a messaging system, everything, without exception, has to be done asynchronously -- this means processes end up waiting for other processes most of the time. Multi-tasking operating systems tend to be rather efficient at this type of communication, but even sub-millisecond delays quickly add up if you're dealing with thousands of messages being sent back and forth, causing a noticeable sluggishness in the browser that can be even more pronounced than any delay you experience from downloading page assets from the net.
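To give an idea of what a single message round trip involves, here's a minimal sketch (C++ on a POSIX system, using plain pipes; real browser IPC layers are far more elaborate than this) of a "main process" sending a reload command to a "content process" and waiting for the acknowledgement:

[code]
#include <chrono>
#include <cstring>
#include <iostream>
#include <unistd.h>   // fork, pipe, read, write (POSIX)
#include <sys/wait.h>

int main() {
    int toChild[2], toParent[2];
    pipe(toChild);
    pipe(toParent);

    if (fork() == 0) {
        // "Content process": wait for a command, then answer with a status.
        char buf[64];
        read(toChild[0], buf, sizeof(buf));
        // ...here the actual work (reloading the page) would happen...
        const char* reply = "status: reload started";
        write(toParent[1], reply, strlen(reply) + 1);
        _exit(0);
    }

    // "Main process" (UI): the user clicked "reload", so send a message
    // and wait for the acknowledgement to come back.
    auto start = std::chrono::steady_clock::now();
    const char* cmd = "command: reload";
    write(toChild[1], cmd, strlen(cmd) + 1);

    char buf[64];
    read(toParent[0], buf, sizeof(buf));
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - start).count();

    std::cout << "round trip took " << us << " microseconds\n";
    wait(nullptr);
    return 0;
}
[/code]

One such round trip is cheap on its own, but a browser doing this for every click, every cookie access and every asset quickly racks up thousands of them per page.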
Process restrictions and IPC
One of the advantages touted for multi-process browsing is that it is "more secure" because web content processes (which load potentially dangerous content) can run with restricted rights. This restriction also inherently means a lot of data has to be passed through IPC to the main process. After all: if you want to cache web content, which is an integral part of any browser, you can't do that from a restricted content process that doesn't have the rights to write to the file system. Any data that has to be cached by the browser will have to be downloaded by the content process, packaged as a payload in a message, and sent asynchronously to the main process, which can then cache this content. Similarly, just loading a page will involve the content process checking with the main process whether what is being downloaded is in the cache or not, generating many more messages to be sent back and forth for each asset on the page. As you can see, some of the most basic features of a web browser become very complex, with an enormous amount of overhead.
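As a back-of-the-envelope illustration, here's a sketch of the per-asset message traffic when the content process isn't allowed to touch the disk itself (the message names below are made up for this example and don't come from any real browser):

[code]
#include <iostream>
#include <string>
#include <vector>

// Hypothetical message types a sandboxed content process would have to
// exchange with the main process for every asset it wants to load or cache.
enum class Msg {
    CacheQuery,      // content -> main: "is this URL in the cache?"
    CacheQueryReply, // main -> content: hit or miss
    CacheStore,      // content -> main: "here is the downloaded payload"
    CacheStoreAck    // main -> content: "stored"
};

int main() {
    std::vector<std::string> assets = {
        "index.html", "style.css", "app.js", "logo.png"
    };

    int messages = 0;
    for (const auto& asset : assets) {
        // Every asset needs at least a query and a reply, and on a cache
        // miss a store plus acknowledgement on top of that, because the
        // content process itself may not write to disk.
        std::vector<Msg> flow = { Msg::CacheQuery, Msg::CacheQueryReply,
                                  Msg::CacheStore, Msg::CacheStoreAck };
        messages += static_cast<int>(flow.size());
        std::cout << asset << ": " << flow.size() << " IPC messages\n";
    }
    std::cout << "total: " << messages << " messages for "
              << assets.size() << " assets\n";
    return 0;
}
[/code]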
As for the security of a single-process browser: although it's potentially simpler to just off-load security restrictions to the operating system, properly-designed single-process browsers are just as secure, if not more so (because they actively keep tabs on restrictions instead of passively letting someone else handle them). Separating browser content from the application code of a browser is an essential mechanism that all browsers (and any document viewer, really) use. In Mozilla-land, it's a little more complicated because both content and the application use the same technologies, and more of this active separation is needed to keep web page scripting from accessing UI scripts. Even so, properly designed content containers and so-called X-ray vision (read more about that here) are an effective method to keep things neatly separated.
Off-loading security to others also means you have no control over it. If there's a vulnerability in the IPC mechanism or in the operating system's sandboxing, then there's nothing you can do about it, and you'll have to rely on the OS vendor to fix it before the vulnerability is mitigated.
Web standards and multi-process aren't always compatible
It doesn't end there, though. By design, a good number of web standards are synchronous. This isn't a problem when dealing with single-process applications, because the application has full control over the sequence of steps performed to complete a synchronous task. Because everything in e10s must, by design, be asynchronous, this causes problems. Take setting cookies, for example, a very basic operation every single browser performs; this is such a significant problem that the web standards even make special mention of it:
WhatWG HTML DOM spec wrote: The cookie attribute's getter and setter synchronously access shared state. Since there is no locking mechanism, other browsing contexts in a multiprocess user agent can modify cookies while scripts are running. A site could, for instance, try to read a cookie, increment its value, then write it back out, using the new value of the cookie as a unique identifier for the session; if the site does this twice in two different browser windows at the same time, it might end up using the same "unique" identifier for both sessions, with potentially disastrous effects.

This example is only a problem in multi-process, because two content processes can easily read the same state, and then write the same changed state back to the shared cookie storage in the main process. In a single-process browser this isn't a problem, because getting and setting cookies is synchronous (one task, one result).
In other words, many parts of the web standards don't specify mutexes or locking mechanisms because they simply wouldn't be needed if the browser weren't forced to use asynchronous calls for everything (even things that are, by design, synchronous in nature).
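To see the race the spec describes in action, here's a small simulation (C++, with threads standing in for two content processes sharing one cookie store; purely illustrative): each individual cookie read and write is protected, just like the getter and setter are, but nothing locks the read-increment-write sequence as a whole, so both "windows" end up with the same "unique" session id.

[code]
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

// The shared cookie storage, as it would live in the main process.
int cookieValue = 0;
std::mutex cookieMutex; // each individual get/set is safe on its own...

int readCookie() {
    std::lock_guard<std::mutex> lock(cookieMutex);
    return cookieValue;
}

void writeCookie(int v) {
    std::lock_guard<std::mutex> lock(cookieMutex);
    cookieValue = v;
}

// ...but nothing locks the read-increment-write sequence as a whole,
// which is exactly the gap the spec is warning about.
void browsingContext(const char* name) {
    int value = readCookie();
    // Give the other "content process" time to read the same value.
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    writeCookie(value + 1);
    std::cout << name << " uses \"unique\" session id " << value + 1 << "\n";
}

int main() {
    std::thread a(browsingContext, "window A");
    std::thread b(browsingContext, "window B");
    a.join();
    b.join();
    // Both windows will almost certainly print session id 1.
    return 0;
}
[/code]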
Resource usage
Of course there is the additional issue of resource usage: for every browser process in use, another copy of the parsing, layout and rendering engine has to be loaded and used, or the web content processes would not be able to display their pages. This quickly inflates the amount of graphics resources, memory and processing power used to load the same content in an e10s browser compared to a single-process browser. There may even be contention between processes for the same resources under heavy use, e.g. video memory.
I may add more to this later, but these are the main concerns I keep having to explain to people.