flypig.co.uk

Personal Blog


1 Dec 2024 : Selecting New Home Server Hardware #
For the last 17 years I've been running a home server called Constantia. Virtually it lives at www.flypig.org.uk but physically it lives on a bookshelf in the corner of the dining room next to our home WiFi and broadband router.

I call it a home server, but more recently the term Home Lab seems to have become more popular. I use it for running my personal cloud services, experimenting with various server technologies and helping me orchestrate my home network. Since I avoid corporate cloud services like Google and Dropbox it provides me with a pretty essential set of services.

Until recently it was running Nextcloud (shared drive, calendar, contacts, phone backup), Bind9 (DNS), git (development), SVN (development), Apache2 (Web server), OpenVPN (VPN), Jitsi (video conferencing), ZNC (IRC bouncer), an FTP server, SMB shares, media sharing, backup to AWS and various cron-jobs.

That looks like quite a lot, but in practice all of this can be run with very modest compute capabilities.

The first Constantia incarnation was a Koolu Net Appliance running an AMD Geode LX 800 processor that I bought for a couple of hundred pounds in November 2007.

It worked great for many years until Ubuntu dropped support for the Geode, which caused me a bit of trouble. It sadly died after about five years and I replaced it with an Aleutia T1 running a Celeron J1800 processor.

Both the Koolu and Aleutia were fanless and both ran great. Until a couple of months back when the Aleutia also died. At first I thought it might have been a hard drive failure, but on testing the components I found both the internal SSD and 2.5 inch HDD were working fine.

In truth it was getting a little sluggish even for server tasks running the latest Ubuntu. So I've taken the hint that after seven years of trusted service, it's time to upgrade the hardware to a new device again.

As well as being fanless, both devices were also diminutive, the larger Aleutia measuring just 35 mm × 180 mm × 200 mm. Over the last decade or so there's been a positive explosion in the number and diversity of mini PCs on the market, driven in part by the success of the Raspberry Pi, but also in part by Intel's NUC initiative and no doubt a wealth of other factors.

This is great in terms of options, but also means a much harder time choosing. So I'm taking the same approach I've typically taken when buying a new laptop, involving a table of specs, some rationale for ruling out certain options, and a final choice that's based largely on guesswork and impulse.

Given my experience with my previous servers, I know the following are going to be important to me:
  1. Small footprint: the Aleutia was already pushing the limits of acceptability (in terms of width and depth), so ideally no larger than this.
  2. Fanless: for the sake of my own sanity, no noise is key.
  3. Low idle power: it's going to be running 24/7 but will spend most of the time awaiting incoming network connections.
  4. Linux compatible: I'll be running Ubuntu on it; having to pay for a Windows or macOS licence would be irksome.
  5. Upgradable: 16 GiB RAM, 512 GiB storage at minimum with more in the future.
  6. Generic hardware: experience with the Geode tells me a generic chipset is preferable if it's going to be running for many years.
  7. Multi-core: the cores don't have to be fast, but at least four will be helpful for running as a Web server.
On the other hand, there are features that many small home PCs offer that I know aren't going to be so relevant to me. Some of these might be nice, but they're not essential.
  1. Price: there's a limit to how expensive small PCs can get and given the cost will effectively be amortised over multiple years, I'm willing to pay for something worthwhile.
  2. GPU: some parallel compute would be nice, but I'm not planning to play any games.
  3. Minuscule: small is good, but I'd rather have the potential to add a spare HDD than have it smaller than my thumb.
  4. Multi-display: it'll be a headless Web server, so will only get connected to a display under exceptional circumstances.
  5. Glitz: flashing lights, glowing logos, embedded displays; they'll just be a waste.
Also worth mentioning is that I plan to configure the device with a minimum of 16 GiB RAM and 1 TiB storage (preferably solid state). However, I'll be perfectly happy getting a bare-bones system and adding the memory and storage myself.

With all this in mind I've been scouring the Web for mini PCs over the last month to try to find anything that might fit the bill. Right now, N100 and N305 devices seem to be top of the range for fanless mini PCs. There are also Ryzen processors that compete, but in practice, the requirement for the device to be fanless constrains things quite significantly.

Here's the table with the contenders. In the first block the first two columns show my previous devices for comparison. I've also included two devices with fans (the ASUS NUC 14 Pro and the Mac Mini M4), also just for comparison.

The second block shows the five most likely contenders. The third block shows five other systems I compared against, all of which have flaws significant enough for me to reject them as options.


Colour coding: good / acceptable / bad.

Previous hardware (plus two fan-cooled systems for comparison):

                    | Koolu             | Aleutia T1    | Asus NUC 14 Pro | Mac Mini M4
Company             | Koolu             | Aleutia       | Asus            | Apple
Review              | Chas' Compilation | FanlessTech   | Juan Bagnell    | Lon.TV
Height (mm)         | 35.0              | 35.0          | 54.0            | 50.0
Width (mm)          | 130.0             | 180.0         | 112.0           | 127.0
Depth (mm)          | 140.0             | 200.0         | 117.0           | 127.0
Weight (kg)         | 1.000             | 0.991         | 0.600           | 0.670
Fan                 | ✘                 | ✘             | ✔               | ✔
Metal chassis       | ✔                 | ✔             | ✘               | ✔
Power idle (W)      | 5.00              | 10.00         | 8.00            | 6.00
Power load (W)      | 5.00              | 10.00         | 88.00           | 40.00
Processor           | AMD Geode LX 800  | Celeron J1800 | Core Ultra 7    | M4
Cores               | 1                 | 2             | 16              | 10
Memory (GiB)        | 0.5               | 1.0           | 16.0            | 16.0
SSD bays            | 1                 | 1             | 1               | 0
SSD (TiB)           | 0.04              | 0.56          | 1.00            | 1.00
OS                  | Ubuntu            | Ubuntu        | Windows 11      | macOS
Price (£)           | 200               | 200           | 800             | 1000
Price inc. mem. (£) | 200               | 200           | 800             | 1000
Pros                |                   |               | Processor       | Idle power; performance
Cons                |                   |               | Fan             | Fan
Notes               | Bought 2007       | Bought 2013   |                 |

The contenders:

                    | QOTOM Q20332G9-S10 | HUNSN BM34  | iKoolCore R2 Max | CWWK Mini PC      | MeLE Quieter 4C
Company             | QOTOM              | HUNSN       | iKoolCore        | CWWK              | MeLE
Review              | ServeTheHome       | FanlessTech | ServeTheHome     | ServeTheHome      | Liliputing
Height (mm)         | 62.0               | 50.0        | 40.0             | 53.6              | 18.3
Width (mm)          | 122.0              | 125.0       | 118.0            | 145.4             | 81.0
Depth (mm)          | 217.0              | 170.0       | 157.0            | 145.6             | 131.0
Weight (kg)         | 2.500              | 1.500       | 1.050            | 1.800             | 0.203
Fan                 | ✘                  | ✘           | ✘                | ✘                 | ✘
Metal chassis       | ✔                  | ✔           | ✔                | ✔                 | ✘
Power idle (W)      | 16.00              | 6.00        | 10.00            | 9.00              | 7.10
Power load (W)      | 32.00              | 10.00       | 24.00            | 36.00             | 18.50
Processor           | Atom C3758R        | N100        | N100             | N305              | N100
Cores               | 8                  | 4           | 4                | 8                 | 4
Memory (GiB)        | 32.0               | 0.0         | 16.0             | 32.0              | 16.0
SSD bays            | 2                  | 1           | 0                | 0                 | 0
SSD (TiB)           | 1.00               | 0.00        | 1.00             | 1.00              | 0.50
OS                  | Linux              | Linux       | Linux            | Linux             | Linux
Price (£)           | 400                | 168         | 500              | 327               | 240
Price inc. mem. (£) | 400                | 268         | 500              | 327               | 240
Pros                | 8 cores            | Good fit    | Good fit         | Great performance | Small, low power
Cons                | No acceleration    |             | Pre-order        | Gets hot          | Not upgradable
Notes               |                    |             |                  |                   |

The rejected options:

                    | MINIX Neo Z300-dB | MINIX Z100-0db | Asus NUC 13 rugged | Shuttle XPC DL30N | MeLE Quieter 3Q
Company             | MINIX             | MINIX          | Asus               | Shuttle           | MeLE
Review              | Robtech           | Lon.TV         | ServeTheHome       | Mad Shrimps       | CNX Software
Height (mm)         | 46.0              | 46.0           | 35.8               | 43.0              | 61.0
Width (mm)          | 120.0             | 120.0          | 108.0              | 165.0             | 146.0
Depth (mm)          | 123.0             | 123.0          | 174.0              | 190.0             | 200.0
Weight (kg)         | 0.890             | 0.890          | 1.060              | 1.300             | 0.182
Fan                 | ✘                 | ✘              | ✘                  | ✘                 | ✘
Metal chassis       | ✔                 | ✔              | ✔                  | ✔                 | ✘
Power idle (W)      | 10.00             | 8.00           | 3.70               | 9.46              | 2.40
Power load (W)      | 31.00             | 26.00          | 18.00              | 22.00             | 10.90
Processor           | N300              | N100           | N50                | N100              | N5105
Cores               | 8                 | 4              | 2                  | 4                 | 4
Memory (GiB)        | 16.0              | 16.0           | 0.0                | 0.0               | 8.0
SSD bays            | 0                 | 0              | 1                  | 1                 | 0
SSD (TiB)           | 0.50              | 0.50           | 0.00               | 0.00              | 0.25
OS                  | Windows 11 Pro    | Windows 11 Pro | Linux              | Ubuntu            | Windows 11 Pro
Price (£)           | 340               | 270            | 318                | 224               | 190
Price inc. mem. (£) | 340               | 270            | 418                | 324               | 190
Pros                | Good processor    | Good value     | Good fit           | Good fit          | Tiny
Cons                | No SSD bay        | No SSD bay     | Slow processor     | Hard to source    | Poor specs
Notes               |                   |                |                    |                   |

There are perhaps a few things worth noting in this table. This isn't intended to be a comprehensive comparison; it just covers the issues that matter to me. For example, processor speed isn't included because I'm more concerned about the number of cores. I've also not included anything about connectivity (USB, HDMI and so on) because all of these systems reach the baseline for my needs.

As I mentioned above, my intention is to get a system with at least 16 GiB RAM and 1 TiB of solid state storage. Not all of the systems come with this specification, so alongside the price of the system I've also included a line showing the price after adding on the cost (which I estimate to be around £100) of any additional storage needed.

Almost all of the columns include at least one red ("bad") entry. The existence of a bad entry may not be enough to trigger an immediate rejection.
 
Product images of the five contenders, from left to right: QOTOM Q20332G9-S10, HUNSN BM34, iKoolCore R2 Max, CWWK Mini PC and MeLE Quieter 4C. Beneath each image is a box representing the relative size of each device.

Let's go through each of the contenders and consider their benefits and drawbacks.

The QOTOM Q20332G9-S10 is arguably the most interesting of the options here. The processor is an older generation, with only very basic GPU acceleration, but with the ability to offload crypto, which could be a really useful feature for my needs. It also has eight cores, which is great for what I need. The downside of the older processor and beefier networking is that the device draws a fair bit more juice at idle than newer generations; that's the biggest drawback for me. Crucially, it seems this device is built as a server with 10 G networking, rather than as a home PC. That's really what I'm looking for.

I'm particularly taken by the design of the HUNSN BM34 with its all-metal chassis and clean looks. It also claims to have space for a 2.5 inch drive inside. I'm not sure if I'll use this in the long run, but it's nice to have. It's incredibly good value with very low power requirements and the reviews on Amazon also cast it in a positive light. One downside is that I can only find minimal information about the BM34 model, which I couldn't even find listed on the HUNSN website. While it does have WiFi, it only has 1 G networking, whereas I'd prefer 2.5 G at least.

The iKoolCore R2 Max fits many of my requirements. It's apparently really well made and the N100 model doesn't suffer from throttling under load. It's not super-fast, but likely good enough for my needs. The device is built in and shipped from Hong Kong and when I contacted the company about taxes I was pretty happy with how they responded. The company also offers comprehensive documentation, which is pretty unusual in this space from what I can tell. The biggest positive of this device is the 10 G networking. The biggest downside for me is the fact it's not supposed to be user-serviceable. There are flaps on the underside for access to RAM and storage slots, but iKoolCore have used hexagonal screws for the main chassis. I can understand why, but I've really appreciated being able to open up my Aleutia device, so this would be a retrograde step. There's also no space for a 2.5 inch drive and it's expensive compared to the other devices I'm considering.

I added the CWWK Mini PC device explicitly so that I could have an N305 powered device in the list. When I started this search it became clear pretty early on that the Intel N100 and N305 were the most likely candidates for a small fanless device. The N305 has twice the cores, but the extra power obviously pushes the thermal envelope for a fanless design and I didn't find many that support it. This CWWK device looked like the most promising for running an N305. The idle power is still low and while the burst power is high, that's not such an issue for me. More of an issue is the fact there's no room inside for a 2.5 inch drive, which is a shame. On the plus side, 2.5 G networking is nice and there's an expansion board offering support for up to four SSDs. Neat.

The MeLE Fanless Quieter 4C is the smallest of the devices I looked at. And it really is very small. There's certainly something exciting about having a proper server that's barely larger than a Raspberry Pi. Unfortunately there are some compromises that come with this. In particular, the memory and storage are soldered on, so can't be upgraded. It can be bought with up to 32 GiB RAM and 512 GiB eMMC storage, plus the option to add an SSD, so this would still be workable. The case is also plastic rather than metal, which makes me a bit concerned about thermal dissipation. This would make a great mini-PC for desktop use, but I'm not so convinced it'd make a great home server.

There seems to be a lot to commend all of these devices. Ultimately I've decided to go with the CWWK Mini PC N305 device. I'm calling it that because its proper title appears to be "12th Gen Intel Firewall Mini PC Alder Lake i3 N305 8 Core Fanless Soft Router Proxmox DDR5 4800MHz 4xi226-V 2.5G"; not a name anyone wants to have to repeat. I'll go for the Intel i3-N305, bare bones model with NVME expansion interface to support four drives (I plan to use two, which I'll source separately). My aim will be to transfer over the data from Constantia to it to create a Constantia Mk III. I'll share the results here when I do.
Comment
26 Nov 2024 : A Brief Embedded Browser Expo #
The question of whether Gecko is the most appropriate browser for use on Sailfish OS is a perennial one. Back in the days of Maemo, with a user interface built using Gtk, Gecko may have seemed like a natural choice. But with the shift to Qt on the N9 and with Sailfish OS sticking to Qt ever since, it's natural to ask whether something like WebKit might not be a better fit.

Indeed, for many years WebKit was also an integral part of Sailfish OS, providing the embeddable QtWebKit widget that many other apps used as a way of rendering Web content. It wasn't until Sailfish OS 4.2.0 that this was officially replaced by the Gecko-based WebView API.

The coexistence of multiple engines within the operating system isn't the only reason many people felt WebKit would make a better alternative to Gecko. Another is the fact that WebKit, and subsequently Blink, has become the de facto standard for embedded applications. In contrast, although Mozilla was pushing embedded support back when Maemo was being developed, it has since dropped official embedded support entirely.

So in this post I'm going to take a look at embedded browsers. What does it mean for a browser to be embedded, what APIs are supported by the most widely used embedded toolkits, and might it be true that Sailfish OS would be better off using Blink? In fact, I'll be leaving this last question for a future post, but my hope is that the discussion in this post will serve as useful groundwork.

Let's start by figuring out what an embedded browser actually is. In my mind there are two main definitions, each embracing a slightly different set of characteristics.
  1. A browser that runs on an embedded device.
  2. A browser that can be embedded in another application.
Both use-cases benefit from a minimal user interface and low resource footprint, but otherwise they have slightly different requirements. The first case leans more heavily on lower resource requirements and might also necessitate minimal dependencies, given that embedded devices will be constrained in the frameworks available to them. The second case will rely more on the ability to integrate the browser into other apps by exposing a suitable API, alongside the potential to work with a variety of user interface toolkits.

If you frame it right though, these two definitions can feel similar. Here's how the Web Platform for Embedded team describe it:
 
For many of us, a browser is an application like the one you’re probably using now. You click an icon on your graphical operating system (OS), navigate somewhere with a URL bar, search, and so on. You have bookmarks and tabs that you can drag around, and lots of other features.

In contrast, an embedded browser is contained within another application or is built for a specific purpose and runs in an embedded system, and the application controlling the embedded browser does not provide all the typical features of browsers running in desktops.

So minimal, encapsulated, targeted. Maybe something you don't even realise is a browser.

And what does this mean in practice? That might be a little hard to pin down, but for me it's all about the API. What API does the browser expose for use by other applications and systems? If it provides bare-bones render output, but with enough hooks to build a complete browser on top of (at a minimum) then you've got yourself an effective embedded browser.

In the past Gecko provided exactly this in the form of the EmbedLite API and the XULRunner Development Kit. The former provides a set of APIs that allow Gecko to be embedded in other applications. The latter allows the Gecko build process to be harnessed in order to output all of the libraries and artefacts needed (such as the libxul.so library and the omni.ja resource archive) to integrate Gecko into another application.

Sadly Mozilla dropped support for both of these back in 2016, when it was decided the core Firefox browser needed to be prioritised over an embedding offering. Mozilla has made plenty of questionable decisions over the years and given the rise in use of WebKit and Chrome as embedded browsers, you might think this was one of them. But despite the lack of investment in the API, it's not been removed entirely, to Mozilla's credit. It is, in fact, still possible to access the EmbedLite APIs and to generate the XULRunner artefacts and get a very effective embedded browser.

We'll come back to the EmbedLite approach to embedding later. But in order to understand it better, I believe it's also helpful to understand the context. I therefore plan to look at three different embedded browser frameworks. These are CEF (the Chromium Embedded Framework), Qt WebEngine and then finally we'll return to Gecko by considering the Gecko WebView.

Looking through the documentation I was surprised at how similar these three frameworks appear to be. But trying them out I was quickly disabused of this misapprehension. They do offer similar functionality, but turn out to be quite different to use in practice.

Before we get in to the API details, let's first consider what a minimal embedded browser external interface might look like.
  1. Settings controller. An API, likely exposed as a class, to control browser settings such as cache and profile location, scaling, user agent string, privacy settings and so on. Browsers typically offer numerous configuration options and some of these such as profile location are especially important for the embedded case.
  2. JavaScript execution. Apps that embed a browser often have particular use-cases in mind. The ability to execute JavaScript is important for allowing interaction between the rendered Web content and the rest of the application (see also message passing interface).
  3. Web controls. There are a bunch of controls that are needed as a bare minimum for controlling browser content. Load URL; navigate forwards; navigate backwards; that kind of thing. An app that embeds a browser may choose to handle these controls itself, potentially hiding them from the user entirely, but at the very least the app has to be able to access these controls programmatically from its own code.
  4. Separate view widgets. The browser is an engine and often an app will want multiple views all of which make use of it, each rendering different content. An embedding framework should allow an app to embed multiple views, each making use of the same engine underneath.
  5. Message passing interface. The app and the browser need a way to communicate with one another. Browsers already work by broadcasting messages between different components, so there should be a way for the embedder to send and receive these messages as well. A common use case will involve the embedder injecting some JavaScript, with communication handled by message passing between the app and the JavaScript. The app can then act on the messages sent from inside the browser engine by the JavaScript.
The flow for an application embedding a browser component might look something like this:
  1. Populate a settings structure for the browser to capture the settings in an object.
  2. Instantiate a bunch of singleton browser classes. These will be for central management of the browser components.
  3. Pass in the settings object to these central browser components.
  4. Embed one or more browser widget into the user interface to create browser views.
  5. Inject some JavaScript into the browser views. This JavaScript listens for messages from the app, interacts with the browser content and sends messages back.
  6. Open the window containing the browser widget for the user.
  7. Interact with the JavaScript and browser engine by passing messages via the message passing interface.
  8. When the user closes the window, shut down the views.
This may not be how others have used embedded browsers, but this has typically been the flow for me when embedding a browser widget into another application.
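
To make this flow more concrete, here's a small hypothetical sketch in C++. None of these class or method names come from a real framework; they're invented purely to mirror the interface elements and flow described above:
// Hypothetical embedding API, invented for illustration only.
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <utility>

struct BrowserSettings {                         // 1. settings controller
  std::string profile_path;
  std::string user_agent;
};

class BrowserView {                              // 4. one view per widget
 public:
  void loadUrl(const std::string& url) { std::cout << "load " << url << "\n"; }
  void goBack() {}                               // 3. basic web controls
  void goForward() {}
  void injectJavaScript(const std::string& js) { // 2. JavaScript execution
    std::cout << "inject " << js.size() << " bytes of JS\n";
  }
  // 5. message passing between the app and the injected JavaScript
  void onMessage(std::function<void(const std::string&)> handler) {
    handler_ = std::move(handler);
  }
  void postMessage(const std::string& msg) { if (handler_) handler_(msg); }
 private:
  std::function<void(const std::string&)> handler_;
};

class BrowserEngine {                            // central engine management
 public:
  void applySettings(const BrowserSettings& s) { settings_ = s; }
  std::unique_ptr<BrowserView> createView() {
    return std::make_unique<BrowserView>();
  }
 private:
  BrowserSettings settings_;
};

int main() {
  BrowserEngine engine;                                        // step 2
  engine.applySettings({"/tmp/profile", "ExampleAgent/1.0"});  // steps 1 and 3
  auto view = engine.createView();                             // step 4
  view->injectJavaScript("app.postMessage(document.title)");   // step 5
  view->onMessage([](const std::string& msg) {                 // step 7
    std::cout << "message from page: " << msg << "\n";
  });
  view->loadUrl("https://example.com");                        // step 6
  view->postMessage("Example Domain");  // simulating a reply from the page
  return 0;                                                    // step 8
}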

Let's now turn to the three individual embedding frameworks to see how they approach all this.

CEF

Let's start by considering the CEF API as documented. I actually began by looking at the Blink source code, but it turns out this isn't set up well for easy integration into other projects. I should caveat this: the underlying structure may be carefully arranged to support it, but the Blink project itself doesn't seem to prioritise streamlining the process of embedding. For example it exposes the entire internal API with no simplified embedding wrapper and I didn't find good official documentation on the topic.

And that's exactly how CEF brings value. It takes the Chromium internals (Blink, V8, etc.) and wraps them with the basic windowing scaffolding needed to get a browser working across multiple platforms (Linux, Windows, macOS). It then adds in a streamlined interface for controlling the most important features needed for embedding (settings, browser controls, JavaScript injection, message passing).
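
As a rough illustration of that scaffolding, the bootstrap for a CEF application on Linux typically looks something like the sketch below. This is a hedged outline rather than a complete program: a real application would supply its own CefApp subclass and create windows and browsers before entering the message loop:
#include "include/cef_app.h"

int main(int argc, char* argv[]) {
  CefMainArgs main_args(argc, argv);

  // CEF launches helper sub-processes that re-enter main(); hand those
  // straight back to CEF and exit early when this is one of them.
  int exit_code = CefExecuteProcess(main_args, nullptr, nullptr);
  if (exit_code >= 0) {
    return exit_code;
  }

  // Engine-wide settings (cache path, log file and so on).
  CefSettings settings;
  CefInitialize(main_args, settings, nullptr, nullptr);

  // ...create windows and browsers (e.g. CefBrowserHost::CreateBrowser())...

  CefRunMessageLoop();  // runs until CefQuitMessageLoop() is called
  CefShutdown();
  return 0;
}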

Having worked through the documentation and tutorials, I found that the CEF project assumes a slightly different workflow from what I'd typically expect. Qt WebEngine and Gecko WebView are both provided as widgets that integrate with the graphical user interface (GUI) toolkit (which in these cases is Qt). On the other hand, CEF is intended for use with multiple different widget toolkits (Gtk, Qt, etc.). As a developer you're supposed to clone the cef-project repository with the CEF example code — which is complex and extensive — and build your application on top of that. As is explained in the documentation, the first thing a developer needs to do is to:
 
Fork the cef-project repository using Bitbucket and Git to store the source code for your own CEF-based project.

The documentation assumes you're starting from scratch; it's not clear to me how you're supposed to proceed if you want to retrofit CEF into an existing application. It looks like it may not be straightforward.

Nevertheless, assuming you're starting from scratch, CEF provides a solid base to build on, since you'll start with an application that already builds, runs and displays Web content. You can then immediately see how the main classes needed for controlling the browser are used.

There are many such classes, but I've picked out three that I think are especially important for understanding what's going on. As soon as you look into one of these classes you'll find references to other classes. You may need to look into these too if you want to fully understand what's going on; with each such step I found myself getting pulled a little further into the rabbit hole.

First the CefBrowserHost class. This class is pretty key as it handles the lifespan of the browser; it's described in the source as being "used to represent the browser process aspects of a browser". Here's a flavour of what the class looks like. I've cut a lot out for the sake of brevity, but you can check out the class header if you want to see everything.
class CefBrowserHost : public CefBaseRefCounted {
 public:
  static bool CreateBrowser(const CefWindowInfo& windowInfo,
                            CefRefPtr<CefClient> client,
                            const CefString& url,
                            const CefBrowserSettings& settings,
                            CefRefPtr<CefDictionaryValue> extra_info,
                            CefRefPtr<CefRequestContext> request_context);
  CefRefPtr<CefBrowser> GetBrowser();
  void CloseBrowser(bool force_close);
  [...]
  void StartDownload(const CefString& url);
  void PrintToPDF(const CefString& path,
                  const CefPdfPrintSettings& settings,
                  CefRefPtr<CefPdfPrintCallback> callback);
  void Find(const CefString& searchText,
            bool forward,
            bool matchCase,
            bool findNext);
  void StopFinding(bool clearSelection);
  bool IsFullscreen();
  void ExitFullscreen(bool will_cause_resize);
  [...]
};
As a developer you call CreateBrowser() to start up your browser, which you can then access using GetBrowser(). Once you're done you can destroy it using CloseBrowser(). All of these are accessed via this CefBrowserHost interface. As you can see, there are also a bunch of browser-wide functionalities (search, fullscreen mode, printing, etc.) that are also managed through CefBrowserHost.
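
For instance, bringing a browser to life with the CreateBrowser() signature shown above might look something like this minimal, hedged sketch; MyClient is a hypothetical CefClient subclass, and a real application would override its handler methods and populate the window info:
#include "include/cef_browser.h"
#include "include/cef_client.h"

// Minimal client: real code would override handlers for display, loading, etc.
class MyClient : public CefClient {
  IMPLEMENT_REFCOUNTING(MyClient);
};

void StartBrowser() {
  CefWindowInfo window_info;            // where and how the browser renders
  CefBrowserSettings browser_settings;  // per-browser configuration
  CefString url("https://www.whatsmybrowser.org");

  CefBrowserHost::CreateBrowser(window_info, new MyClient(), url,
                                browser_settings, nullptr, nullptr);
}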

Here's the interface for the CefBrowser object the lifecycle of which is being managed (you'll find the full class header in the same file):
class CefBrowser : public CefBaseRefCounted {
 public:
  bool CanGoBack();
  void GoBack();
  bool CanGoForward();
  void GoForward();
  bool IsLoading();
  void Reload();
  void StopLoad();
  bool HasDocument();
  CefRefPtr<CefFrame> GetMainFrame();
  [...]
};
Things are starting to look a lot more familiar now, with methods to perform Web navigation and the like. Notice however that we still haven't reached the interface for loading a specific URL yet. For that we need a CefFrame which the source code describes as being "used to represent a frame in the browser window". A page can be made up of multiple such frames, but there's always a root frame which we can extract from the browser using the GetMainFrame() method you see above.

Once we have this root CefFrame object we can then ask for a particular page to be loaded into the frame using the LoadURL() method:
class CefFrame : public CefBaseRefCounted {
 public:
  CefRefPtr<CefBrowser> GetBrowser();
  void LoadURL(const CefString& url);
  CefString GetURL();
  CefString GetName();
  void Cut();
  void Copy();
  void Paste();
  void ExecuteJavaScript(const CefString& code,
                         const CefString& script_url,
                         int start_line);
  void SendProcessMessage(CefProcessId target_process,
                          CefRefPtr<CefProcessMessage> message);
  [...]
};
The full definition of CefFrame can be seen in the cef_frame.h header file.

So now we've seen enough of the API to initialise things and to then load up a particular URL into a particular view. But CefFrame offers us a lot more than just that. In particular it offers up two other pieces of functionality critical for embedded browser use: it allows us to execute code within the frame and it allows us to send messages between the application and the frame.

Why are these two things so critical? In order for the content shown by the browser to feel fully integrated into the application, the application must have a means to interact with it. These two capabilities are precisely what we need to do this.

Understanding CEF requires a lot more than these three classes, but this is supposed to be a survey, not a tutorial. Still, it would be nice to know what it's like to use these classes in practice. To that end, I've put together a simple example CEF application that makes use of some of this functionality.
 
An application with a button bar at the top containing backward, forward and execute buttons, plus a URL bar. At the bottom of the window are three labels showing node count, DOM height and DOM width, all indicating zero. The main content on the page is rendered Web content from whatsmybrowser.org. It states the browser as being Chrome 129.

The application itself is simple and useless, but designed to capture functionality that might be repurposed for better use in other situations. The app displays a single window containing various widgets built using the native toolkit (in the case of our CEF example, these are Gtk widgets). The browser view is embedded between these widgets to demonstrate that it could potentially be embedded anywhere on the page.

The native widgets allow some limited control over the browser content: a URL bar, forwards and backwards. There's also an "execute JavaScript" button. This is the more interesting functionality. When the user presses this a small piece of JavaScript will be executed in the DOM context of the page being rendered.

Here's the JavaScript to be executed:
function collect_node_stats(global_context, local_context, node) {
  // Update the context
  local_context.depth += 1;
  local_context.breadth.push(node.childNodes.length);
  global_context.nodes += 1;
  global_context.maxdepth = Math.max(local_context.depth,
    global_context.maxdepth);

  // Recurse into child nodes
  for (child of node.childNodes) {
    child_context = structuredClone(local_context);
    child_context.breadth = local_context.breadth.slice(0,
      local_context.depth + 1);
    child_context = collect_node_stats(global_context, child_context, child);

    // Recalculate the child breadths
    for (let i = local_context.depth + 1; i < child_context.breadth.length;
      ++i) {
      local_context.breadth[i] = (local_context.breadth[i]||0) +
        child_context.breadth[i];
    }
  }

  // Paint the DOM red
  if (node.style) {
    node.style.boxShadow = "inset 0px 0px 1px 0.5px red";
  }

  // Move back up the tree
  local_context.depth -= 1;
  return local_context;
}

function node_stats() {
  // Data available to all nodes
  let global_context = {
    "nodes": 0,
    "maxdepth": 0,
    "maxbreadth": 0
  }

  // Data that's local to the node and shared with the parent
  let local_context = {
    "depth": 0,
    "breadth": [1]
  }

  // Off we go
  local_context = collect_node_stats(global_context, local_context, document);
  global_context.maxbreadth = Math.max.apply(null, local_context.breadth);
  return global_context;
}

// Return the results (only strings allowed)
// See: DomWalkHandler::Execute() defined in renderer/client_renderer.cc
dom_walk(JSON.stringify(node_stats()))
I'm including all of it here because it's not too long, but there's no need to go through this line-by-line. All it does is walk the page DOM tree, giving each item in the DOM a red border and collecting some statistics as it goes. As it goes along it collects information that allows us to calculate the number of nodes, the maximum height of the DOM tree and the maximum breadth of the DOM tree.

There is one peculiar aspect to this code though: having completed the walk and returned from collect_node_stats() the code then converts the results into a JSON string and passes the result into a function called dom_walk(). But this function doesn't exist. Huh?!

We'll come back to this.

The values that are calculated aren't really important, what is important is that we can return these values at the end and display them in the native user interface code. This highlights not only how an application can have its own code executed in the browser context, but also how the browser can communicate back information to the application. With these, we can make our browser and application feel seamlessly integrated, rather than appear as two different apps that happen to be sharing some screen real-estate.

Let's now delve in to some code and consider how our three classes are being used. We'll then move on to how the communication between app and browser is achieved.

To get the CEF example working I followed the advice in the documentation and made a fork of the cef-project repository. I then downloaded the binary install of the cef project, inside which is an example application called cefclient. I made my own copy of this inside my cef-project fork, hooked it into the CMake build files and started making changes to it.

There's a lot of code there which may look a bit overwhelming but bear in mind that the vast majority of this code is boilerplate taken directly from the example. Writing this all from scratch would have been... time consuming.

Most of the changes I did make were to the root_window_gtk.cc file. This handles the browser lifecycle as described above using an instance of the CefBrowserHost class.

We can see this in the RootWindowGtk::CreateRootWindow() method which is responsible for setting up the contents of the main application window. In there you'll see lots of calls for creating and arranging Gtk widgets (I love Gtk, but admittedly it can be a tad verbose). Further down in this same method we see the call to CefBrowserHost::CreateBrowser() that brings the browser component to life.

In the case of CEF the browser isn't actually a component. We tell the browser where in our window to render and it goes ahead and renders, so we actually create an empty widget and then update the browser content bounds every time the size or position of this widget changes.

This contrasts with the Qt WebEngine and Gecko WebView approach, where the embedded browser is provided as an actual widget and, consequently, the bounds are updated automatically as the widget updates. Here with CEF we have to do all this ourselves.

It's not hard to do, and it brings extra control for greater flexibility, but it also hints at why so much boilerplate code is needed.

The browser lives on until the app calls CefBrowserHost::CloseBrowser() in the event that the actual Gtk window containing it is deleted.

We already talked about the native controls in the window and the fact that we can enter a URL, as well as being able to navigate forwards and backwards through the browser history. For this functionality we use the CefBrowser object.

We can see this at work in the same file. Did I mention that this file is where most of the action happens? That's because this is the file that handles the Gtk window and all of the interactions with it.

When creating the window we set up a generic RootWindowGtk::NotifyButtonClicked() callback to handle interactions with the native Gtk widgets. Inside this we find some code to get our CefBrowser instance and call one of the navigation functions on it. The choice of which to call depends on the button that was pressed by the user:
  CefRefPtr<CefBrowser> browser = GetBrowser();
  if (!browser.get()) {
    return;
  }

  switch (id) {
    case IDC_NAV_BACK:
      browser->GoBack();
      break;
    case IDC_NAV_FORWARD:
      browser->GoForward();
  [...]
Earlier we mentioned that we also have this special execute JavaScript button. I've hooked this up slightly differently, so that it has its own callback for when clicked.

The format is similar, but when clicked it extracts the main frame from the browser in the form of a CefFrame instance and calls the CefFrame::ExecuteJavaScript() method on this instead. Like this:
void RootWindowGtk::DomWalkButtonClicked(GtkButton* button,
                                        RootWindowGtk* self) {
  CefRefPtr<CefBrowser> browser = self->GetBrowser();
  if (browser.get()) {
    CefRefPtr<CefFrame> frame = browser->GetMainFrame();

    frame->ExecuteJavaScript(self->dom_walk_js_, "", 0);
  }
}
The dom_walk_js_ member is just a string buffer containing the contents of our JavaScript file (which I load at app start up). As the method name implies, calling ExecuteJavaScript() will immediately execute the provided JavaScript code in the view's DOM context; the script URL and line number parameters are used when reporting errors.
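
Here's a minimal sketch of how such a load might be done using just the standard library; the path and function name are illustrative rather than copied from the example app:
#include <fstream>
#include <sstream>
#include <string>

// Read the whole JavaScript file into a string, e.g. to populate dom_walk_js_.
std::string LoadScript(const std::string& path) {
  std::ifstream file(path);
  std::stringstream buffer;
  buffer << file.rdbuf();
  return buffer.str();
}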

There are similar methods for executing JavaScript available in Qt WebEngine and the Gecko WebView as well. As we'll see, what makes the CEF version different is that it doesn't block the user interface thread during execution and doesn't return a value. But as we discussed above, we want to return a value, because otherwise how are we going to display the number of nodes, tree height and tree breadth in the user interface?

This is where that mysterious dom_walk() method that I mentioned earlier comes in. We're going to create this method on the C++ side so that when the JavaScript code calls it, it'll execute some C++ code rather than some JavaScript code.

We do this by extending the CefV8Handler class and overriding its CefV8Handler::Execute() method with the following code:
bool DomWalkHandler::Execute(const CefString& name,
           CefRefPtr<CefV8Value> object,
           const CefV8ValueList& arguments,
           CefRefPtr<CefV8Value>& retval,
           CefString& exception) {
  if (!arguments.empty()) {
    // Create the message object.
    CefRefPtr<CefProcessMessage> msg = CefProcessMessage::Create("dom_walk");

    // Retrieve the argument list object.
    CefRefPtr<CefListValue> args = msg->GetArgumentList();

    // Populate the argument values.
    args->SetString(0, arguments[0]->GetStringValue());

    // Send the process message to the main frame in the render process.
    // Use PID_BROWSER instead when sending a message to the browser process.
    browser->GetMainFrame()->SendProcessMessage(PID_BROWSER, msg);
  }
  return true;
}
This code is going to execute on the render thread, so we still need to get our result to the user interface thread. I say "thread", but it could even be a different process. So this is where the SendProcessMessage() call at the end of this code snippet comes in. The purpose of this is to create a message with a payload made up of the arguments passed in to the dom_walk() method (which, if you'll recall, is a stringified JSON structure). We then send this as a message to the browser process.

In JavaScript functions are just like any other value, so to get our new function into the DOM context all we need to do is create a CefV8Value object, which is the C++ name for a JavaScript value, and pass it in to the global context for the browser. We do this when the JavaScript context is created like so:
  void OnContextCreated(CefRefPtr<ClientAppRenderer> app,
                        CefRefPtr<CefBrowser> browser,
                        CefRefPtr<CefFrame> frame,
                        CefRefPtr<CefV8Context> context) override {
    message_router_->OnContextCreated(browser, frame, context);
    if (!dom_walk_handler) {
      dom_walk_handler = new DomWalkHandler(browser);
    }

    CefRefPtr<CefV8Context> v8_context = frame->GetV8Context();
    if (v8_context.get() && v8_context->Enter()) {
      CefRefPtr<CefV8Value> global = v8_context->GetGlobal();
      CefRefPtr<CefV8Value> dom_walk = CefV8Value::CreateFunction(
          "dom_walk", dom_walk_handler);
      global->SetValue("dom_walk", dom_walk, V8_PROPERTY_ATTRIBUTE_READONLY);

      CefV8ValueList args;
      dom_walk->ExecuteFunction(global, args);

      v8_context->Exit();
    }
  }
Finally in our browser thread we set up a message handler to listen for when the dom_walk message is received from the render thread.
  if (message_name == "dom_walk") {
    if (delegate_) {
      delegate_->OnSetDomWalkResult(message->GetArgumentList()->GetString(0));
    }
  }
Back in our root_window_gtk.cc file is the implementation of OnSetDomWalkResult() which takes the string passed to it, parses it and displays the content in our info bar at the bottom of the window:
void RootWindowGtk::OnSetDomWalkResult(const std::string& result) {
  CefRefPtr<CefValue> parsed = CefParseJSON(result,
    JSON_PARSER_ALLOW_TRAILING_COMMAS);

  int nodes = parsed->GetDictionary()->GetInt("nodes");
  int maxdepth = parsed->GetDictionary()->GetInt("maxdepth");
  int maxbreadth = parsed->GetDictionary()->GetInt("maxbreadth");

  gchar* nodes_str = g_strdup_printf("Node count: %d", nodes);
  gtk_label_set_text(GTK_LABEL(count_label_), nodes_str);
  g_free(nodes_str);

  gchar* maxdepth_str = g_strdup_printf("DOM height: %d", maxdepth);
  gtk_label_set_text(GTK_LABEL(height_label_), maxdepth_str);
  g_free(maxdepth_str);

  gchar* maxbreadth_str = g_strdup_printf("DOM width: %d", maxbreadth);
  gtk_label_set_text(GTK_LABEL(width_label_), maxbreadth_str);
  g_free(maxbreadth_str);
}
As you can see, most of this final piece of the puzzle is just calling the Gtk code needed to update the user interface.

So now we've gone full circle: the user interface thread executes some JavaScript code on the render thread in the view's DOM context. This then calls a C++ method also on the render thread, which sends a message to the user interface thread, which updates the widgets to show the result.

All of the individual steps make sense in their own way, but it is, if I'm honest, a bit convoluted. I can fully understand that message passing is needed between the different threads, but it would have been nice to be able to send the message directly from the JavaScript. Although there are constraints that apply here for security reasons, the Qt WebEngine and Gecko WebView equivalents both abstract these steps away from the developer, which makes life a lot easier.

With all of this hooked up, pressing the execute JavaScript button now has the desired effect.
 
Similar to the previous image showing an application with whatsmybrowser.org rendered in the window. Now the elements of the Web page are bordered in red. Along the bottom toolbar the node count now shows 421, the DOM height shows 13 and the DOM width shows 88.

The CEF project works hard to make Blink accessible as an embedded browser, but there's still plenty of complexity to contend with. Given just the few pieces we've covered here — lifecycle, navigation, JavaScript execution and message passing — you'll likely be able to do the majority of things you might want with an embedded browser. Crucially, you can integrate the browser component seamlessly with the rest of your application.

It's powerful stuff, but it's also true to say that the other approaches I tried out managed to hide this complexity a little better. The main reason for this seems to be that CEF doesn't target any particular widget toolkit. It can, in theory, be integrated with any toolkit, whether it be on Linux, Windows or macOS.

While that flexibility comes at a cost in terms of complexity, that hasn't stopped CEF becoming popular. It's widely used by both open source and commercial software, including the Steam client and Spotify desktop app.

In the next section we'll look at the Qt WebEngine, which provides an alternative way to embed the Blink rendering engine into your application.

Qt WebEngine

In the last section we looked at CEF for embedding Blink into an application with minimal restrictions on the choice of GUI framework. We'll follow a similar approach as we investigate Qt WebEngine: first looking at the API, then seeing how we can apply it in practice.

Although both uses Blink, there are other important differences between the two. First, Qt WebEngine is tied to Qt. That means that all of the classes we'll look at bar one will inherit from QObject and the main user interface class will inherit from QWidget (which itself is a descendant of QObject).

While Qt is largely written in C++ and targets C++ applications, we'll also make use of QML for our example code. This will make the presentation easier, but in practice we could achieve exactly the same results using pure C++. We'd just end up with a bit more code.
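
To give a flavour of the pure C++ route, here's a minimal hedged sketch. It assumes a Qt build with the WebEngine widgets module linked in (QT += webenginewidgets) and isn't taken from the example repository; the JavaScript snippet is just a placeholder:
#include <QApplication>
#include <QDebug>
#include <QUrl>
#include <QVariant>
#include <QWebEngineView>

int main(int argc, char *argv[]) {
  QApplication app(argc, argv);

  QWebEngineView view;
  view.load(QUrl("https://www.whatsmybrowser.org"));
  view.resize(1024, 768);
  view.show();

  // Once the page has loaded, run some JavaScript in its DOM context and
  // receive the result back via the callback on the user interface thread.
  QObject::connect(&view, &QWebEngineView::loadFinished, [&view](bool ok) {
    if (ok) {
      view.page()->runJavaScript(
          "document.getElementsByTagName('*').length",
          [](const QVariant &result) {
            qDebug() << "Node count:" << result.toInt();
          });
    }
  });

  return app.exec();
}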

So, with all that in mind, let's get to it.

The fact that Qt WebEngine exclusively targets Qt applications does make things a little simpler, both for the Qt WebEngine implementation and for our use of it. Consequently we can focus on just two classes. In practice there are many more classes that make up the API, but many of these have quite specific uses (such as interacting with items in the navigation history, or handling HTTPS certificates). All useful stuff for sure, but our aim here is just to give a flavour.

The two classes we're going to look at are QWebEnginePage and QWebEngineView. Here's an abridged version of the former:
class QWebEnginePage : public QObject
{
  Q_PROPERTY(QUrl requestedUrl...)
  Q_PROPERTY(qreal zoomFactor...)
  Q_PROPERTY(QString title..)
  Q_PROPERTY(QUrl url READ...)
  Q_PROPERTY(bool loading...)
  [...]

public:
  explicit QWebEnginePage(QObject *parent);

  virtual void triggerAction(WebAction action, bool checked);

  void findText(const QString &subString, FindFlags options,
    const std::function<void(const QWebEngineFindTextResult &)>
    &resultCallback);
  void load(const QUrl &url);
  void download(const QUrl &url, const QString &filename);
  void runJavaScript(const QString &scriptSource,
    const std::function<void(const QVariant &)> &resultCallback);
  void fullScreenRequested(QWebEngineFullScreenRequest fullScreenRequest);
  [...]
};
If you're not familiar with Qt those Q_PROPERTY macros at the top of the class may be a bit confusing. These introduce scaffolding for setters and getters of a named class variable. The developer still has to define and implement the setter and getter methods in the class. However properties come with a signal method which other methods can connect to. When the value of a property changes, the connected method is called, allowing for immediate reactions to be coded in whenever the property updates.
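
As a hedged illustration (this class isn't part of Qt WebEngine), a property declaration ties together a getter, a setter and a notify signal like this:
#include <QObject>
#include <QString>

class PageInfo : public QObject {
  Q_OBJECT
  Q_PROPERTY(QString title READ title WRITE setTitle NOTIFY titleChanged)

public:
  explicit PageInfo(QObject *parent = nullptr) : QObject(parent) {}

  QString title() const { return m_title; }
  void setTitle(const QString &title) {
    if (title == m_title)
      return;
    m_title = title;
    emit titleChanged(m_title);  // anything connected to the signal reacts
  }

signals:
  void titleChanged(const QString &title);

private:
  QString m_title;
};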

According to the Qt documentation, the QWebEnginePage class...
 
holds the contents of an HTML document, the history of navigated links, and actions.

That's reflected in the methods and member variables I've pulled out here. The title, url and load status of the page are all exposed by this class and it also allows us to search the page. The reference to actions in the documentation relates to the triggerAction() method. There are numerous types of WebAction that can be passed in to this. Things like Forward, Back, Reload, Copy, SavePage and so on.

You'll also notice there's a runJavaScript() method. If you've already read through the section on CEF you should have a pretty good idea about how we're planning to make use of this, but we'll talk in more detail about that later.

The other key class is QWebEngineView. This inherits from QWidget, which means we can actually embed this object in our window. It's the class that actually gets added to the user interface. It's therefore also the route through which we can interact with the QWebEnginePage page object that it holds.
class QWebEngineView : public QWidget
{
  Q_PROPERTY(QString title...)
  Q_PROPERTY(QUrl url...)
  Q_PROPERTY(QString selectedText...)
  Q_PROPERTY(bool hasSelection...)
  Q_PROPERTY(qreal zoomFactor...)
  [...]

public:
  explicit QWebEngineView(QWidget *parent);
  QWebEnginePage *page() const;
  void setPage(QWebEnginePage *page);

  void load(const QUrl &url);
  void findText(const QString &subString, FindFlags options,
    const std::function<void(const QWebEngineFindTextResult &)> 
    &resultCallback);
  QWebEngineSettings *settings() const;
  void printToPdf(const QString &filePath, const QPageLayout &layout,
    const QPageRanges &ranges);
  [...]

public slots:
  void stop();
  void back();
  void forward();
  void reload();
  [...]
};
Notice the interface includes a setter and getter for the QWebEnginePage object. Some of the page functionality is duplicated (for convenience as far as I can tell). We can also get access to the QWebEngineSettings object through this view, which allows us to configure similar browser settings to those we might find on the settings page of an actual browser.

There are also convenience slots for navigation (a slot is just a method that can be either called directly, or connected up to one of the signals I mentioned earlier).

And with these few classes we have what we need to create ourselves an example application. I've created something equivalent to our CEF example application described in the previous section, called WebEngineTest; all of the code for it is available on GitHub, but I'm also going to walk us through the most important parts here.
 
An application window containing the same elements as before: a toolbar at the top, an info bar at the bottom and between the two the whatsmybrowser.org page is rendered.

If you looked at the sprawling CEF code, you may be surprised to see how simple the Qt WebEngine equivalent is. The majority of what we need is encapsulated in this short snippet of QML code copied from the Main.qml file.
Column {
  anchors.fill: parent

  NavBar {
    id: toolbar
    webview: webview
    width: parent.width
  }

  WebEngineView {
    id: webview
    width: parent.width
    height: parent.height - toolbar.height - infobar.height
    url: "https://www.whatsmybrowser.org"
    onUrlChanged: toolbar.urltext.text = url
    settings.javascriptEnabled: true

    function getInfo() {
      runJavaScript(domwalk, function(result) {
        infobar.dominfo = JSON.parse(result);
      });
    }
  }

  InfoBar {
    id: infobar
  }
}
This column fills the entire application window and essentially makes up the complete user interface for our application. The column contains three rows. At the top and bottom are a NavBar widget and an InfoBar widget respectively. Nestled between the two is a WebEngineView component, which is an instance of the class with the same name that we described above.

We'll take a look at the NavBar and InfoBar shortly, but let's first concentrate on the WebEngineView. It has a width set to match the width of the page and a height set to match the page height minus the size of the other widgets. We set the initial page to load and set JavaScript to be enabled, like so:
settings.javascriptEnabled: true
As it happens JavaScript is enabled by default, so this line is redundant. It's there as a demonstration of how we can interact with the elements inside the QWebEngineSettings component we saw above.

Then there's the getInfo() method that executes the following:
  runJavaScript(domwalk, function(result) {
    infobar.dominfo = JSON.parse(result);
  });
These three lines of code are performing all of the complex message passing steps that we described at length for the CEF example. We call runJavaScript() which is provided by the QML interface as a shortcut to the method from QWebEnginePage.

The method takes the JavaScript script to execute — as a string — for its first parameter and a callback that's called on completion of execution for the second parameter. Internally this is actually doing something very similar to the CEF code we saw above: it passes the code to the V8 JavaScript engine to execute inside the DOM, then waits on a message to return with the results of the call.

In our callback we simply copy the returned data into the infobar.dominfo variable, which is used to populate the widgets along the bottom of the screen.

It all looks very clean and simple. But there is some machinery needed in the background to make it all hang together. First, you may have noticed that for our script we simply pass in a domwalk variable. We set this up in the main.cpp file (which is the entrypoint of our application). There you'll see some code that looks like this:
  QString domwalk;
  QFile file(":/js/DomWalk.js");
  if (file.open(QIODevice::ReadOnly)) {
    domwalk = file.readAll();
    file.close();
  }
  engine.rootContext()->setContextProperty("domwalk", domwalk);
This is C++ code that simply loads the file from disk, stores it in the domwalk string and then adds the domwalk variable to the QML context. Doing this essentially makes domwalk globally accessible in all the QML code. If we were writing a larger more complex application we might approach this differently, but it's fine here for the purposes of demonstration.

Next up, let's take a look at the NavBar.qml file. QML automatically creates a widget named NavBar based on the name of the file, which we saw in use above as part of the main page.
Row {
  height: 48
  property WebEngineView webview
  property alias urltext: urltext

  NavButton {
    icon.source: "../icons/back.png"
    onClicked: webview.goBack()
    enabled: webview.canGoBack
  }

  NavButton {
    icon.source: "../icons/forward.png"
    onClicked: webview.goForward()
    enabled: webview.canGoForward
  }

  NavButton {
    icon.source: "../icons/execute.png"
    onClicked: webview.getInfo()
  }

  Item {
    width: 8
    height: parent.height
  }

  TextField {
    id: urltext
    y: (parent.height - height) / 2
    text: webview.url
    width: parent.width - (parent.height * 3.6) - 16
    color: palette.windowText
    onAccepted: webview.url = text
  }
}
As we can see, the toolbar is a row of five widgets. Three buttons, a spacer and the URL text field. The first two buttons simply call the goBack() and goForward() methods on our QWebEngineView class. The only other thing of note is that we also enable or disable the buttons based on the status of the canGoBack and canGoForward properties. This is where the signals we discussed earlier come in: when these variables change, they will output signals which are bound to these properties so that the change is propagated throughout the user interface. That's a Qt thing and it works really nicely for user interface development.

Finally for the toolbar, the third button simply calls the getInfo() method that we created as part of the definition of our WebEngineView widget above. We already know what this does, but just to recap, this will execute the domwalk JavaScript inside the DOM context and store the result in the infobar.dominfo variable.

The NavButton component type used here is just a simple wrapper around the QML Button component.

Now let's look at the code in the InfoBar.qml file:
Row {
  height: 32
  anchors.horizontalCenter: parent.horizontalCenter

  property var dominfo: {
    "nodes": 0,
    "maxdepth": 0,
    "maxbreadth": 0
  }

  InfoText {
    text: qsTr("Node count: %1").arg(dominfo.nodes)
  }

  InfoText {
    text: qsTr("DOM height: %1").arg(dominfo.maxdepth)
  }

  InfoText {
    text: qsTr("DOM width: %1").arg(dominfo.maxbreadth)
  }
}
The overall structure here is similar: it's a row of widgets, in this case three of them, each a text label. The InfoText component is another simple wrapper, this time around a QML Text widget. As you can see the details shown in each text label are pulled in from the dominfo variable. Recall that when the domwalk JavaScript code completes execution, the callback will store the resulting structure into the dominfo variable we see here. This will cause the nodes, maxdepth and maxbreadth fields to be updated, which will in turn cause the labels to be updated as well.

And it works too. Clicking the execute JavaScript button will paint the elements of the page with a red border and display the stats in the infobar, just as happened with our CEF example:
 
Similar to the last image, but now the items rendered on the page are outlined with a red border; along the bottom of the page the text reads: node count 422, DOM height 13, DOM width 89.

When I first started using QML this automatic updating of the fields felt counter-intuitive. In most programming languages if an expression includes a variable, the value at the point of assignment is used and the expression isn't reevaluated if the variable changes value. In QML if a variable is defined using a colon : (as opposed to an equals =) symbol, it will be bound to the variables in the expression and updated if they change. This is what's happening here: when the dominfo variable is updated, all of its dependent bound variables will be updated too. All made possible using the magical signals from earlier.

Other user interface frameworks (Svelte springs to mind) have this feature as well; when used effectively it can make for super-simple and clean code.

There's just one last piece of the puzzle, which is the domwalk code itself. I'm not going to list it here, because it's practically identical to the code we used for the CEF example, which is listed above. The only difference is the way we return the result back at the end. You can check out the DomWalk.js source file if you'd like to compare.

And that's it. This is far simpler than the code needed for CEF, although admittedly the CEF code all made perfect sense. Unlike CEF, Qt WebEngine is only intended for use with Qt. This fact, combined with the somewhat less verbose syntax of QML compared to C++, is what makes the Qt version so much more concise.

In both cases the underlying Web rendering and JavaScript execution engines are the same: Blink and V8 respectively. It's only the way the Chromium API is exposed that differs.

Let's now move on to the Sailfish WebView, which has a similar interface to Qt WebEngine but uses a different engine in the background.

Sailfish WebView

The Sailfish WebView differs from our previous two examples in some important respects. First and foremost it's designed for use on Sailfish OS, a mobile Linux-based operating system. The Sailfish WebView won't run on other Linux distributions, mainly because Sailfish OS uses a bespoke user interface toolkit called Silica, which is built on top of Qt, but which is specifically targeted at mobile use.

Although the Sailfish WebView may therefore not be so useful outside of Sailfish OS, it's Sailfish OS that drives my interest in mobile browsers. So from my point of view it's very natural for me to include it here.

Since it's built using Qt and is exposed as a QML widget, the Sailfish WebView has many similarities with Qt WebEngine. In fact the Qt WebEngine API is itself a successor of the Qt WebKit API, which was previously available on Sailfish OS and which the Sailfish WebView was developed as a replacement for.

So expect similarities. However there's also one crucial difference between the two: whereas the Qt WebEngine is built using Blink and V8, the Sailfish WebView is built using Gecko and SpiderMonkey. So in the background they're making use of completely different Web rendering engines.

Like the other two examples, all of the code is available on GitHub. The repository structure is a little different, mostly because Sailfish OS has its own build engine that's designed for generating RPM packages. Although the directory structure differs, for the parts that interest us you'll find all of the same source files across both the WebEngineTest and harbour-webviewtest repositories.
 
A screenshot of Sailfish OS running the harbour-webviewtest application. The screen shows the same elements as before: a toolbar at the top, an info bar at the bottom and between the two the whatsmybrowser.org page is rendered.

Looking first at the Main.qml file there are only a few differences between this and the equivalent file in the WebEngineTest repository.
Column {
  anchors.fill: parent

  NavBar {
    id: toolbar
    webview: webview
    width: parent.width
  }

  WebView {
    id: webview
    width: parent.width
    height: parent.height - (2 * Theme.iconSizeLarge)
    url: "https://www.whatsmybrowser.org/"
    onUrlChanged: toolbar.urltext.text = url
    Component.onCompleted: {
      WebEngineSettings.javascriptEnabled = true
    }

    function getInfo() {
      runJavaScript(domwalk, function(result) {
        infobar.dominfo = JSON.parse(result);
      });
    }
  }

  InfoBar {
    id: infobar
    width: parent.width
  }
}
The main difference is the way the settings are accessed and set. Whereas the settings property could be accessed directly as a WebEngineView property, here we have to access the settings through the WebEngineSettings singleton object. Otherwise this initial page is the same and the approach to calling JavaScript in the DOM is also identical.

The DomWalk.js code is also practically identical. One difference is that we don't use structuredClone() because the engine version is slightly older. We use a trick of converting to JSON and back instead to achieve the same result. This JavaScript is loaded in the main.cpp file in the same way as for WebEngineTest.

The NavBar.qml and InfoBar.qml files are to all intents and purposes identical, so I won't copy the code out here.

And that's it. Once again, it's a pretty clean and simple implementation. It demonstrates execution of JavaScript within the DOM that's able to manipulate elements and read data from them. It also shows data being passed back from Gecko to the native app that wraps it.
 
Similar to the last screenshot, but now the items rendered on the page are outlined with a red border; along the bottom of the page the text reads: Nodes 421, Height 13, Width 88.

Although the Sailfish WebView uses Gecko, from the point of view of the developer and the end user there's no real difference between the API offered by Qt WebEngine and that offered by the Sailfish WebView.

For Sailfish OS users it's natural to ask whether it makes sense to continue using Gecko, rather than switching to Blink. I'm hoping this investigation will help provide an answer, but right now I just want to reflect on the fact there's very little difference from the perspective of the API consumer.

Wrap-up

We've seen three different embedding APIs. CEF uses Chromium and in particular Blink and V8 for its rendering and JavaScript engines respectively. CEF isn't aimed at any particular platform or user interface toolkit and consequently writing an application that uses it requires considerably more boilerplate than the Qt WebEngine or Sailfish OS WebView approaches. The latter two both build on the Qt toolkit and it wouldn't make sense to use them with something like Gtk.

While CEF and Qt WebEngine share the same rendering backend, their APIs are quite different when it comes to the specifics, even though the functionality each exposes is broadly similar.

The Sailfish WebView on the other hand uses completely different engines — Gecko and SpiderMonkey — and yet, in spite of this the WebView API is really very similar to the Qt WebEngine API.

So as a developer, why choose one of them over the other? When it comes to the WebEngine and the WebView the answer is happily straightforward: since they support different, non-overlapping platforms, if you're using Sailfish OS consider using the WebView; if you're not it's the WebEngine you should look at.

To wrap things up, let's consider the core functionalities the APIs provide. Although each has its own quirks, fundamentally they offer something similar:
  1. Rendering of Web content. As an embedded browser, the content can be rendered in a window in amongst the other widgets of your application.
  2. Navigation. The usual Web navigation functionalities can be controlled programmatically. This includes setting the URL, moving through the history and so on.
  3. Settings. The browser settings can be set and controlled programmatically.
  4. Access to other browser features. This includes profiles, password management, pop-up controls, downloading content and so on. All of these are also accessible via the API.
  5. JavaScript execution. JavaScript can be executed in the DOM and in different contexts, including with privileged access to all of the engine's functionalities.
  6. Message passing. Messages can be sent from the managing application to the renderer and in the other direction, allowing fine-grained integration of the two.
Based on my experiences using them, this list captures the main features that all three of the embedded browsers offer. And in practice, there's not much more you could ask for. With these capabilities the browser engine can be integrated seamlessly into another application, which is exactly what you want from an embedded browser.

At the start I said I'd consider whether Gecko is still appropriate for use by Sailfish OS for its embedded browser. This is an important question that we're now closer to having a clearer answer to. I'll take a look at this in more detail in a future post.
13 Nov 2024 : Templating in C #
Many years ago — probably some time around 2006 — I was working as a researcher at Liverpool John Moores University and we had an external speaker come to talk to our students about C++ coding.

It's so long ago that I don't recall the speaker's name, but they presented very convincingly about the benefits of using templates and their extensive and effective use as part of the C++ standard library. Or possibly Boost.

Either way, I recall the focus on templates and how powerful they could be. Although I was familiar with templates, I'd never had them explained to me in quite such a vivid and uncomplicated way.

Still, with this new-found clarity I had some doubts. "What do templates give you that you can't already do with C pre-processor macros?" I wondered. As we travelled down in the lift together following the presentation, I remember asking the presenter this same question. They said something about type safety, but the journey in the lift wasn't long enough to go into detail and I remained unconvinced. As is so often the case, I assumed my confusion must be grounded in a lack of knowledge on my part.

Twenty years later a friend and I were discussing the benefits of C++ over C. We both agree that the lack of destructors in C is one of its biggest omissions, something that it's hard to work around. You can create your own destructor, but how do you get it called automatically when a variable goes out of scope? My friend then suggested that he particularly likes C++ support for vectors.
 
"I appreciate the simplicity of C++'s vector (for example). You probably can do something similar in C but won't it be more involved to get it working for all types for example?"

There's no doubt, vectors are a nice feature of the C++ standard library. But I felt there was scope to achieve something similar in C. In doing so you might run up against the same lack of support for destructors, but we'd already covered that. Otherwise vectors felt like something a good C library should be able to provide, not something that relies on any intrinsic capability of the language.

I'd always regretted not pursuing my 20-year-old macro-template question to the point of implementation, so I saw my opportunity to try now. What's needed to implement a nice templated vector class in C?

You can see the code we came up with on GitHub. It's not intended as a complete implementation; more a demonstration of what might be possible. It provides two constructs: a vector template and an Example class to test it with. This is C, so the former isn't actually a template and the latter isn't actually a class. Instead we've used pre-processor macros for the former and conventions based on an object-oriented approach for the latter.

Throughout the rest of this post I'll talk about them as templates and classes anyway, because conceptually that's what we're aiming for.

Before getting in to how the template is implemented, let's first take a look at the Example class. We can see the struct that collects together the "member variables" and the functions that provide the "methods" alongside one another in the example-class.h file.

Here are the members:
struct _Example {
  uint64_t length;
  char * string;
};
As you can see the class holds a length to indicate the length of the string and a dynamic array of char instances for the string data. You can imagine that this is a much simplified version of the standard library's string class.

For the methods we have a bunch of constructors and destructors:
void Example_construct(Example *data);
void Example_construct_init(Example *data, char const * const string);
void Example_destruct(Example *data);
Similar to C++ we have a default constructor and an "overload" that accepts initialisation parameters. It's not an overload at all of course because C doesn't support multiple functions using the same name. Instead we just name the two constructors differently, but with the same prefix. Under the hood this is what C++ is doing as well through name mangling, it's just hidden from the programmer.

In practice this will turn out to be fine, because code that wants to automatically call the constructor is likely to want the default constructor anyway.

If you check inside the implementations for these methods you'll notice that the constructors don't allocate memory for the struct and the destructor doesn't free it, they only allocate and free for the member variables. You might reflect that the same is true for constructors and destructors in C++. But this isn't just a case of copying C++, it turns out to be necessary for our implementation, especially for handling objects rather than references.

I think it's interesting to note that, in doing this task, we end up making all the same decisions as were made decades ago for C++. It makes it a useful learning exercise for me.

We'll come back to this later, but it means we can also create some new and delete methods for ourselves. We get these for free in C++:
Example *example_new();
Example *example_new_init(char const * const string);
Example *example_delete(Example *data);
These two new methods do allocate memory for the object data, before calling the constructor. Contrariwise the delete method calls the destructor and then frees the allocated memory. I've not looked at the C++ source code for creation and deletion, but I imagine it does something similar.
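
As a rough illustration, here's a minimal sketch of how these two might be implemented on top of the constructor and destructor above; the real code in the repository may differ in the details (whether the memory gets zeroed on allocation, for example):
#include <stdlib.h>

/* Allocate the struct, then run the default constructor on it; zeroing the
   memory here is a choice rather than a requirement. */
Example *example_new() {
  Example *data = calloc(1, sizeof(Example));
  if (data) {
    Example_construct(data);
  }
  return data;
}

/* Run the destructor, then free the struct. */
Example *example_delete(Example *data) {
  if (data) {
    Example_destruct(data);
    free(data);
  }
  return NULL;
}
Returning a pointer from the delete call is presumably so that you can write data = example_delete(data); and avoid leaving a stale pointer lying around.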

Finally we have a bunch of methods for manipulating the data held by the object. The first is used to populate the string using a given format, similar to sprintf(). The second just dumps out the contents of the string to the console, prefixed by the length. As the name implies, this second method is really only intended for debugging purposes.
void example_sprintf(Example *data, char const * format, ...);
void example_debug_print(Example *data);
If this were a proper string implementation we'd want a lot more functionality for accessing and manipulating the string. But this simple example is enough for our purposes.

If you look in the main() method in main.c you'll see an example of its use:
  Example_construct_init(&example, "Hello World!");
  example_debug_print(&example);
Now as the name suggests this is just an example class, but it already demonstrates the foundations of our class-based approach to C. Apart from the new methods, every one of our functions accepts a pointer to an Example as its first parameter. This is the equivalent of the class object this in C++. Every class has a constructor and a destructor. Following the same conventions for all other struct implementations will make our C code safer and more robust, and will encourage increased separation between classes.
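
To make the convention concrete, here's a hedged sketch of the full lifecycle of a stack-allocated Example following the declarations above; the actual main.c in the repository does more than this and may differ in the details:
int main(void) {
  Example example;

  /* Construct in place, use the object, then destruct it explicitly before
     it goes out of scope; nothing happens automatically in C. */
  Example_construct_init(&example, "Hello World!");
  example_debug_print(&example);
  Example_destruct(&example);

  return 0;
}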

There are many interesting ways to extend this, including with support for virtual methods and inheritance, all within the constraints of C, but those are topics for another day. If you'd like to see a nice object-oriented set of class implementations in C, I recommend taking a look at the GLib code. The GLib GString implementation is a much more feature-complete version of what we've got here.

This is all well and good, I hear you say, but what does it have to do with C templates? Okay, okay, it has nothing to do with them, but we will get there. Before we do we can make our lives easier by first considering how we might make a bespoke vector class just for use with Example objects.

If we check out an earlier version of the code in the repository we can see an example in the cvector.h and cvector.c files. I put these together so that I could understand what was needed prior to converting it into a template, so I think it'll be helpful to review the files before we move on.

The header shows a similar sort of structure to our Example class. Given what I said above about following similar conventions for all of our "classes" this won't come as a great shock. We start with the structure for holding the member variables:
typedef struct _Vector Vector;
Unlike our Example class, in this case we're keeping the actual implementation opaque because we don't need to know its size for use elsewhere.

Next we define the default constructor and destructor, alongside respective new and delete methods.
void vector_construct(Vector *data);
Vector *vector_new();
void vector_destruct(Vector * data);
Vector *vector_delete(Vector *data);
Finally we have a bunch of class methods that are unique to this class and which provide all of the real functionality:
Example vector_get(Vector *data, uint64_t position);
void vector_push(Vector *data, Example example);
Example vector_pop(Vector *data);
uint64_t vector_length(Vector *data);

void vector_resize(Vector *data, uint64_t required);

void vector_debug_print_space(Vector *data);
We have a method for getting items from the vector using random access, a method for pushing items to the end of the vector, a method for popping items from the end of the vector and a method that returns the number of items in the vector. The resize method is for internal use (I shouldn't really have included it in the header) and there's also a debug method for printing out some info about the contents of the vector.

As before, in each case the data parameter can be considered equivalent to this in C++ (or self in other languages).

One important thing to note about these methods is that the Example parameter of vector_push() and the return value of vector_pop() are both values rather than pointers or references. That's intentional, because our vector doesn't just store pointers, we want it to be able to store values as well. That's to reflect the same situation as a C++ vector, which can also store values. Pointers all have the same size (64 bits on a 64 bit machine), so if we store only pointers the stride of the vector is always going to be the same. That's a bit dull. We want to support vectors that handle strides of different lengths, both larger (e.g. structs containing lots of data) and smaller (e.g. chars) than would be typically needed for pointers.

The downside is that calling the vector_push() method will potentially result in a large memory copy. A constant reference would be nice here, but we don't have references in C. If this were a real library I might have gone for a constant pointer as a compromise, but for the sake of this exercise and to keep things less confusing I'm sticking with passing by value.
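
For comparison, the constant-pointer compromise mentioned above might look something like the following; this is a hypothetical signature rather than what's in the repository:
/* Hypothetical alternative: avoids copying the whole struct at the call
   site, at the cost of an extra level of indirection for the caller. */
void vector_push(Vector *data, Example const *example);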

Let's take a look at the implementation, starting with the all-important data structure.
struct _Vector {
  uint64_t space;
  uint64_t count;

  Example *array;
};
Here we store an array of Example elements. We use a pointer rather than an actual array because we want the size to be dynamic, but by giving it the Example type our stride will be sizeof(Example). We also store a count to represent the number of items in the vector and a space value which represents the size of the array.

The value of count and space can be different because we may want to allocate more space than we have elements. This will allow us to reallocate memory more judiciously, in our case controlled by the VECTOR_SPACE_STRIDE value. This represents how many items we allocate for at a time. I've set this to be eight, which means that the memory for the array will be allocated eight items at a time. Note that we must always have space ≥ count to avoid memory corruption.
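
To make the stride idea concrete, the space to reserve for a given count can be found by rounding up to the next multiple of VECTOR_SPACE_STRIDE. Here's a hedged sketch of that calculation; the helper name is hypothetical and the real vector_resize() may organise things differently:
#include <stdint.h>

#define VECTOR_SPACE_STRIDE 8

/* Hypothetical helper: round the required item count up to the next
   multiple of the stride. */
uint64_t vector_space_needed(uint64_t required) {
  return ((required + VECTOR_SPACE_STRIDE - 1) / VECTOR_SPACE_STRIDE)
    * VECTOR_SPACE_STRIDE;
}
So anywhere from one to eight items reserves space for eight, nine to sixteen reserves space for sixteen, and so on.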

Keeping the size of the array large enough is the job of the vector_resize() method. We pass in the number of items we need the array to accommodate (count) and it resizes the array to ensure it's large enough, potentially reducing its size if this is possible.

I'm going to skip over how vector_resize() works (it's an implementation detail) but it's helpful to see how things are working to some extent, so let's look at the vector_push() method:
void vector_push(Vector *data, Example example) {
  uint64_t count;

  count = data->count + 1;
  vector_resize(data, count);
  memcpy(&data->array[count - 1], &example, sizeof(Example));
  data->count = count;
}
The purpose of this method is to push an item to the end of the vector. We calculate the new count required, which is just the current count plus one (we're adding a single element to the end). We resize the array using vector_resize() to ensure we have the space for it. Then we copy the value from the passed Example structure into the memory that we now know is available in the array. Finally we set the count of our array to the new, incremented, value.

We're using memcpy to transfer the value. That's important. If this were C++ we might have invoked the class's copy assignment operator here, but this is C and we don't have one.

Actually that's not true. I could very well have created an Example_copy() method, equivalent to a copy assignment. This would make our vector more flexible at the expense of having to implement more methods on our Example structure. I skipped this to avoid complicating the implementation but it would be a very simple addition.
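
If we did want that flexibility, here's a minimal sketch of what such a copy method might look like, assuming the string member holds length characters plus a null terminator; the exact name and signature would depend on the conventions chosen and this isn't part of the repository:
#include <stdlib.h>
#include <string.h>

/* Hypothetical copy method: copies the contents of data into other, much
   like a C++ copy assignment. Assumes both objects have been constructed. */
Example *example_copy(Example const *data, Example *other) {
  free(other->string);
  other->string = NULL;
  other->length = 0;
  if (data->string) {
    other->string = malloc(data->length + 1);
    if (other->string) {
      memcpy(other->string, data->string, data->length + 1);
      other->length = data->length;
    }
  }
  return other;
}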

If you look through the full implementation of Vector you'll see that we reference Example as a type quite a few times (I count nine in total). We also reference the following two methods that apply to it explicitly:
  1. void example_construct(Example *data);
  2. void example_destruct(Example *data);
If we'd also defined a copy assignment operator we might also have added the following to this list:
  1. Example *example_copy(Example const *data, Example *other);
These methods are what we might call the interface of Example in C++, though I think they align more closely with the concept of a trait. They're the methods a type has to implement in order for our vector class to be able to support it.

They're a pretty minimal set of requirements and align nicely with the default methods we'd normally stick in a C++ class. One of the nice features about our templated version of our C vector is that the code won't compile if these methods aren't defined for our particular type: it'll be a compile-time error rather than a runtime error.

That's true for C++ as well, except that one of the benefits of C++ is that we get default versions of these in case we don't define them ourselves. For our C version we get nothing for free: if we don't define them they won't exist. That's arguably one of the benefits of C over C++: everything is explicit, so you always know what's going on. But obviously the downside is we need to write much more of the implementation ourselves.

Let's take stock. We have an Example class that's like a cut-down version of std::string and we have a Vector class that's like a cut-down version of std::vector. But our vector can only hold Example items; it's intrinsically restricted to supporting this one type.

The next step is to decouple them, which is where the templating comes in.

In order to turn our vector into a template vector class we need to make two changes. First we need to abstract out all of those references to the Example type. Second we have to abstract out the references to example_construct() and example_destruct().

The first change is the easier of the two. We're going to replace every use of Example in our vector code with a TYPE placeholder. Then we're going to allow TYPE to be changed at compile-time by making all of our code a pre-processor macro.

So this is what our struct becomes:
typedef struct _Vector {
  uint64_t space;
  uint64_t count;

  TYPE *array;
} Vector;
By way of example for the methods, this is what our vector_push() becomes:
void vector_push(Vector *data, TYPE item) {
  uint64_t count;

  count = data->count + 1;
  vector_resize(data, count);
  memcpy(&data->array[count - 1], &item, sizeof(TYPE));
  data->count = count;
}
Eventually we'll put these into a macro and we'll need to generate different versions for every TYPE we want to use. But let's not get ahead of ourselves just yet.

In C we can't override functions, so we're going to have to give each of our structures and functions a name that's unique to the type. To do this we're going to use some more macro magic, by concatenating the TYPE placeholder with each of the names, like so:
typedef struct _Vector_##TYPE {
  uint64_t space;
  uint64_t count;

  TYPE *array;
} Vector_##TYPE;
Now our vector implementation for the Example type will use a struct called Vector_Example rather than just Vector. We can do the same thing for our methods as well, like this:
void vector_push_##TYPE(Vector_##TYPE *data, TYPE item) {
  uint64_t count;

  count = data->count + 1;
  vector_resize_##TYPE(data, count);
  memcpy(&data->array[count - 1], &item, sizeof(TYPE));
  data->count = count;
}
So now our vector_push() method will actually take the name vector_push_Example(). If we were to create a vector that consumes a different type, say a Blob type, the names would become Vector_Blob and vector_push_Blob() respectively.

We don't want to have to remember to perform this name mangling ourselves every time, so we also create some pre-processor macros for the function names as well, like this:
#define vector_push(TYPE) vector_push_##TYPE
Now, if we want to call the vector_push() method that we've defined for the Example type, we can call it like this:
vector_push(Example)(vector, example);
The code with Example surrounded by parenthesis can be considered like the angle-bracket equivalent for templates in C++:
vector_push<Example>(vector, example);
We just have to use parentheses rather than angle brackets because C has no concept of the angle brackets as used for templates. The compiler would think they were inequalities.
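
Assuming similar convenience macros are defined for the other mangled names (the exact set in the repository may differ) and that the template has been instantiated for Example, as we'll see below, a complete use of the templated vector might then look something like this:
/* Hypothetical companions to the vector_push() macro above. */
#define Vector(TYPE) Vector_##TYPE
#define vector_new(TYPE) vector_new_##TYPE
#define vector_delete(TYPE) vector_delete_##TYPE

void demo() {
  Vector(Example) *vector = vector_new(Example)();
  Example example;

  Example_construct_init(&example, "Hello World!");

  /* The push copies the struct by value; since we've not defined a copy
     method, ownership of the string effectively moves into the vector. */
  vector_push(Example)(vector, example);

  /* Deleting the vector destructs each held Example, freeing the string. */
  vector = vector_delete(Example)(vector);
}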

Next we have to deal with the constructor and destructor calls. For example, when we call the destructor on our vector it's going to call the destructor on all of the items it's holding, like this:
void vector_destruct(Vector * data) {
  uint64_t pos;

  if (data->array) {
    for (pos = 0; pos < data->count; ++pos) {
      example_destruct(&data->array[pos]);
    }

    free(data->array);
    data->array = NULL;
  }
  data->space = 0;
  data->count = 0;
}
We need to fix the call to example_destruct() in the middle of that code so that it calls the destructor for the specific type held by the vector. We can do it like this:
void vector_destruct_##TYPE(Vector_##TYPE * data) {
  uint64_t pos;

  if (data->array) {
    for (pos = 0; pos < data->count; ++pos) {
      TYPE##_destruct(&data->array[pos]);
    }

    free(data->array);
    data->array = NULL;
  }
  data->space = 0;
  data->count = 0;
}
So now, in the case of our vector holding Example objects this will call Example_destruct(), whereas for our vector holding Blob objects it will call Blob_destruct(). Great!

The C pre-processor isn't powerful enough to manipulate parameters into lowercase, so in order to support this change, we have to capitalise the constructor and destructor methods for our Example class as well:
void Example_construct(Example *data);
void Example_destruct(Example *data);
It's a bit ugly like this in my opinion, but pragmatically it's the right thing to do. For example, it ensures we can support structs that share the same name apart from their capitalisation, just as we should.

Finally we need to actually wrap all of these changes up into a macro. That means we have to name the macro and add a backslash to the end of each line of our implementation. We end up with something that looks horrific, but is otherwise pretty clear and works as we expect:
#define VECTOR_TEMPLATE(TYPE) \
typedef struct _Vector_##TYPE { \
  uint64_t space; \
  uint64_t count; \
 \
  TYPE *array; \
} Vector_##TYPE; \
[...]
With all this in place, the only thing we now need to do is add a call to this macro at the top of our code to define the actual vector classes we want to use:
VECTOR_TEMPLATE(Example)
VECTOR_TEMPLATE(Blob)
[...]
And that's it! We now have a fully type-safe templated vector class written in C that doesn't require any C++ magic.

The nice thing about this is that it really is very similar to the C++ implementation. For example, as with C++ templates, all of the implementation code is now in the header file and only gets compiled in the place where the macro is instantiated. Similar to a C++ template, entirely new code is generated for every template instance that's defined. And if one of the classes we use in our template is missing a constructor or destructor method, the code will refuse to compile. No messy runtime failures due to a missing implementation.

C++ has become a wild and wonderful language, while C has remained astonishingly stable. Despite this it seems the ties between C and C++ remain surprisingly strong.

We didn't go through all of the code here, but I did try to include examples to cover all of the relevant aspects. Do check out the full C template implementation in the repository. Here's the result of running the code:
 
Console output with two panes showing the result of executing the code in the left pane and the code from cvector.h in the right pane. The execution ends with 30 calls to the destructor of the Example class when the vector is deleted.


If we can implement templates in C, the obvious follow-on question is why we need templates in C++ at all. After all, pre-processor macros are available in C++ as well.

Well, although using pre-processor macros this way allows us to get something remarkably close to C++ templates, there remain significant limitations.

The most obvious one is that writing the code is much harder with the C version. Using string concatenation works, but you have to be quite careful to get the code correct, for example with all of the function renaming. This is still needed with C++ templates, the difference is that the compiler does it all for you automatically when it mangles the function names.

Perhaps more importantly, templates are type-aware. That means that you can have template functionality that's dependent on type, using C++ traits, like this:
#include <type_traits>

template <typename TYPE>
void vector_destruct(Vector * data) {
  uint64_t pos;

  if (data->array) {
    if constexpr (std::is_object_v<TYPE>) {
      for (pos = 0; pos < data->count; ++pos) {
        destruct(&data->array[pos]);
      }
    }

    free(data->array);
    data->array = NULL;
  }
  data->space = 0;
  data->count = 0;
}
Sadly the C pre-processor simply isn't sophisticated enough to do anything like this. This also hints at other issues that we might experience with our C implementation. For example, everything works fine for vanilla types, but if we try to use pointers, or const pointers, or any datatype with a composite name, we're going to run into trouble. There's a solution, which is to make a typedef and use that instead, but it's an extra layer of abstraction and work. Likewise, if we want to use a datatype that has no constructor or destructor (for example if it's a fundamental type) then we'll have to define an empty constructor and an empty destructor in order to allow the code to compile. We can set these as being inline so that the compiler can optimise them away, but again, it's extra work.
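
By way of illustration, the two workarounds might look something like this; the type names and the extra methods here are hypothetical and the details would depend on exactly how the macro expects to call them:
/* A composite type name needs a typedef before it can be concatenated into
   identifiers; a vector of pointers doesn't own what it points at, so the
   item destructor is a no-op. */
typedef Example * ExamplePtr;
static inline void ExamplePtr_construct(ExamplePtr *data) { *data = NULL; }
static inline void ExamplePtr_destruct(ExamplePtr *data) { (void)data; }
VECTOR_TEMPLATE(ExamplePtr)

/* A fundamental type needs empty constructor and destructor functions so
   that the generated code has something to call. */
typedef int Int;
static inline void Int_construct(Int *data) { (void)data; }
static inline void Int_destruct(Int *data) { (void)data; }
VECTOR_TEMPLATE(Int)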

Using macros in place of templates this way isn't intended to be a serious tool. It might be useful under certain circumstances, but if you find yourself regularly using these kinds of constructs, it might be an indication that it's time to switch to C++. But thinking about how to implement templates in C can be a useful exercise in better understanding the underlying mechanisms of C++. At least, I certainly feel I've improved my understanding as a result.

Now that we have templates the next step is to automate all this using a tool to pre-process our source files. Perhaps our tool could also allow the developer to use angle brackets Vector<Example> instead of Vector(Example) parentheses? I can see this becoming really successful! I just have this nagging feeling... didn't someone already implement something like this?
6 Oct 2024 : Reviewing My Browser History #
For many years I thought it would be a mistake to mix my hobbies with my professional life. Blurring the two would prevent me from defining a clear boundary between my work time and my relaxation time. I thought it could also lead to things I enjoy becoming contaminated, irreversibly harming the joy I get from them. It's not that I didn't want to enjoy my work: quite the opposite in fact. I felt that in order to enjoy both I needed to maintain a separation.

As my life has progressed I've changed my opinion on this. It's great to separate work and play, but there's also immense joy to be had from doing something you love as a professional endeavour. Mixing the two together has the potential to amplify the joy from both.

Working for Jolla was what really brought this home to me. Smartphone development, user privacy and control, and Sailfish OS in particular were always part of the life I separated from my career. When I started working at Jolla I thought I was taking a risk. Would I lose my passion? Would I regret knowing what goes on inside the "sausage factory"?

My concerns were unfounded and, with this experience in hand, I now make it my aim to bring my personal passions into my professional life as well.

Now I'm at the Turing I'm no longer developing for Sailfish OS during work hours. As readers of my gecko dev diaries will know, upgrading the Sailfish browser has been one of my main activities outside of work. Finding opportunities to bring this Sailfish development into my professional world has been one of my objectives and recently just such an opportunity arose.

It's only a small overlap: in November I'll be giving a presentation about browsers at the Turing. The title of the talk will be "The Anatomy of a Browser: Embedded Mobile Lizards". Lizards being a reference to Gecko.

To help with this I've been digging a little into the history of browsers. They have a rich and often fractious history that I find fascinating, and one I want to talk a little more about today.

But to understand the history, we first need to understand a little about the internals of a browser.

Browser Internals

What are the various pieces that make up a browser? Broadly speaking we can see it as being made up of six parts:
  1. Protocol client (HTTP/S, WS/S, file, FTP,...).
  2. JavaScript engine.
  3. DOM - Document Object Model.
  4. Layout/rendering engine (HTML, CSS, SVG).
  5. Media encoder/decoder (JPEG, PNG, audio, video,...).
  6. User interface.
That's already quite a lot to think about, but each of these can be broken down into many more pieces. Let's look at them in a bit more detail.
 
A graph showing 11 nodes: Web server, JavaScript engine, Protocol client, Media encoder/decoder, Layout engine, DOM + scene graph, Renderer, Chrome, nsDocShell + nsWebBrowser, Render backend and Compositor. The nodes are connected with arrows indicating functional relationships

The protocol client handles network interactions. It opens a network connection, sends a message to the server, then waits for and interprets the response. If the protocol uses a secure transport layer it handles certificate validation, checking certificate revocation, data encryption and integrity. The latest releases of Firefox and Chrome support HTTP, HTTPS, WebSockets, Secure WebSockets, Secure Real-time Transport Protocol, file access and probably others I'm not aware of. Firefox and Chrome used to support FTP but have since dropped it. Firefox dropped support in version 90 (July 2021) while Chrome dropped it in version 95 (September 2021). Unlike the rendering and JavaScript engines, protocol clients tend not to be given their own bespoke names separate from the browser. Maybe this is because they're often built from other libraries offering support for specific protocols. Nevertheless the protocol client is both a crucial and complex part of the browser.

I've listed the DOM as a separate piece of the browser, but it's usually tightly coupled with the layout and rendering engines. The DOM defines the internal data structures used to represent the page being rendered. For HTML, XML or SVG documents these are hierarchically built from nodes that have a parent and multiple (possibly zero) children. Typically the document structure will map naturally onto the DOM, with XML elements and attributes mapping onto nodes. Child nodes in the document will map to child nodes in the DOM. In practice nodes are likely to be represented as class objects in the code containing references to child nodes. The DOM is usually part of the rendering engine separate from the JavaScript engine, but if it weren't for JavaScript the DOM might be considered as just an implementation detail. The existence of JavaScript elevates the DOM to something Web developers have to have a good understanding of, as we'll see.

The JavaScript engine allows execution of JavaScript code. JavaScript has an odd history. Originally invented at Netscape by Brendan Eich, you might think that the JavaScript language has something to do with Java. In fact they're very different. Java is a strictly-typed object-oriented garbage-collected language that compiles down to a bytecode representation that can be executed by a Java Virtual Machine. Although at one point Java "applets" that ran in the browser were a thing, you rarely see these nowadays (they're not supported without installing a plugin). Java is still used in server applications and to be honest, given Sun's expertise and revenue rested primarily with servers I always found it rather surprising that it was ever anything other than server-focused. JavaScript on the other hand is very much a client-side, dynamically-typed event-based scripting language with prototype-based object orientation. In recent years it's also become popular as a server-side language for reasons that I won't go into here. Both Java and JavaScript are "curly-brace" languages with similarities to C++; and while I realise I've managed to make them sound quite similar, they're actually totally different. The only reason they share a name is that in a bid to ride the wave of Java's popularity, Netscape signed a licensing agreement with Sun to use the name. Marketing genius or ontological vandalism? You decide.

Another key difference between JavaScript and Java is that the DOM is a first-class entity in JavaScript. Although they live in different parts of the browser, the development of the DOM is tightly intertwined with that of the language. When first released by Netscape JavaScript could interact with only certain elements of the page, most notably form elements. The name now given to the set of elements exposed at that time is DOM Level 0. Access to the full document didn't come until DOM Level 1. While JavaScript is a perfectly good language even without the DOM, in a browser context the two are tightly coupled.

Although JavaScript refers to and can modify the DOM, the DOM implementation is part of the layout and rendering engine. When we refer to browser engines (WebKit, Gecko, Blink,...) we're usually referring to this layout/render engine portion of the browser. The layout engine takes the document, structured using the DOM, and lays it out as elements on the page in the way they'll be viewed by the user. This allows the browser to build up the equivalent of a scene graph which is then rendered by the rendering engine to some sort of canvas (the screen or an offscreen buffer). This rendering usually uses an appropriate render backend, for example on the Sailfish Browser it calls a series of GLES commands. The layout engine follows a strict set of rules for positioning elements on the page. The HTML/CSS box model is used for rendering most items, but there are exceptions. For example SVG has its own rendering model which Gecko also supports as part of the same DOM hierarchy.

HTML and SVG documents embed or reference large numbers of other file types, which the browser has to support as well. These multimedia files include images, audio, animations and video. Historically browser support for different multimedia elements has been a mess, often delegated to some other operating system component (e.g. Windows Media Player, ffmpeg, gstreamer). Each file type will have its own decoder and there may be Digital Rights Management involved as well (e.g. Widevine). In practice browsers tend to separate raster and vector images from video and audio. The former have been tightly integrated into HTML for decades whereas the latter two only became standardised in HTML 5 with the introduction of the audio and video tags. These allow audio and video to be embedded with customisable controls.

Finally we have the user interface, which is the bit that we most associate with the browser. This is a little ironic given I'd argue the depth, complexity and maintenance burden is weighted towards the other layers. But most people aren't really concerned with the rendering or JavaScript engine, they care about whether a particular user interface feature is supported or not.

And to be fair, the user interface doesn't just display an address bar. It also has to provide tabs, JavaScript pop-ups, permissions dialogues, Settings controls, password management functionality, bookmarks, history management and a whole lot more.

In the embedded browser space the user interface is intentionally minimal. The idea is that the browser gets embedded into some other application which provides the user interface elements needed over and above those provided by the rendered Web page itself. On Sailfish OS this minimal interface is provided by the WebView. The additional capabilities are managed through the WebView's Application Programming Interface. On Sailfish OS there's also a Qt-based user interface to the browser, which brings its own complexity. For simplicity I've grouped together the user interface of the browser and the application programming interface of the embeddable WebView in the "Interface" section in the diagram.

During my time upgrading the Sailfish Browser from ESR 78 to ESR 91 I routinely referred to it as a Gecko upgrade. The name Gecko covers the DOM, layout engine and rendering engine but typically doesn't include the JavaScript engine or user interface. The user interface is typically referred to by the name of the browser itself. For example Firefox uses the Gecko rendering engine, the SpiderMonkey JavaScript engine and the Firefox user interface. For Safari it's WebKit, Nitro and Safari. For Chrome it's Blink, V8 and Chrome. And so on.

Now that we've broken down the different parts of the browser we're equipped to delve into the history of Web browsers in more detail.

Libwww

We're going to start our history in 1990 when Tim Berners-Lee and Jean-François Groff, both working at CERN, created the HTTP protocol and HTML language that still define the Web today. It fascinates me that Tim Berners-Lee is so well-known as the inventor of the Web, but pioneers like Jean-François Groff and Nicola Pellow, who were there at the beginning, are scarcely recorded. But the Computer History Museum has documented a fascinating interview with Jean-François in which he gives an explanation of the very first Web engine.
 
my main task during my days at CERN... was porting all the software libraries, I mean the software components that were on the NeXT system into a universal code library that was written in C, it's the 'libwww'" It didn't even have a name at the beginning, which is why in some history books, you see, 'Oh, libwww was released in November of 92.' No, it wasn't, you know? It was running since February '91, it just didn't have that name... We had the page rendering system, the parsing of HTML, and also all the URL mechanisms, history list, all that was abstracted into one software library as a package, as a toolset basically. And then in August 91, I think when we announced the World Wide Web, we also said 'You can use that toolset and build whatever you want with it'".

Right at the start the history is a bit messy. The WorldWideWeb browser was the graphical HTML browser (and editor) written by Tim Berners-Lee in Objective-C to run on NeXTSTEP. The first version was completed at the end of 1990 with the browser being later renamed to Nexus. The code was re-written in C by Tim and Jean-François and turned into the Libwww library to become the very first browser engine. This was then used by Nicola Pellow at CERN to write the Line Mode Browser which was text-based, usable over telnet and released in 1991.
 
A Gantt chart with eleven groups referencing different browser engines (Libwww, Trident, Navigator, Gecko, Servo, KHTML, WebKit, Blink, Presto, LibWeb and Netsurf). Horizontally years between 1990 and 2024 are shown, with bars to represent when the various browsers were supported.

This was not just the birth of the Web, but also the genesis of structures that now define what it means to be a Web browser. These same structures can be seen in how browsers are built today.

Libwww and the Line Mode Browser that were created from it continued to be developed right up until 2017. Although the library is written in C it applies an object-oriented approach. Structures have constructors and destructors with the me context variable often used in places where you might find this or self in an object-oriented language. Reading this code in the early noughties had a profound influence on me, shaping my own style of C coding to this day.
/*	Create a Context Object
**	-----------------------
*/
PRIVATE Context * Context_new (LineMode *lm, HTRequest *request, LMState state)
{
    Context * me;
    if ((me = (Context  *) HT_CALLOC(1, sizeof (Context))) == NULL)
        HT_OUTOFMEM("Context_new");
    me->state = state;
    me->request = request;
    me->lm = lm;
    HTRequest_setContext(request, (void *) me); 
    HTList_addObject(lm->active, (void *) me);
    return me;
}
Besides the Line Mode Browser, Libwww was used in countless other projects. My own port to RISC OS from 2004 is still available. I used it to extend a forensic analysis tool for use on the Web (that the University I worked for later patented).

More notably it was also used by the Amaya lightweight Web editor developed at INRIA and the Mosaic Browser developed at the NCSA. The Mosaic browser was popular in its day and the NCSA spun out a commercial entity in the form of Spyglass Mosaic which built on the NCSA Mosaic code. The company was set up to licence the browser to other companies.

Trident

This Microsoft duly did. The browser engine of Internet Explorer — called Trident — was built on the Mosaic technology. The first version of Internet Explorer shipped without JavaScript support (the language hadn't been invented yet), but when it arrived in IE 3 in 1996 it was powered by Microsoft's Chakra JavaScript (nee JScript) engine.

The licensing agreement struck with Spyglass required Microsoft to pay a small monthly fee, with a portion of all non-Windows revenue from the browser additionally going to Spyglass.

As anyone who experienced the browser wars at that time will know, Microsoft proceeded to give Internet Explorer away for free with Windows. This ultimately earned them a lawsuit from Spyglass (settled out of court for $8 million) and an antitrust lawsuit from the US Government (eventually resulting in Microsoft having to change its approach to interoperability).

Internet Explorer remained as a core component of Windows until Windows 10, after which the company finally switched to offering Edge as the default browser. While Edge is built on Google's Blink engine, even that wasn't enough to dislodge Trident entirely. It remains to this day as the rendering engine powering Edge's compatibility mode. While it's not clear whether any of the original code can still be found in Edge (seems unlikely), a thirty-five year legacy is pretty good going.

Gecko

Internet Explorer's arch rival during the browser wars was Netscape Navigator, offered to consumers by countless dial-up Internet providers bundling it on free CDs alongside their own dial-up software and configurations. Netscape was the first browser to incorporate JavaScript support, which it did using the SpiderMonkey JavaScript interpreter in 1995.

Running up to 2000 Netscape completely re-wrote their browser engine. The result was what we now know as Gecko and which powers both Firefox and the Sailfish Browser. The purpose of the re-write was ostensibly to improve standards compliance and maintainability. But the highly abstracted code — arguably what has allowed the renderer to remain relevant to this day — resulted in poor performance. Netscape Navigator was a large programme, incorporating not just a browser but also a full email client and Website editor. In an attempt to improve performance in 2002 the components were split up to form Firefox as a stand-alone Web browser and Thunderbird as a stand-alone email client. My recollection is that this was controversial at the time and didn't improve performance a great deal. But the separation stuck. Splitting email from Web and dropping editing entirely seems to have resonated with users.

Gecko, in the form of Firefox, has experienced ups and downs. Browser statistics are notoriously subjective, but Statscounter registers Firefox market share as having dropped to just over 3% as of January 2024, having peaked in January 2010 at just over 30%.

There's plenty more to say about Gecko's history, not least in relation to its use as an embeddable component, but let's put that aside for today and I'll return to it in a future post.

Gecko remains relevant today as the most popular alternative to the WebKit/Blink family of browsers. While technically open source, both WebKit and Blink are directed by large corporations with few concessions to open source development methodologies. Mozilla on the other hand is a not-for-profit foundation that embraces the spirit of open source as well as the letter. For many, Gecko is an important bulwark against a corporate-controlled browser monoculture.

An interesting twist in Gecko's development comes from its adoption of the Rust language. Rust was developed by Mozilla employee Graydon Hoare and officially adopted by Mozilla in 2009, and since then Mozilla has been gradually moving Gecko's internal components from C++ to Rust.

This led to the development of the Servo engine, written wholly in Rust as a Mozilla research project. While never intended to replace Gecko, elements of the Servo engine were integrated back into Gecko, most notably the WebRender rendering component.

Servo is currently available as an engine with an intentionally bare-bones user interface. Mozilla divested itself of Servo in 2020, but development continues with the aim of specifying a WebView API during 2024 for use as an embeddable engine.

Presto

We're going to jump ahead a little in the diagram and turn our attention to the Opera browser. There are many unique and fascinating facets to Opera that it won't be possible to explore fully here, but it's still worth skimming the surface. Opera is unusual in that it was, for a long time, one of the few independent commercial browsers. When first released in 1995 it was shareware (requiring payment after a trial period). There was no JavaScript support (the language hadn't been invented yet) and at the outset the rendering engine wasn't named separately to the browser. This changed in 2000 with the introduction of the Elektra rendering engine and the Linear A JavaScript engine. In 2003 Opera switched to using what they claimed to be a new rendering engine, the internally developed Presto, alongside a new Linear B JavaScript engine. While the Presto name stuck, Opera's JavaScript engines have enjoyed periodic renaming: the Futhark JavaScript engine in 2008, followed by the Carakan JavaScript engine in 2010. Since the browser and all of these engines are closed source it's impossible to know to what extent they were really new technology as compared to an evolution of existing code.

Through much of its life Opera forged its own path. It was the first mainstream browser to introduce tabs. It integrated a (very good) email client long after Mozilla had disentangled Firefox and Thunderbird from Netscape Navigator. It even integrated its own Web server at one point. Opera also made a point of sticking to W3C standards while other browsers were still trying to lock users in to a proprietary Web.

Perhaps it's for this reason that there was much disappointment when Opera switched to using Blink and V8 in 2013, soon after Google announced it would fork WebKit. To find out how it got to this point we'll need to go back a bit again and look at the evolution of WebKit.

WebKit and Blink

At this point in time WebKit is the most popular engine for accessing the Web (in either its WebKit or Blink variants). Moreover it's also the go-to browser engine for use in embedded scenarios as we'll see shortly.

Initially part of the KDE project, WebKit provided the engine for Konqueror, the default browser for the KDE desktop environment. At that point the engine was referred to by the name KHTML, alongside the KJS JavaScript engine. It was picked up by Apple in 2001, apparently because of its small code footprint. Apple renamed KHTML and KJS to WebCore and JavaScriptCore respectively with the WebKit project encompassing both.

Contributions to WebKit came from both Apple and the KDE project, as well as from the Qt Project which offered the QtWebKit embeddable widget. Sailfish OS supported use of QtWebKit up until its deprecation in Sailfish OS 4.4 and removal in 4.5, the functionality being replaced by the Gecko WebView API.

In 2008 Google introduced its own Chrome browser, also built on WebKit but using the new and Google-developed V8 JavaScript engine. Google's advertising for it emphasised speed (start times and JavaScript execution in particular). Chrome also had an — at the time — unusual sandboxing model with each tab executed as a separate process. This meant that crashes triggered by WebKit or V8 would only bring down a single tab, leaving other tabs and the browser intact.

Although built using many open source components, Chrome itself is made available under a proprietary licence. The Chromium project, also developed by Google, is a fully open source implementation of Chrome, but with the proprietary components removed.

From the outset Google had to make changes to WebKit to support its use in Chrome. Still it took another five years before Google officially forked WebKit in 2013, creating the Blink browser engine. Consequently Chrome now uses its own renderer and JavaScript engine combination: Blink and V8.

One of the attractive features of the Blink engine, also particularly relevant to Sailfish OS, is its embedding API which allows it to be used separately from Chrome (or Chromium) and embedded in independent applications. A common example of this usage can be found in the Electron framework, which uses Blink for rendering.

This embeddable design, which neatly separates the chrome from the engine, also makes Blink attractive for use by other browser developers. As noted earlier, Opera switched from Presto and Carakan to Blink and V8 for rendering and JavaScript respectively. Microsoft similarly chose Blink and V8 as the basis for its Edge browser in 2019.

Qt introduced the Qt WebEngine component, wrapping Blink and V8 to offer an embeddable browser, around the release of Qt 5.2 in 2013. This was intended to replace QtWebKit, which was ultimately removed in Qt 5.6. The closest KDE has to a default browser is Falkon, which uses the Qt WebEngine. This therefore completed a strange cycle, with KHTML having been started as part of KDE, forked by Apple, forked again by Google and then integrated back into KDE via Qt.

LibWeb

An unexpected entrant into the browser space was recently announced in the form of Ladybird. To understand why Ladybird exists, it helps to understand a little about Serenity OS, the operating system project it grew out of and which it has now eclipsed. According to the FAQ of Serenity OS the developers try to "maximize hackability, accountability, and fun(!) by implementing everything ourselves." And that includes the Web browser: the project developed its own renderer and JavaScript engine in the form of the imaginatively-named LibWeb and LibJS.

Recently the main Serenity OS developer, Andreas Kling, refocused his attention from the operating system to the Ladybird browser. Ladybird is built using the LibWeb and LibJS browser components of Serenity OS, which he now develops independently. This arguably represents the first new engine to be introduced with the aim of being a fully-fledged browser for over twenty years, making for a particularly interesting development.

NetSurf

Last but not least we have NetSurf which, like Ladybird, is a bit of an outlier: it too was originally developed for exclusive use on a non-mainstream operating system.

The first version of NetSurf was released in 2002. At that time it was developed exclusively for use on RISC OS, the operating system that powered the Acorn Archimedes (the first publicly available computer to use an ARM processor).

RISC OS is very different from most other operating systems available today. It makes no attempt to be Unix-like and has its own distinctive, cooperatively multitasking desktop environment. This heritage means that the browser is incredibly lightweight, with good CSS support but without viable JavaScript.

During the early days of development JavaScript support was considered out-of-scope for the browser. The reason for this is interesting: it wasn't for lack of a usable JavaScript interpreter, but because the browser lacked a standards-compliant DOM. It turns out JavaScript isn't especially useful without a standards-compliant way to access the elements of a Web page.

Despite the lack of JavaScript support NetSurf still managed to find a niche as a fast and lightweight browser, growing beyond RISC OS. As of today there are downloadable packages available for RISC OS, GTK (Linux), Haiku, AmigaOS and Atari, and experimentally for Windows.

The Truth About Browsers

Browser history is a tangled Web. While writing this it quickly became clear that, when it comes to browsers, any generalised claim is likely to turn out false. The date a browser came into existence? Do you mean the date the project was first thought of? The first commit? The first release? An alpha release? A beta release? Release 1.0? Is a particular engine entirely new, the redevelopment of an old engine, or just a rename? To what extent does the code of one engine flow into another when they both share libraries? Every browser out there is like the Ship of Theseus at this point. When we talk about a browser engine are we talking about the renderer, the layout engine, the JavaScript engine, the chrome? Sometimes these things can be separated, other times they're intrinsically tied together. Is it good to have a single reference engine that all browsers use for a consistent experience across the Web, or should we be championing diversity as a way to prevent any single entity taking control? Do we even know how to calculate browser market share?

Even the question of what a browser is, presented in anything other than the most abstract terms, is likely to suffer exceptions.

What is clear is that browsers have become deeply integrated into our lives. Whether using a computer or smartphone, access to a browser has become a necessity. Over time they've continued to become more capable and more technically complex. Combined with their convoluted history, that makes them fascinating objects of study.
23 Sep 2024 : Retrospective #
Way back last year in August 2023, before actually starting the process of upgrading the Gecko engine in Sailfish OS from ESR 78 to ESR 91, I wrote a preamble in which I set out my objectives and sketched a brief plan for how to achieve them. Although the work isn't entirely finished, after 339 days I consider the main bulk of my work on the project to be complete. We're now in the mopping up stage. That means it's a good time to look back at the process, find out what went well, what went badly, what I've learned from the experience and how I feel about things. If the preamble was the opening bracket, this retrospective can be considered its closing partner. Together they're the bookends encapsulating all the diary entries in between.

The Journey

Back when I started I hadn't quite appreciated how long this whole process was going to take. Although somewhere between half a year and a year seemed reasonable, the final 339 day tally is a little closer to the latter than I'd hoped. Moreover a year in theory feels much shorter than a year in practice. Adjusting for the fact I'm employed full-time to not do Gecko work, in practice I must have worked only around three hours a day during the week and twelve hours at the weekend. Two thirds of that time was spent coding and the other third writing up the diary entries. Given that the 339 days was made up of 244 weekdays and 48 weekends, I can be a bit more precise about how much time I actually spent on it.
$ time gecko-dev

real    48w 3d 0h 0m 0.000s
code    5w 1d 12h 0m 0.000s
diary   2w 4d 4h 0m 0.000s
Let's convert that into work time. This is interesting because practically speaking it gives the "Full Time Equivalent" (FTE), the amount of person-time needed to complete the project from a commercial perspective. Typically the work would of course be distributed between multiple people to speed up project implementation, so the real time would be shorter.
$ time --work gecko-dev

real    67w 4d 0h 0m 0.000s
code    23w 1d 2h 0m 0.000s
diary   11w 3d 1h 0m 0.000s
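
For the record, the arithmetic behind these figures is straightforward. Below is a minimal sketch of the conversion, assuming a 7.5 hour working day and a five day working week; those two figures are my own assumptions, chosen because they reproduce the combined code and diary work times above (34w 4d 3h).

WEEKDAYS, WEEKENDS = 244, 48                    # from the 339 day tally
HOURS_PER_WEEKDAY, HOURS_PER_WEEKEND = 3, 12    # hours worked, as estimated above

total_hours = WEEKDAYS * HOURS_PER_WEEKDAY + WEEKENDS * HOURS_PER_WEEKEND

def as_work_time(hours, hours_per_day=7.5, days_per_week=5):
    """Express a number of hours as work weeks, days and hours."""
    days, hours = divmod(hours, hours_per_day)
    weeks, days = divmod(days, days_per_week)
    return f"{int(weeks)}w {int(days)}d {hours:.0f}h"

print(f"{total_hours} hours, or {as_work_time(total_hours)} of work time")
# 1308 hours, or 34w 4d 3h of work time
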
Let's consider now how those days were partitioned into tasks. The following diagram shows the linear sequence of how I spent each day of work. This oversimplifies things a little given I didn't always complete tasks sequentially, but is pretty close to reality.
 
A diagram showing a timeline arranged in a left-right curve (it looks a bit like a snake) with each block representing a day; sequences of blocks are coloured to represent tasks with the name of the task and day on which it was completed marked on the diagram

On day 149 I gave a presentation of this work at FOSDEM'24, including an earlier version of this diagram. I thought I was about half way through the work at that stage but, as is clear from the diagram, that was optimistic: I was in fact only 11/25 of the way through.

The longest task of 87 days involved getting the WebView render pipeline working. In comparison getting the first successful build to complete took only 45 days. Both of these were quite dark and gloomy times. Without a working build it's impossible to debug or test the code, whilst without a working renderer nothing else can be effectively tested. Both periods felt like dark tunnels that took an age to emerge from.

Following these in terms of length of task were PDF Printing at 28 days, the WebGL renderer at 25 days and the Sec-Fetch-* headers at 15 days. These were the only tasks that took more than 14 days, which is a bit of a cut-off point for me. After two weeks of writing daily about the tasks it becomes really hard to write more without sounding (and seeming) a bit lost and exhausted.

It was particularly frustrating how long it took to get the WebView rendering working, given the browser already worked nicely. It could have been worse though: I had a clear plan which involved gradually stripping out and adjusting pieces of code to align with the code in the ESR 78 implementation. This gradual convergence towards ESR 78 meant I knew the task was inevitably going to be time-limited, also allowing me to identify progress on a daily basis. As it turned out I had to do it twice: first removing code, then adding code back in again. But it did eventually get there.

Realising on Day 254, after all this, that I'd broken the WebGL rendering process was also a bit of a low point. By then I really wanted to move on from the render pipeline.

But eventually I did emerge from all of these tunnels, and the joy of getting something to work is a crucial counterbalance to the frustration when it doesn't. In retrospect the low points were all worth it for the sake of the enjoyment I also got out of it.

Apart from how things turned out in practice, it's also interesting to compare how closely the work matched my initial expectations. Returning to the preamble once again, it's clear I was expecting a long haul, but I also had experience to draw on from my previous involvement in browser upgrades at Jolla:
 
Another piece of wisdom that Raine taught me is that the first task of upgrading the engine should always be to get it to compile. Once it's compiling, getting it to actually run on a phone, patching all of the regressions and fixing up all the integrations can follow. But without a compiling build there's no point in spending time on these other parts.

I think it's fair to say that I did follow this approach, starting with getting things to compile and then focusing on the details afterwards. This led on to the following decision about the structure of the work:
 
I'm therefore going for a three-stage process with the upgrade:
  1. Apply a minimal set of changes and patches to get ESR 91 to build.
  2. Apply any remaining patches where possible and other changes to get it to run and render.
  3. Handle the Sailfish OS specific integrations.
Looking back I did broadly follow this structure. I got the build to complete, then I applied the patches needed to get rendering working and only after that did I apply the other patches. I did diverge from this advice in one important respect: rather than applying all of the remaining patches at that stage, I applied only the minimal set required to get rendering working.

In hindsight I think that was the right thing to do. But it also felt like a natural consequence of the situation I found myself in. Given the upstream code changes the patches I did apply needed quite a lot of work to get them to stick. That gave me the impression that many of the existing patches might turn out to be redundant, superseded by changes in the upstream code.

Applying only the patches that were necessary gave me the opportunity to drop patches that were no longer relevant in a more intentional way. Hopefully the patches I've ended up with are closer to the minimum required and have a slightly cleaner structure than would otherwise have been the case.

But practically speaking I think my original plan was a good one and, in retrospect, I followed it pretty closely.

Destination Gecko

Let's now consider where the journey took us. The point of all this work was to take the browser engine from ESR 78 to ESR 91. What does this give us?

Abstractly speaking, one of the most compelling reasons to want to upgrade is because websites routinely attempt to fingerprint browsers and serve different content depending on the result. This practice is as old as the hills, yet remains as common today as it is problematic. I understand that different browsers have different capabilities and that website creators will be blamed (unfairly) if a page renders poorly as a result of a user failing to keep their browser up-to-date. But you'd have thought that, at the very least, websites could test for features rather than versions.

When browsing using ESR 78 it's not uncommon for a site to chastise its own customers. Updating the engine on Sailfish OS is one way to reduce the chance of seeing these invectives, even if changing the user agent string is often just as effective as a browser upgrade, without any of the effort.

One of the worst offenders is Cloudflare, which routinely blocks the Sailfish browser from accessing sites on its content delivery network. Upgrading to ESR 91 seems to circumvent this in at least some cases.

But browser upgrades bring genuine improvements as well: new features, improved stability, increased security and bug fixes. There have been a total of 45 point releases between the previous Sailfish OS engine version of 78.15.0 and the upgraded version of 91.9.1. Each of these releases has brought improvements, although not all will be relevant to Sailfish OS. Major releases (e.g. from 78 to 79) will typically include new features, stability improvements and security fixes, whereas point releases (e.g. 91.1.0 to 91.2.0) will often only include security and regression fixes.

Working through the Firefox changelogs the following are some of the obvious improvements that have a direct impact on the Sailfish browser:
  1. Certificate performance improvements (80.0.1).
  2. WebGL rendering improvements (80.0.1).
  3. Support for viewing more filetypes (81.0).
  4. Improved element rendering (81.0.1, 86.0.1).
  5. Improved PDF export (81.0.1, 85.0.1, 90.0.2).
  6. Increased startup and rendering speeds (82.0).
  7. Fixes for WebSocket message duplication (82.0.2).
  8. SpiderMonkey JavaScript performance improvements (83.0).
  9. An HTTPS-Only mode option (83.0).
  10. Improved shared memory performance (84.0).
  11. Increased cookie and supercookie isolation (85.0, 86.0, 89.0, 90.0, 91.0).
  12. Deprecation of WebRTC DTLS 1.0 (86.0).
  13. Private browsing compatibility improvements (87.0).
  14. Increased referrer privacy (87.0, 88.0).
  15. Working hyperlinks in PDF export (90.0).
  16. Removal of FTP support (90.0).
  17. Improved user-action response times (91.0).
  18. Fixes for microsoft.com certificate errors (91.4.1).
  19. Many crash bug fixes (81.0.1, 82.0.1, 85.0.1).
If you look through the changelogs you'll also notice several references to enabling WebRender for various platforms at various times. These are the main changes that haven't been brought across to the Sailfish OS version, since doing so would require a much bigger change to the rendering pipeline and the consequences of making that change are unclear at this point.

In addition to the above changes, there were 15 critical, 115 high severity, 68 medium severity and 30 low severity security fixes combined into these updates. The importance of these can best be understood with reference to Mozilla's security classification:
  • Critical: Vulnerability can be used to run attacker code and install software, requiring no user interaction beyond normal browsing.
  • High: Vulnerability can be used to gather sensitive data from sites in other windows or inject data or code into those sites, requiring no more than normal browsing actions.
  • Moderate: Vulnerabilities that would otherwise be High or Critical except they only work in uncommon non-default configurations or require the user to perform complicated and/or unlikely steps.
  • Low: Minor security vulnerabilities such as Denial of Service attacks, minor data leaks, or spoofs. (Undetectable spoofs of SSL indicia would have "High" impact because those are generally used to steal sensitive data intended for other sites.)
These are all great changes to have. Probably the most important for daily use on a phone are the efficiency and performance improvements. Based on feedback on the Sailfish OS forum, users also seem to be happy with the results in this respect, with many users claiming the browser feels faster and more responsive.

Whether this is actually the case is hard to say. My tests using various performance measurement tools don't suggest significant performance improvements. But I must admit to having the same feeling of improved responsiveness. I suspect that may be due to the upstream changes in version 91.0 that claim to have improved responsiveness for user-interactions by 10-20%. That would make a noticeable improvement for users in a way that may not show up in benchmarks. It's my suspicion that the page loading feedback that's used to drive the progress bar on Sailfish OS has also been improved, although I've not found any explicit changes that would do this.

What do all of these changes mean for the state of the code? The upgrade from ESR 78 to ESR 91 also, surprisingly for me, brought with it a larger codebase. Mozilla has been intentionally transitioning code from C++ to Rust, with the number of lines of Rust code increasing by 14%. But the number of lines of C++ code also increased by 3%, and the combined total of C++, JavaScript and Rust code increased by 7%. Plotting the lines of code categorised by language, these increases are clearly visible.
 
A column chart showing lines of code in the Gecko engine categorised into C++, JavaScript, Docs, Build, Rust and IDL. For ESR 78 these are 12795046, 8314694, 8134816, 3691535, 2652738 and 183404 respectively. For ESR 91 they are 13179126, 9130130, 8134816, 3497457, 3033345 and 185528.

Although proportionally there's been a bigger increase in Rust code than C++, in absolute terms the increase in both is almost identical (380607 lines of Rust code added compared to 384080 lines of C++ code).

In the above diagram Docs refers to content that relates to documentation. Build refers to scripts used to manage the build pipeline. IDL refers to interface definition files.

It's worth pausing to consider the code needed to build the Gecko engine. Gecko has been through several changes during its life, accumulating a mixture of technologies as it has gone. As a result the build system is a strange combination of Build (the Mozilla build system), Python, Make, ninja, GN and Cargo. At certain points the build system compiles Rust into native binaries that then become part of the build pipeline itself. This causes havoc for the scratchbox2 cross-platform build engine that Sailfish OS uses. No small part of the work in getting gecko working for Sailfish OS involves taming these build systems.

Although the numbers for IDL shown in the graph are low compared to the other languages, I nevertheless wanted to include it because it's such a critical part of the way Gecko works. The combination of C++/Rust and JavaScript means that there needs to be a really solid way to expose native methods to JavaScript and JavaScript methods to native code. The type systems aren't equivalent and so this requires a careful arrangement. Gecko supports this using its Interface Definition Language. IDL files read a bit like C++ header files but are more generic. Any interface defined using IDL can be exposed both natively and to the JavaScript layer. It's critical glue that holds everything together.

The numbers shown in the graph are measured in millions of lines of code. They're big numbers, but it's worth bearing in mind that Gecko is a relative minnow when it comes to code size in the world of browsers. For comparison I ran the same code analysis on the Chromium source. I was pretty surprised by how large Chromium is compared to Gecko.
 
A column chart showing lines of code in Chromium categorised into C++, JavaScript, Docs, Build, TypeScript, Rust, Go, Java, Config, Obj-C, Other code, IDL and WASM. The respective numbers are 79982423, 34517513, 15319659, 8564048, 3059540, 3005379, 2628291, 2312098, 1918957, 1424269, 1360893, 438718 and 449886.

Chromium contains over four times the code: 154 981 674 lines of code for Chromium compared to a paltry 37 361 820 lines of code for Gecko ESR 91. It's also interesting to compare the range of technologies involved in the two projects. Chromium introduces TypeScript, Go, Java, Objective-C, Lua, AppleScript, TCL and WASM, although some of these will be target-specific.
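
The exact tooling used for these counts isn't the interesting part, but for a flavour of how such per-language tallies can be produced, here's a minimal sketch that simply counts lines by file extension. The extension map is deliberately simplified and covers only a few of the categories in the charts, so its output is indicative rather than definitive.

import sys
from collections import Counter
from pathlib import Path

# Deliberately simplified mapping from file extension to language category.
LANGUAGES = {
    ".cpp": "C++", ".cc": "C++", ".h": "C++",
    ".js": "JavaScript", ".jsm": "JavaScript",
    ".rs": "Rust",
    ".idl": "IDL", ".webidl": "IDL",
}

def count_lines(root):
    """Tally lines of code per language category under the given directory."""
    totals = Counter()
    for path in Path(root).rglob("*"):
        language = LANGUAGES.get(path.suffix)
        if language and path.is_file():
            try:
                totals[language] += sum(1 for _ in path.open(errors="ignore"))
            except OSError:
                pass  # skip unreadable files
    return totals

if __name__ == "__main__":
    # For example: python3 count_lines.py gecko-dev
    for language, lines in count_lines(sys.argv[1]).most_common():
        print(f"{language:12}{lines:>12,}")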

Destination Sailfish

So far we've considered the differences between ESR 78 and ESR 91 in some detail, but none of this has touched on the actual changes needed to get the code to run on Sailfish OS.

As any Sailfish OS developer will be aware, Sailfish OS uses RPMs for packaging software, a technology that originated on Red Hat Linux as the Red Hat Package Manager. Work started on RPM in 1995, a good ten years before the initial release of git and two years before Netscape started work on Gecko. Back then it was commonplace for software to be provided in the form of a tarball and in some ways the RPM build process reflects this. Distribution-specific changes are provided as patches applied directly to the upstream source. These patches are all listed in the spec file which is passed to the rpm tool to perform the build. On Sailfish OS this is all hidden behind sfdk, which is itself a wrapper for the scratchbox2 sb2 tool. It's a complex layered system with multiple abstractions.

The point is that even now, packages on Sailfish OS that use upstream code can pull directly from the upstream repositories, rather than having to use Sailfish-specific implementations. Any Sailfish OS specific changes can then be applied onto this code in the form of patches. It's not a process I enjoy working with because patching is a lot messier and less flexible than working with commits in a repository. Even though it's possible to convert a patch list into a series of commits and back again, this adds an extra step and constrains what actions can be performed at different times.
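
For the record, the round trip between the two representations looks roughly like the sketch below. The directory layout and the upstream reference are placeholders rather than the actual Sailfish OS packaging structure, but the idea is the same: git am turns a patch series into commits, and git format-patch turns the commits back into a patch series.

import subprocess
from pathlib import Path

CHECKOUT = Path("gecko-dev")        # placeholder: the upstream source checkout
PATCH_DIR = Path("rpm/patches")     # placeholder: where the spec file's patches live

def patches_to_commits():
    """Apply the patch series as git commits so normal git tooling can be used."""
    for patch in sorted(PATCH_DIR.glob("*.patch")):
        subprocess.run(["git", "am", str(patch.resolve())],
                       cwd=CHECKOUT, check=True)

def commits_to_patches(base_ref="upstream/esr91"):
    """Regenerate the patch series from the commits sitting on top of upstream."""
    subprocess.run(["git", "format-patch", "-o", str(PATCH_DIR.resolve()), base_ref],
                   cwd=CHECKOUT, check=True)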

The benefit is that we always retain a very clean and clear distinction between the upstream code and the Sailfish OS specific changes, with the latter being encapsulated in the patches to be applied. We can use this separation to discover how the changes needed to get Gecko ESR 78 to work with Sailfish OS differ compared to those needed for ESR 91.
 
A column chart showing lines added and removed in the patches to Gecko, categorised by language and Gecko version. The languages are C++, Docs, Build, JavaScript, Rust and IDL. The numbers of lines added to ESR 78 are 22726, 510, 28558, 158, 544, 29 respectively. The lines removed from ESR 78 are 606, 2, 20350, 6, 175, 3. The lines added to ESR 91 are 22476, 508, 19320, 170, 498, 39. The lines removed from ESR 91 are 631, 12, 11090, 43, 180, 6.

This figure shows only a very high-level view, but nevertheless tells a story. Note that unlike the previous figures the y-axis of this chart uses a logarithmic scale to account for the big differences in scale between different languages. This can make the values harder to read so, for clarity, here they are in tabular form.
 
Language        ESR 78 added  ESR 78 removed    ESR 91 added  ESR 91 removed
C++                   22 726             606          22 476             631
Docs                     510               2             508              12
Build                 28 558          20 350          19 320          11 090
JavaScript               158               6             170              43
Rust                     544             175             498             180
IDL                       29               3              39               6


These numbers represent the actual code I've been working on for the last year. In general the number of lines added or removed has reduced as we've moved from ESR 78 to ESR 91. This is a good thing: the fewer changes made to the upstream code the better. In general the difference isn't huge, but it does exist. The total number of patches reduced from 98 to 84. The number of lines added to ESR 91 was 82% of the number of lines added to ESR 78. The number of lines removed from ESR 91 was only 57% of the number removed from ESR 78.

Interestingly, while there were fewer changes made, the differences practically balance themselves out. Overall the patches to ESR 78 increased the code size by 31 383 lines compared to 31 049 lines for ESR 91. That's astonishingly similar.
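
These percentages and net figures fall straight out of the table above; spelling the arithmetic out:

# Totals taken from the table above (lines added / removed per language).
esr78 = {"added":   [22726, 510, 28558, 158, 544, 29],
         "removed": [606, 2, 20350, 6, 175, 3]}
esr91 = {"added":   [22476, 508, 19320, 170, 498, 39],
         "removed": [631, 12, 11090, 43, 180, 6]}

for name, data in (("ESR 78", esr78), ("ESR 91", esr91)):
    added, removed = sum(data["added"]), sum(data["removed"])
    print(f"{name}: {added} added, {removed} removed, net {added - removed:+}")

print(f"added ratio: {sum(esr91['added']) / sum(esr78['added']):.0%}")
print(f"removed ratio: {sum(esr91['removed']) / sum(esr78['removed']):.0%}")
# Gives net changes of +31383 (ESR 78) and +31049 (ESR 91), with ratios of 82% and 57%.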

These numbers don't quite capture all of the changes because they relate only to the gecko code. There were also changes needed in the other four components that make up the Sailfish browser stack, as well as to the EmbedLite code (which is handled separately from gecko but ends up in the same xulrunner package). Let's briefly take a look at these other components.
 
A layered component diagram. The layers (from bottom to top) are: gecko/xulrunner, qtmozembed, embedlite-components, sailfish-components-webview and sailfish-browser. Each shows the number of lines of code: 13 148 748 C++, 3 033 345 Rust, 195 528 IDL, 9 116 950 JavaScript for gecko; 9 068 C++ for qtmozembed; 12 762 JavaScript for embedlite-components; 6 658 C++, 6 328 QML for sailfish-components-webview; 17 568 C++, 8 637 QML for sailfish-browser.

The gecko renderer is by far the largest of the components. The qtmozembed component provides a Qt wrapper around the renderer. The embedlite-components package adds the privileged JavaScript shims needed for Sailfish OS, largely replacing equivalent privileged JavaScript that would typically run in Firefox. The sailfish-components-webview component provides the Qt components needed to support both the browser and WebView (for example the pop-up dialogues), but also provides the code needed to offer the rendering engine as a WebView component to other Qt apps. Finally, the sailfish-browser component is the actual browser app you run when you open the browser on your phone.

Apart from the gecko renderer all of these are Sailfish-specific packages, so they don't have any "upstream" code: the Jolla repositories are the upstream repositories for these. Consequently there's no need to apply patches and we can work on the code directly. That means that when analysing changes for these we're just using the commits that take the code from the ESR 78 versions to the ESR 91 versions. Between them they accumulated 169 commits with the following additions and removals (these numbers also include the changes to the gecko source):
 
Language         Lines added   Lines removed
C++                   23 456           1 281
Docs                     508              12
Build                 19 724          11 381
JavaScript               452             114
Rust                     498             180
IDL                       52              17
QML                       14              14
Total                 44 704          12 999


This table essentially captures the sum total of the changes needed to move from one version to the next. As you can see, the majority of the additions have been to C++ code. The build scripts saw rather a lot of churn. I'm very surprised to see more Rust additions than JavaScript additions. The QML code changed very little, which is perhaps to be expected given the external appearance, renderer aside, is almost identical. That was intentional: there's always scope to improve the Sailfish browser user interface, but my objective with this work was to get the renderer upgraded as quickly as possible. Changing the interface would have been a diversion.
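
For anyone wanting to reproduce this kind of tally, the numbers can be pulled out of git by aggregating git log --numstat over the relevant revision range. Here's a rough sketch that groups by file extension rather than by language for brevity; the repository path and revision range are placeholders.

import subprocess
from collections import defaultdict

def diff_stats(repo, rev_range):
    """Aggregate `git log --numstat` output into (added, removed) per file extension."""
    out = subprocess.run(
        ["git", "log", "--numstat", "--format=", rev_range],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":    # skip blank lines and binary files
            continue
        added, removed, path = parts
        ext = path.rsplit(".", 1)[-1] if "." in path else "(none)"
        totals[ext][0] += int(added)
        totals[ext][1] += int(removed)
    return totals

if __name__ == "__main__":
    # Placeholders: substitute the real checkout and the ESR 78 / ESR 91 refs.
    for ext, (added, removed) in sorted(diff_stats("sailfish-browser",
                                                   "esr78-base..esr91").items()):
        print(f"{ext:12} +{added:>8} -{removed:>8}")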

Mental Health

I put a lot of myself into the Gecko upgrade. Working on it practically every day for a year, even if not full-time, required a level of commitment that I wouldn't typically give outside of my work hours. This is a personal perspective: the world is blessed with many people who commit far more for far less reward and who don't then feel the need to tell the world about it in a blog post.

Nevertheless, this was a big deal for me. I'm not a natural blogger so the prospect of writing about my coding on a daily basis was daunting at the outset. But it turned out to be surprisingly easy. Writing about specific tasks is very different from having to come up with inventive and interesting topics to write about on a daily basis.

Having to write daily diary entries undoubtedly helped keep me on track and working on the project every day. The need to have at least a few paragraphs to write about drove me to do the coding work.

There were a few occasions when I struggled with this. Typically on a Friday night after having spent two and a half hours on public transport returning from work. Having to then write up a diary entry in a tired state of semi-consciousness was not always ideal. But these cases were relatively rare.

There were also occasions — mostly in the middle of the work to get the various rendering pipelines working — when the work really got me down. Writing the diary entries made me very conscious of the progress I was — or in many cases wasn't — making. In the middle of the trough when it's really not clear whether it will be possible to come up with a solution, some of those occasions felt quite dark. If I hadn't been writing the diary I can imagine myself choosing to take a break and then having that break go on for several days.

But, and this is a big but, I was supported the entire way through by the amazing Sailfish community, who responded to my posts on Mastodon and the forum and were unfailingly positive. I'm not a social person and this was a bit of a shock for me. People out there in the Sailfish community and beyond really are the most encouraging and thoughtful people you could hope to interact with.

The amazing images and poetry from the likes of Thigg (thigg) and Leif-Jöran Olsson (ljo) are beautiful cases in point.
 
Montage of Thigg's images, twelve in total, all colourful flyping pigs in various situations, with quite a few geckos and foxes as well.

But there are so many people who helped and contributed in so many ways, I couldn't possibly mention everyone here. I apologise for not mentioning you all individually, but I'm really grateful.

Besides the community I also have to mention Joanna, my wife, who's sacrificed more than anyone else for the sake of me spending three hours each day and most of my weekends on gecko development. She carried me through this.

With all of this support, I found the experience surprisingly effortless. Perhaps the biggest challenge, as it turns out, was finding a suitable point to wind things down. Dropping off from posting diary entries every day, and from having a very clear purpose for my free time, has been hard to manage in a measured way. It was too much of a cliff edge and, if I do this again, I think I'd want to look into ways to mitigate this. But I don't yet have a good solution: writing these diary entries doesn't lend itself to a tapered reduction of work.

Future Work

Future work for this project comes in two forms. There's the future work needed to achieve the (hopefully) near-term goal of getting the browser released to users as part of Sailfish OS. Then there's the longer term goal of what to do beyond that.

As I write this, the current situation is that three out of five pull requests have been merged into Jolla's repositories, with the remainder having been through a couple of review rounds already. So the immediate task is to get them through review and merged in. This alone won't result in their release as part of Sailfish OS as they're currently being merged into bespoke ESR 91 branches. Jolla will need to merge these into the main branch before they can become part of any official Sailfish OS release.

It's nevertheless exciting to see that as part of the recent upgrade from Sailfish OS 4.6.0.13 to 4.6.0.15, several changes to libhybris were included that will support the move to ESR 91. As readers of my diary entries will know, there were several issues that caused the browser to crash or hang which were ultimately traced back to libhybris and which, looking at the changelog, will now be fixed. If ESR 91 does go out in some future Sailfish OS release, this will make the transition much smoother.

At present I've been building exclusively for aarch64. The build will need to be tested and potentially amended for armv7hl and i486 targets. On top of this, it appears that getting the browser to work on native platforms such as the emulator and the PinePhone, where there is no libhybris layer, will also require some additional work.

In the longer term, there are two, maybe three, objectives. The obvious next step after the release of ESR 91 would be to move to the next ESR release, which is 102.15.1. Checking the various release notes we can see that ESR 91.9.1 was released on 20 May 2022, whereas ESR 102.15.1 was released on 12 September 2023. That's a gap of around 16 months. So far the upgrade from ESR 78 has taken 13 months, so it looks like we may have an opportunity to catch up with Firefox ESR latest. In practice though it's usually around 12 months between ESR releases so some acceleration will be needed if we're to properly keep up. It's worth noting that the extended service releases have a much longer support cycle than other releases, which can lead to some overlap. For example both ESR 115.15.0 and ESR 128.2.0 were released on 3 September 2024.
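
As a quick check of that gap:

from datetime import date

# Release dates of ESR 91.9.1 and ESR 102.15.1 as given above.
gap = date(2023, 9, 12) - date(2022, 5, 20)
print(f"{gap.days} days, roughly {gap.days / 30.44:.0f} months")   # 480 days, roughly 16 months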

Besides the obvious upgrade to the renderer engine it would also be great to add features to the browser. On the Sailfish OS Forum Niels (fingus) suggested supporting MPRIS for the video and audio controls of the browser. That's the sort of thing I'd love to add, but which would require some research and effort to investigate and implement. I'd also love to introduce support for reader mode, scrollbars and maybe even extensions. There's no shortage of interesting ideas for things to work on.

The third objective would be to properly support the WebRender compositor on Sailfish OS. It's not clear how much work this would involve, but it's potentially substantial. Integrating this with the Sailfish OS render pipeline could be quite a challenge.

Finally there's plenty of scope to make important improvements to the browser build process. Updating Rust, fixing the multi-process hang — which remains a significant barrier to reducing build times — and introducing a build cache would all help to make development easier.

Lessons Learned

The main outcome of this work for me has been the reaffirmation that the browser is a critical component of Sailfish OS. The better the browser the more usable Sailfish OS becomes as a daily driver. Make no mistake, the reason I wanted to do this work was for entirely selfish reasons: Sailfish OS is my mobile phone operating system of choice. I enjoy using it and I want it to remain relevant so that it continues to be supported. Upgrading the browser is my way of helping ensure this happens; it's my itch and I've been scratching it.

But I've learnt a whole lot more than this and not just from the process of development, but also from the experience of writing a daily diary about it. I'd like to think that the work has helped demonstrate the importance and benefit of open source, for users of course, but also for Jolla. Jolla invested heavily in ensuring the browser is open source. Not just in giving the code the right licence and making the source available, but also in documenting it, following an open development model and supporting the community in making it accessible. In no way was this a "free" browser upgrade for Jolla, but I hope it goes some small way to justifying this open source strategy. I'd also like to think the diary entries have demonstrated some of the benefits of being open about progress as well.

I've also learnt more than I'd like to admit about Brownian debugging. This is the process of performing a random walk, changing bits of the code en route, until it works. It may not be the most efficient debugging approach and it may be that an element of strategic direction improves matters, but as long as the problem space can be constrained I've found Brownian debugging can be unexpectedly effective. Given enough time and patience.

There's a follow-up to this, which is that it also demonstrates how much can be achieved without the benefit of understanding or insight, but relying on perseverance alone. I'm definitely more familiar with the gecko code than when I started, but the gaps in my knowledge remain prodigious. Armed only with my abilities in Brownian debugging and enough time to deploy them, I managed to make some progress.

I admit this wasn't my first involvement in upgrading the browser. While working at Jolla I contributed to the upgrade from ESR 60 to ESR 68, and then again from ESR 68 to ESR 78. But that was as part of a team with an incredible depth of knowledge of the browser and impressive software development skills. When I started this process I wasn't at all certain whether I'd be able to make any meaningful contribution to the next upgrade. I'm now much more confident that not only has this been possible, but that I'd be able to do it again.

It's been great to feel some purpose within the Sailfish OS community. I really enjoyed working for Jolla, not least because it felt worthwhile contributing to an operating system I love using, but also because it meant contributing to the community I felt a part of. Doing this work has served as a great way to continue feeling like I have something to contribute.

Writing the development diaries was, I hope, helpful in demonstrating that work was continuing on the browser: it hadn't been forgotten or left to decay. It gave me a lot more visibility than I would have got otherwise. Crucially though, it made me realise that there are many, many Sailfish OS developers putting in similar or greater levels of commitment, for ports and apps and bug checking, who may not have the same visibility because they're not writing a diary, but who nevertheless put in more work and deserve the same appreciation that I've felt privileged to receive from the community.