WARP: FIREFOX BOOSTS JAVASCRIPT PERFORMANCE

Boost JavaScript performance

Introduction

Warp is an update to the SpiderMonkey JavaScript engine that ships in Firefox 83, where it is enabled by default. The update offers improved JavaScript performance to Firefox users. It is also known as WarpBuilder; by making changes to the JIT (just-in-time) compilers, Warp improves responsiveness and memory use and speeds up page loads. JIT optimization now relies solely on CacheIR, a simple linear bytecode format. The new architecture is described as more maintainable and as unlocking additional SpiderMonkey improvements.


The Firefox Warp update

Firefox 83 was expected to ship on November 17, but it was published in beta on October 20. Warp is proving to be faster than Ion, SpiderMonkey's previous optimizing JIT, including a 20 percent improvement in load time for Google Docs. Other JavaScript-intensive websites, including Netflix and Reddit, show similar improvements. Basing Warp on CacheIR allowed code to be removed across the entire engine: the code that was needed to track the global type inference data used by IonBuilder, which results in speedups. Both IonBuilder and WarpBuilder produce Ion MIR, an intermediate representation used by the optimizing JIT back-end, but IonBuilder contains a lot of complicated code that WarpBuilder doesn't need. Warp can also do much of its work off-thread and requires fewer recompilations. Plans call for continuing to optimize Warp, which is still slower than Ion on some synthetic benchmarks.

Warp has replaced the front end of the IonMonkey JIT (the MIR building phase). The old code and architecture are expected to be removed in Firefox 85, which should bring additional performance and memory-use improvements. Mozilla will then incrementally optimize the back-end of the IonMonkey JIT; they say JavaScript-intensive workloads still have room for improvement. A tool that lets web developers and Mozilla explore the CacheIR data for a JavaScript function is also in progress.

How Warp Works

Multiple JITs:

When running JavaScript, the engine first parses the source code into a lower-level bytecode representation. The bytecode can be executed immediately by an interpreter or compiled into native code by a just-in-time (JIT) compiler. Modern JavaScript engines have several tiered execution engines, and JS functions can switch between tiers depending on the expected benefit of switching:

  1. Interpreters and baseline JITs have fast compilation times. They perform only basic code optimizations (usually based on Inline Caches) and collect profiling data.
  2. The optimizing JIT performs advanced compiler optimizations, but it has slower compilation times and uses more memory, so it is only used for functions that are warm (called many times).

The optimizing JIT makes assumptions based on the profiling data collected by the other tiers. If these assumptions turn out to be wrong, the optimized code is discarded. Execution then resumes in the baseline tiers and the function has to warm up again. This is called a bailout.
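
As a rough illustration (not from the original article; the function below is hypothetical), a hot function can be optimized under a type assumption and then bail out when that assumption is broken:

  // Hypothetical example: after many calls with integers, the optimizing JIT
  // compiles `add` under the assumption that both arguments are integers.
  function add(a, b) {
    return a + b;
  }

  for (let i = 0; i < 100000; i++) {
    add(i, 1); // warms up: `add` becomes hot and gets JIT-optimized
  }

  // Passing a string breaks the type assumption: the optimized code is
  // discarded (a bailout) and execution resumes in the baseline tiers.
  add("foo", 1);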

Profiling data:

The previous optimizing JIT, Ion, used two very different systems for collecting the profiling information that guides JIT optimizations. The first is Type Inference (TI), which gathers global information about the types of objects used in the JS code. The second is CacheIR, a simple linear bytecode format used by the Baseline Interpreter and the Baseline JIT as the fundamental optimization primitive. Ion relied mostly on TI, but it sometimes used CacheIR information when TI data was unavailable.

CacheIR:

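The example function from the original article is an image and is not reproduced here; a minimal sketch consistent with the description below (a property access o.x followed by a subtraction) might look like:

  // Hypothetical sketch: one property access (o.x) and one subtraction,
  // each backed by its own Inline Cache in the baseline tiers.
  function f(o) {
    return o.x - 1;
  }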

Consider a JS function like the one sketched above. The Baseline Interpreter and Baseline JIT use two Inline Caches (ICs) for this function: one for the property access (o.x) and one for the subtraction. That is because we can't optimize the function without knowing the types of o and o.x. The IC for the property access o.x is invoked with the value of o and can then attach an IC stub to optimize the operation. In SpiderMonkey, this works by first generating CacheIR. If o is an object and x is a simple data property, we generate the following:

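The generated CacheIR is also shown as an image in the original article; a sketch of what it might look like for this case (the instruction names follow SpiderMonkey's CacheIR, but the exact operands are illustrative) is:

  GuardToObject        input 0    // guard that the input value (o) is an object
  GuardShape           obj 0      // guard that the object has the expected shape
  LoadFixedSlotResult  obj 0      // load the value of o.x from the object's slot
  ReturnFromIC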

Here, we first guard that the input (o) is an object, then guard on the shape of the object, and then load the value of o.x from the object's slot. The subtraction IC works in the same way.

The CacheIR instructions capture everything we need to do in order to optimize an operation. There are a few hundred CacheIR instructions, defined in a YAML file. These are the building blocks of the JIT optimization pipeline.

Warp:

Transpiling CacheIR to MIR: if a JS function is called often, we want to compile it with the optimizing compiler. With Warp, there are three steps:

  • WarpOracle: runs on the main thread and creates a snapshot containing the Baseline CacheIR data.
  • WarpBuilder: runs off-thread and builds MIR from the snapshot.
  • Optimizing JIT back-end: also runs off-thread, optimizes the MIR, and generates machine code.

WarpBuilder contains a transpiler that transpiles CacheIR into MIR. This is a very mechanical process: for each CacheIR instruction, it simply generates the corresponding MIR instruction(s).
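
To illustrate how mechanical this is (the MIR names below are illustrative labels, not the article's or the engine's exact node names), the CacheIR from the earlier property-access sketch would map roughly one-to-one onto MIR nodes:

  // GuardToObject        ->  a guard-object MIR node
  // GuardShape           ->  a guard-shape MIR node
  // LoadFixedSlotResult  ->  a load-fixed-slot MIR node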

Trial inlining: type-specializing inlined functions:

Optimizing JavaScript JITs are able to inline JavaScript functions into their callers. With Warp, we take this a step further: Warp can specialize inlined functions based on the call site.

Trial inlining is also done recursively, which makes it very effective. Consider, for instance, the following JS code:

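The code listing from the original article is an image and is not reproduced here; a minimal sketch consistent with the description below (the names callWithArg, test, and fun come from the text, while the function bodies are illustrative) might look like:

  function callWithArg(fun, x) {
    return fun(x);
  }

  function test(a) {
    // Two separate call sites: trial inlining can create a specialized
    // ICScript for each callWithArg call and then recursively specialize
    // the `fun` call inside each one based on its caller.
    var b = callWithArg(x => x + 1, a);
    var c = callWithArg(x => x - 1, a);
    return b + c;
  }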

When we perform trial inlining for the test function, we create a specialized ICScript for each of the callWithArg calls. Later, we attempt recursive trial inlining inside these caller-specialized callWithArg functions, so the fun call can then be specialized based on its caller. This was not possible in IonBuilder. When it is time to Warp-compile the test function, we have the caller-specialized CacheIR data and can produce optimal code.

Optimizing built-in functions:

IonBuilder inlines certain built-in functions directly. This is particularly useful for functions such as Math.abs and Array.prototype.push, because we can implement them with a few machine instructions, which is much faster than calling the function. Since CacheIR powers Warp, the team decided to generate optimized CacheIR for calls to these functions instead.
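
As a simple illustration (the function below is hypothetical, not from the original article), calls like these can be recognized by the engine and compiled down to a few machine instructions instead of full function calls:

  function hot(values, x) {
    // Math.abs and Array.prototype.push are built-ins the JIT can optimize.
    values.push(Math.abs(x));
    return values.length;
  }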

Results:

Performance:


On many workloads, Warp is faster than Ion. The gains come mainly from basing Warp on CacheIR, which let Mozilla delete the code throughout the engine that was needed to track the global type inference data used by IonBuilder, resulting in speedups across the engine. Warp is also able to do more work off-thread and requires fewer recompilations.

Synthetic JS benchmarks:

On certain synthetic JS benchmarks such as Octane and Kraken, Warp is currently slower than Ion. This is not too surprising, because Warp has to compete with nearly a decade of optimization work and tuning aimed specifically at those benchmarks. Mozilla will continue to optimize Warp in the coming months and expects improvements on all of these workloads.

Memory usage:


Removing the global type inference data means that less memory is used. Further improvements are expected in the coming months as the old code is deleted and more data structures can be simplified.

Faster GCs

The type inference data also added a lot of garbage collection overhead. Telemetry data shows significant improvements in GC sweeping times since Warp was enabled by default in Firefox Nightly on September 23.

Maintainability and developer velocity:

Because WarpBuilder is far more mechanical than IonBuilder, the code is much simpler, more compact, more maintainable, and less error-prone. By using CacheIR everywhere, new optimizations can be implemented with less code. This makes it easier for the team to improve performance and add new features.

Warp Features 

Warp, or WarpBuilder, is Mozilla's project to improve JavaScript performance and memory use in Firefox. The SpiderMonkey (JS) team has been working on this significant update. It also improves efficiency by reducing the amount of internal type data that is tracked, optimizing a wider variety of cases, and basing those optimizations on CacheIR. The four main features are:

  • A simpler design,
  • Improved responsiveness and faster page loads,
  • A roughly 10 percent improvement on the Speedometer benchmark,
  • Lower memory use by the JavaScript engine when Warp is active.

The steps to enable or disable Warp in the Firefox browser are:

  1. Visit about:config,
  2. Click on "Accept the Risk and Continue",
  3. Search for 'javascript.options.warp' and set its value to true (or false to disable Warp).
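
Alternatively, assuming a Firefox build in which this preference still exists, the same setting can be applied from a user.js file in the profile directory:

  user_pref("javascript.options.warp", true);  // set to false to disable Warp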

Wrapping up 

Mozilla is using a new optimization architecture to bring a significant performance boost to the Firefox JavaScript engine. The improved JavaScript engine is one of the most notable changes: it increases page-loading speed and responsiveness and decreases memory use. Several accessibility and usability updates are also present in the latest Firefox release:

  1. Screen readers now correctly report paragraphs instead of lines in Google Docs.
  2. Screen readers now report words correctly when there is punctuation nearby.
  3. The arrow keys now work correctly in the picture-in-picture window after tabbing to it.

Mozilla has replaced the front end (the MIR building phase) of the IonMonkey JIT with Warp. Removing the old code and architecture is the next step and is planned for Firefox 85; additional improvements in performance and memory use are expected from that. Mozilla will also incrementally simplify and optimize the back-end of the IonMonkey JIT, and they believe there is still plenty of room for improvement on JS-intensive workloads. Finally, since all of their JITs are now based on CacheIR data, they are working on a tool that lets them (and web developers) explore the CacheIR data for a JS function. This will help developers better understand JS performance. For any queries, you can contact us.