Web Rendering APIs – an introduction and comparison of WebGL and WebGPU

What is rendering?

rendering

noun

UK  /ˈren.dər.ɪŋ/ US  /ˈren.dɚ.ɪŋ/

rendering noun (PERFORMANCE)

the way that something is performed, written, drawn, etc.

Source: Cambridge Dictionary – Rendering

When it comes to performing the computations for 3D computer graphics, rendering engines consist of computer hardware. Some computations can run on a standard CPU, but more complex graphics operations need specialized hardware with massively parallel processors capable of rasterization or ray tracing. The computing hardware that produces the imagery determines the type and number of effects available for creating a real-time experience for a designer. These effects are techniques for increasing the richness of a visual rendering, such as advanced shading methods, lighting, texture mapping, translucency, and atmospheric effects. Before 1981, such rendering engines were available exclusively in expensive military or airline flight simulators. Then Silicon Graphics, Inc. was founded and created affordable graphics hardware with a reasonable price-to-performance ratio. After that, graphics hardware spread not only to academic and business research organizations but also to households and smartphones.

The basic architecture of a rendering pipeline includes application, geometry processing, rasterization, and pixel processing. Every stage can be a pipeline by itself or be partly or fully parallelized. All of the stages are processed and executed on the GPU except the application stage, which usually runs on the CPU.

Source: Real-Time Rendering, 4th Edition Figures

 

The geometry processing itself is a pipeline with the stages vertex shading, projection, clipping, and screen mapping. Vertex shading is responsible for computing the vertices’ positions, normals, and texture coordinates, applying lights to each vertex, and storing the resulting colors. Screen mapping is responsible for mapping the primitives’ coordinates to the window’s screen coordinates. There are differences between the APIs here; for example, OpenGL and DirectX start counting from different corners of the screen.

The third rendering pipeline stage, rasterization, contains the triangle (also point and line) setup and traversal. Its job is to bridge geometry and pixels by determining which pixels each primitive covers.

The last rendering pipeline stage is pixel processing, which is further divided into pixel shading and merging. In contrast to the rasterization stage, which is performed by dedicated hardware, pixel shading can be fully handled by an API program, for example the fragment shader in OpenGL, while merging is mostly configurable via the API. Lastly, the frame buffer collects all the buffers in a system.

What are Web rendering APIs?

A Web rendering API can be described as a low-level API used for rendering graphics in web browsers. To start at a high level, the following architecture diagram gives an overall picture of how a Web app is linked with the OS and GPU resources by one upcoming Web rendering API, WebGPU:

Source: Access modern GPU features with WebGPU

 

Each browser has a more or less uniquely designed browser engine, which is why a single website can look different from browser to browser: the site is interpreted differently, which can lead to cross-browser incompatibilities.

Rendering engine | JavaScript engine | Web browser
Blink            | V8                | Chrome
Blink            | V8                | Opera
WebKit           | Nitro             | Safari
Gecko            | SpiderMonkey      | Firefox
Trident          | Chakra            | Internet Explorer
EdgeHTML         | Chakra            | Edge

Source: Browser Engines: The Crux Of Cross Browser Compatibility

Using Mozilla Firefox’s rendering engine, Gecko, as an example, we can have a look at how it works with WebGL:

One part of Gecko is WebRender. WebRender serializes the display list, which describes a visible subset of a page and is produced by the layout module in the content process, and sends it to the main or GPU process for rendering. These frames are consumed by the renderer to produce OpenGL drawing commands. Because WebRender is based on OpenGL, it needs ANGLE to translate these commands, for example into Direct3D on Windows.

OpenGL is a graphics hardware API (Application Programming Interface) that renders 2D and 3D objects into a frame buffer, where the objects are described as sequences of vertices (for geometric objects) or pixels (for images).

ANGLE (Almost Native Graphics Layer Engine) allows WebGL and other OpenGL ES content to run on multiple operating systems by translating the OpenGL ES API calls into one of the hardware-supported APIs. ANGLE currently translates OpenGL ES 2.0, 3.0, and 3.1 to Vulkan, desktop OpenGL, OpenGL ES, Direct3D 9, and Direct3D 11; support is planned for ES 3.2, for Metal and macOS, and for Chrome OS and Fuchsia.

In summary, WebRender builds the display list, scene, and frame, and executes GPU commands with the help of ANGLE and the underlying hardware-supported APIs, serving Web rendering APIs like WebGL and WebGPU.

What is WebGL?

The Web Graphics Library (WebGL) is an API based on a subset of OpenGL called OpenGL ES 2.0+ and is used for rasterization in the browser. That means it draws triangles, lines, and points from the supplied code in the browser without requiring further plugins, and it runs on the GPU and is therefore hardware-accelerated. The supplied code needs to be written in GLSL (GL Shading Language) and comes in pairs of functions: a vertex shader and a fragment shader. The vertex shader computes the positions of a primitive’s vertices, whereas the fragment shader computes the color of each pixel the primitive covers. These pairs of functions are set up as states and supplied via the WebGL API to the GPU.

Source: WebGL Beginner’s Guide
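To make this a little more concrete, here is a minimal TypeScript sketch of such a shader pair and how it is compiled and linked into a program for the GPU. The GLSL sources and the helper function are illustrative, not taken from the original article or any specific library:

// 🔺 A minimal, illustrative vertex/fragment shader pair in GLSL
const vertexSource = `
  attribute vec4 aPosition;
  void main() {
    gl_Position = aPosition; // pass the vertex position through
  }
`;
const fragmentSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // color every covered pixel orange
  }
`;

// 🔧 Compile both shaders and link them into one program (state) for the GPU
function createProgram(gl: WebGLRenderingContext): WebGLProgram {
  const compile = (type: number, source: string): WebGLShader => {
    const shader = gl.createShader(type)!;
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
  };
  const program = gl.createProgram()!;
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  return program;
}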

Common frameworks and libraries such as Voxel.js, Unreal Engine 4, Unity 5, Three.js, Babylon.js, and many more support and use WebGL.

A downside of WebGL is that, even though rendering is hardware-accelerated, complex images and animations are still not rendered fast enough and can exhaust resources. In summary, the reason why there will be no WebGL 3.0 is that it has become increasingly hard to implement: it no longer matches modern GPU design and can cause performance issues. Therefore, the W3C is working on a new standard called WebGPU.

What is WebGPU?

WebGPU is an API that exposes GPU hardware capabilities to the Web and maps efficiently to (post-2014) native GPU APIs; it is not related to WebGL. Instead of porting an existing graphics interface, it builds on hardware-oriented interfaces like Vulkan, Metal, and Direct3D 12. Its raster graphics pipeline looks like this:

Source: Raw WebGPU – Graphics Pipeline

By 2017, the first proposals for this standard had been made to tackle performance, usability, portability, and security. Google started testing WebGPU with Chrome 94 in approximately October 2021 and plans to roll it out with Chrome 99 in approximately March 2022. In Firefox 96-97 it is supported but has to be enabled by a flag; the same applies to Edge 96, Safari TP, and Opera 82. Their status can be checked here. Furthermore, whether your GPU is supported by WebGPU depends on the vendor extension it exposes.
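Whether a browser already exposes WebGPU can also be probed from script. A minimal, hedged sketch in TypeScript (the function name is illustrative):

// 🔍 Feature-detect WebGPU and request an adapter/device
async function initWebGPU(): Promise<GPUDevice | null> {
  if (!('gpu' in navigator)) {
    console.warn('WebGPU is not available in this browser.');
    return null;
  }
  const adapter = await navigator.gpu.requestAdapter(); // picks a physical GPU
  if (!adapter) return null;
  return adapter.requestDevice();                       // logical device used for all further calls
}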

What are the differences between WebGL and WebGPU?

Before going through more technical differences, let’s start with a view on general differences between WebGL and WebGPU:

WebGPU is announced as the successor of WebGL but is still in its infancy: where it is available at all, it is turned off by default in the browser’s settings, whereas WebGL is stable at specification 2.0. WebGPU is Vulkan-based; Vulkan is a slightly lower-level API than OpenGL ES and is known to balance CPU/GPU usage better.

Their history is different: in 2009 a working group of the Khronos Group started to work on WebGL, basing it on OpenGL in order to attract more contributing developers, as OpenGL was very common. Debates about whether to use WebGL or Flash, OpenGL or DirectX, fueled competition and development. Flash support was dropped in 2020, while DirectX was long preferred by game developers due to its Windows support and better performance. However, things may change with Vulkan, which was started by AMD and DICE in 2013 and then donated to the Khronos Group in 2016. Vulkan has since been developed further so that it can be layered on top of DirectX 12 and Metal.

Another difference is the owner and licensing model: WebGPU is owned by the W3C and developed by its GPU for the Web Community Group. Specification contributions are made under the W3C CLA and software contributions under the GPU for the Web 3-Clause BSD License. The owner of WebGL is the Khronos Group; it is free of charge and has a license text similar to MIT.

Practical reasons for using WebGL are that, thanks to its maturity, it is easier to learn and use, it is supported by most browser vendors, and it only needs a text editor and a browser to develop 3D graphics apps.

On the other hand, WebGL has its roots back in 1992 with OpenGL, and GPU designs and their APIs have evolved considerably since then toward higher performance.

The first technical difference between WebGL and WebGPU is the separation of concerns. In WebGL, resource management, work preparation, and submission all live in one single context object that holds the associated state. In WebGPU, resource management, work preparation, and submission are separated: textures, buffers, and other resources are created with a GPUDevice; individual commands of a render or compute pass are encoded with a GPUCommandEncoder and turned into a GPUCommandBuffer, which is submitted to the GPUQueue. GPU resources can be created on the fly, for example by streaming data in one or more workers and submitting it all together. Multi-core processors can be highly utilized by such multithreading scenarios:

Source: A Taste of WebGPU in Firefox
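As a rough TypeScript sketch of this flow (assuming a device, a pipeline, and a colorView texture view already exist; the exact descriptor fields have shifted between spec drafts), recording and submitting work could look like this:

// ✍️ Record commands with a GPUCommandEncoder
const commandEncoder: GPUCommandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: colorView,                            // texture view to render into (assumed)
    loadOp: 'clear',
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    storeOp: 'store',
  }],
});
renderPass.setPipeline(pipeline);               // pipeline state object (assumed)
renderPass.draw(3);                             // draw one triangle
renderPass.end();

// 📦 Turn the recorded work into a GPUCommandBuffer and hand it to the GPUQueue
const commandBuffer: GPUCommandBuffer = commandEncoder.finish();
device.queue.submit([commandBuffer]);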

The second difference concerns the pipeline state. In WebGL, the driver takes all states into consideration after the program initializes, which means it may need to recompile certain programs and can cause CPU stalls. WebGPU uses GPURenderPipeline and GPUComputePipeline objects to handle pipeline state. These pipeline state objects summarize everything the user creates or changes up front, so that when they are used later in GPU operations, browsers and hardware drivers avoid additional effort. Another advantage of the state objects is easier development: because they are more coarse-grained, it is easier to decide which state to change or preserve. In general, the pipeline state comprises shaders, vertex buffer and attribute layouts, bind group layouts, blending, depth and stencil states, and output render target formats.
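A hedged sketch of such an upfront pipeline description could look like this (the WGSL source, the vertex layout, and the chosen formats are illustrative assumptions):

// 🧱 Describe the whole pipeline state once, up front
const shaderModule = device.createShaderModule({ code: wgslSource }); // wgslSource is assumed
const pipeline: GPURenderPipeline = device.createRenderPipeline({
  layout: 'auto',                              // let the implementation derive bind group layouts
  vertex: {
    module: shaderModule,
    entryPoint: 'vs_main',
    buffers: [{
      arrayStride: 12,                         // 3 floats per vertex position
      attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x3' }],
    }],
  },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs_main',
    targets: [{ format: 'bgra8unorm' }],       // output render target format
  },
  primitive: { topology: 'triangle-list' },
});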

The third difference concerns the binding model. WebGPU’s binding model is Vulkan-inspired and groups resources into GPUBindGroup objects that conform to a GPUBindGroupLayout object. The shader resources can then be handled by binding these group objects during command recording. The graphics driver is able to prepare in advance when the group objects are created ahead of time, and the browser can update draw calls much faster. Pipeline states, bind groups, and bind group layouts work together and inform each other to mediate between shader and API.

Source: A Taste of WebGPU in Firefox
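A minimal TypeScript sketch of this binding model (device, uniformBuffer, and the renderPass from the earlier sketch are assumed):

// 🔗 Describe resource bindings once (layout), then bind them as a group
const bindGroupLayout: GPUBindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
    buffer: { type: 'uniform' },
  }],
});
const bindGroup: GPUBindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{ binding: 0, resource: { buffer: uniformBuffer } }], // uniformBuffer is assumed
});

// During command recording, the whole group is bound with a single call
renderPass.setBindGroup(0, bindGroup);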

What are the similarities between WebGL and WebGPU?

Both are free, low-level APIs, and both are used for rendering 2D and 3D graphics in web browsers. Neither needs any additional plugins to be installed. The major browser vendors are part of their working groups and have implemented or are implementing them.

Both need a powerful shading language that can be efficiently serialized, transferred, and validated to ensure safe shaders; it should also be human-readable and resemble an assembly-like language. However, WebGPU uses WGSL, a new shader language modeled on Vulkan’s approach, whereas WebGL uses GLSL.
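As a small, purely illustrative comparison, the same trivial fragment shader could look roughly like this in both languages, embedded here as TypeScript strings (note that the exact WGSL attribute syntax has changed across spec drafts):

// 🎨 The same trivial fragment shader, once in GLSL (WebGL) and once in WGSL (WebGPU)
const glslFragment = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;
const wgslFragment = `
  @fragment
  fn fs_main() -> @location(0) vec4<f32> {
    return vec4<f32>(1.0, 0.0, 0.0, 1.0);
  }
`;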

Both need an HTMLCanvasElement for initialization, rendering the output, and displaying the drawing.

The Canvas Context setup in WebGPU looks like the following in TypeScript:

// ✋ Declare context handle
let context: GPUCanvasContext | null = null;

// ⚪ Create Context
context = canvas.getContext('webgpu');

Source:  Raw WebGPU – Canvas Context
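In addition, current WebGPU drafts require the context to be configured with a device and a texture format before drawing. A hedged sketch (device is assumed to exist, and the preferred-format helper differs between drafts):

// ⚙️ Configure the context with a device and canvas format
context.configure({
  device: device,
  format: navigator.gpu.getPreferredCanvasFormat(),
  alphaMode: 'opaque',
});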

For WebGL the getContext function needs to be called on the canvas like the following in TypeScript:

// 👋 Declare handles

let canvas: HTMLCanvasElement = document.getElementById('webgl') as HTMLCanvasElement;

// ⚪ Initialization

let gl: WebGLRenderingContext | null = canvas.getContext('webgl');

Source: Raw WebGPU – Initialize API

Both need to initialize resources like vertex and index buffers as well as vertex and fragment shaders, and keep everything updated while rendering.
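For example, a vertex buffer in WebGPU might be created and filled roughly like this (a sketch; the triangle data is illustrative and device is assumed):

// 📐 Create a vertex buffer on the GPUDevice and upload the data via the queue
const vertices = new Float32Array([
   0.0,  0.5, 0.0,   // top
  -0.5, -0.5, 0.0,   // bottom left
   0.5, -0.5, 0.0,   // bottom right
]);
const vertexBuffer: GPUBuffer = device.createBuffer({
  size: vertices.byteLength,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);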

Both consist of two programmable stages in the rendering pipeline: the vertex and the fragment shader. Additionally, WebGPU also supports compute shaders.
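To illustrate that extra stage, a minimal compute dispatch in WebGPU could look roughly like this (a sketch; the WGSL source, the workgroup size, and storageBuffer are illustrative assumptions):

// 🧮 A minimal compute pass that doubles every element of a storage buffer
const computeModule = device.createShaderModule({
  code: `
    @group(0) @binding(0) var<storage, read_write> data: array<f32>;
    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
      data[id.x] = data[id.x] * 2.0;
    }
  `,
});
const computePipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module: computeModule, entryPoint: 'main' },
});
const dataBindGroup = device.createBindGroup({
  layout: computePipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: storageBuffer } }], // storageBuffer is assumed
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, dataBindGroup);
pass.dispatchWorkgroups(16);          // 16 workgroups × 64 invocations
pass.end();
device.queue.submit([encoder.finish()]);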

Summary

WebGL emerged to bring sophisticated 3D graphics to our browsers and is still fully supported by most browser vendors and many frameworks and libraries. The rendering pipeline is a highly complex and delicate field, and WebGL still masters it. However, its performance could be better, and it is mostly restricted to drawing images. Instead of developing another version of WebGL, something new is brewing: WebGPU and its support for modern GPUs. WebGPU promises general-purpose computation on the GPU thanks to its access to advanced, modern GPU features. Think of highly detailed scenes with CAD models, advanced algorithms for drawing realistic scenes, or machine learning. I am excited about the WebGPU examples that are already possible and extremely excited about the possibilities it will open up for browser gaming, Web engineering, and Web app development.

Sources

Cambridge Dictionary – Rendering

William R. Sherman, Alan B. Craig, in Understanding Virtual Reality (Second Edition), 2018

Tomas Akenine-Möller, Eric Haines, Naty Hoffman, Angelo Pesce, Michal Iwanicki, Sébastien Hillaire, in Real-Time Rendering (Fourth Edition), 2018

Real-Time Rendering, 4th Edition Figures

Mike Shema, in Hacking Web Apps, 2012

Sampsa Rauti, Ville Leppänen, in Emerging Trends in ICT Security, 2014

Rendering Engine

Browser Engines: The Crux Of Cross Browser Compatibility

OpenGL Reference Manual

ANGLE

Gecko: Overview

WebGL Fundamentals

Diego Cantor, Brandon Jones, in WebGL Beginner’s Guide, 2012

WebGL User Contributions

WebGPU

The story of WebGPU — The successor to WebGL

GPU for the Web Working Group

WebGPU License

WebGL License

Kouichi Matsuda, Rodger Lea, in WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL, 2013

WebGPU Explainer

WebGPU Explainer – Why not WebGL3

A Taste of WebGPU in Firefox

Next-generation 3D Graphics on the Web

Raw WebGPU

Raw WebGL

Access modern GPU features with WebGPU

WebGL Beginner’s Guide

Can I use WebGPU

Expose whether WebGPU is supported or not