Brotli compression
Brotli compression improves website performance by offering 20-26% better compression than existing algorithms such as GZIP.
Brotli is a new compression algorithm first announced on Google's Open Source Blog in September 2015. Developed by Jyrki Alakuijala and Zoltán Szabadka of the Compression Team at Google Research Europe, it provides enhancements in lossless data compression, generally outperforming the current industry standard, GZIP. Google claims it offers 20% to 26% better compression than existing algorithms, so it has the potential to deliver a worthwhile performance improvement to websites and web applications.
A Google intern in London, Anamaria Cotîrlea, implemented Brotli compression on Google's Play Store last summer, saving users an impressive 1.5 petabytes (1.5 million gigabytes) of data each day.
Browser Support
Browser support at the time of writing stands at 56% globally, and whilst that's not amazing, support is growing quickly. It's also safe to implement now: supporting browsers advertise the fact in the Accept-Encoding HTTP request header, so a supporting web server can respond with a Brotli-encoded response, while for non-supporting browsers the web server can fall back to serving a GZIP-encoded response. Brotli support landed in Google Chrome version 50, Mozilla Firefox version 44, Opera 38 and Microsoft Edge 15. It's also worth noting that browsers only advertise Brotli support over HTTPS connections.
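To make that negotiation concrete, here's a minimal Python sketch of the server-side decision, using the brotli package and the standard library's gzip module. The compress_response helper is our own illustration rather than part of any particular framework, and it ignores q-values for brevity:

```python
import gzip

import brotli  # pip install Brotli


def compress_response(body: bytes, accept_encoding: str) -> tuple[bytes, str]:
    """Pick the best encoding the client advertised in Accept-Encoding."""
    # Split "gzip, deflate, br" into {"gzip", "deflate", "br"},
    # discarding any ";q=..." quality parameters.
    encodings = {e.split(";")[0].strip().lower()
                 for e in accept_encoding.split(",")}
    if "br" in encodings:
        # Client supports Brotli; quality 11 is the (slow) maximum.
        return brotli.compress(body, quality=11), "br"
    if "gzip" in encodings:
        # Fall back to GZIP for everyone else.
        return gzip.compress(body), "gzip"
    return body, "identity"


# A Brotli-capable browser sends "gzip, deflate, br" over HTTPS:
compressed, encoding = compress_response(b"<html>...</html>" * 100,
                                         "gzip, deflate, br")
print(encoding, len(compressed))  # -> br, far smaller than the input
```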
Web Server Support
Nginx and, more recently, Apache both support Brotli compression, but enabling it in Nginx is not a trivial task as it requires compiling Nginx from source with Google's Brotli module included. Fear not though: we've written a how-to guide on compiling Nginx from source to add Brotli compression.
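Once Nginx has been built with Google's ngx_brotli module, turning Brotli on is only a handful of directives. A sketch of what the configuration might look like (the MIME type list here is illustrative; tune it to your own content):

```nginx
# In the http block, once Nginx has been compiled with ngx_brotli
brotli            on;   # compress eligible responses on the fly
brotli_comp_level 6;    # 0-11: higher gives smaller output but costs more CPU
brotli_types      text/plain text/css application/javascript
                  application/json image/svg+xml;

# Keep GZIP enabled as the fallback for browsers without Brotli support
gzip       on;
gzip_types text/plain text/css application/javascript
           application/json image/svg+xml;
```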
Content Delivery Network (CDN) Support
Implementing Brotli faces yet more obstacles, as many CDN providers normalise the Accept-Encoding header, so origin servers are generally only passed GZIP even if the client supports Brotli compression. This is done for good reason: there are as many as 50 different values for the Accept-Encoding header in the wild, and without normalising the header CDNs would suffer cache dilution to such an extent that their effectiveness would be undermined.
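To illustrate why normalisation matters, here's a hypothetical Python sketch of the kind of mapping a CDN might apply before using the header as part of a cache key. Real CDNs each have their own rules, and many collapse everything down to GZIP only, which is exactly the obstacle described above:

```python
def normalise_accept_encoding(header: str) -> str:
    """Collapse arbitrary Accept-Encoding values into a small, fixed set.

    Dozens of variants ("gzip", "gzip,deflate", "GZIP, deflate, sdch", ...)
    would otherwise each create their own cached copy of every object.
    """
    encodings = {e.split(";")[0].strip().lower() for e in header.split(",")}
    if "br" in encodings:
        return "br"    # only useful if the origin can actually serve Brotli
    if "gzip" in encodings:
        return "gzip"
    return ""          # uncompressed


# All of these collapse to one cache key, so the cache stays effective:
for value in ("gzip", "gzip, deflate", "GZIP,deflate,sdch",
              "gzip;q=1.0, identity"):
    print(normalise_accept_encoding(value))  # -> gzip every time
```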