Editing the Image Editor
At the end of 2018, we made the decision to build our own Image Editor in order to give our customers essential tools and an intuitive experience. We released a version with the bare minimum feature set, planning to iterate on it quickly. We just released a second version with several powerful new features, including undo/redo, high-quality adjustments, and a suite of custom filters that are unmistakably Squarespace. This is how we built the filters.
Filtering results to build filters
Squarespace has always followed a design-first philosophy when building our products. That was true when I started—when we had about 60 employees—and it’s still true now as we creep towards 1,000. As an engineer here, you have to make a constant, conscious effort to maintain this design-first thinking when working on something new. Sometimes that means thinking through user-experience issues before adding a new panel another level deep. And sometimes it means making sure our customers have access to top-notch design tools to make the most beautiful site they can. It was this second type of design-first thinking that led to the filters in our new Image Editor.
To facilitate the canvas-drawing and image-manipulation logic, we decided to use the popular FabricJS library. FabricJS has a small set of built-in filters, but they are limited in scope and generally not concerned with the visual quality of the final image. At this point, we had about a dozen custom-designed filters from our design team and no way to translate adjustments from a design tool into code. It was imperative that we put our own design expertise to use for our customers and offer a suite of filters that were recognizably Squarespace at heart. There were therefore two important considerations our solution had to address:
- The design team must be able to work completely from their image-editing software of choice.
- When applying the filter in our Image Editor, it must match the desired result precisely.
And, in a sense, there was a third consideration: none of us on the team had ever built anything like this before. Time to research!
Look, up in the table!
A lookup table is, at its simplest, an array or matrix of pre-calculated data that can be cached and used to look up answers via an index or key, rather than calculating them on the fly. Lookup tables also predate computers. Books of tables for looking up base-10 logarithms were essential before calculators were readily available, and were used in place of time-consuming, error-prone calculations. In fact, nearly everyone who went to a modern elementary school made extensive use of lookup tables when learning their times tables. They're probably still cached in your memory, too...
In image processing, a lookup table consists of distinctly colored pixels that progress through the whole gamut you need in some logical way. They can be one-dimensional (a grid) or three-dimensional (a cube where the axes are red, green, and blue). While 3D tables are unsurprisingly more powerful, their usefulness really shines when dealing with unpredictable color spaces—like actual film—where you might need to change a color based on the colors around it, and converting between color spaces, where the ability to interpolate intermediary values is crucial. For our purposes, the ability to map values one-to-one with a 1D lookup table was sufficient.
For our first attempt, we started with a 512×512 lookup table, which gives you 262,144 possible output colors. It quickly became obvious that this wasn't fine-grained enough to give satisfactory results: areas with very subtle color shifts were pushed to the closest available color, producing visible banding.
Increasing to 24-bit color with a 4096×4096 table gave us access to 16,777,216 output colors: exactly enough for an RGB image. This is because each channel (red, green, and blue) is 8 bits, resulting in 2²⁴ possible colors. Here's the neutral lookup table we started with:
It’s a 16×16 grid, where each square is 256×256 pixels. Within each square, the blue value increases from 0 to 255 going left to right, and the green value increases from 0 to 255 going top to bottom. Red increases from 0 to 255 across the squares, going left to right and top to bottom: the top-left square has 0 red, the top-right square has 15 red, the bottom-right square has 255 red, and so on.
Each pixel in an image has four channels: red, green, blue, and alpha. We’re only concerned with the first three here, since we’re not doing any transparency manipulation in our current batch of filters. With our neutral lookup table, we now have a mathematical index for every possible combination of RGB values from [0, 0, 0] (black), to [255, 255, 255] (white).
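To make that layout concrete, here is one way the neutral table's raw RGBA data could be generated. This is an illustrative sketch of the structure described above, not our production code, and buildNeutralLut is a hypothetical name:

```javascript
// Generate the 4096×4096 neutral lookup table as raw RGBA data.
// The table is a 16×16 grid of 256×256 squares: blue increases left to
// right within a square, green top to bottom, and red per square.
function buildNeutralLut() {
  const size = 4096;
  const data = new Uint8ClampedArray(size * size * 4);
  for (let r = 0; r < 256; r++) {
    const squareX = (r % 16) * 256; // column of this red value's square
    const squareY = Math.floor(r / 16) * 256; // row of this red value's square
    for (let g = 0; g < 256; g++) {
      for (let b = 0; b < 256; b++) {
        const i = ((squareY + g) * size + squareX + b) * 4;
        data[i] = r;
        data[i + 1] = g;
        data[i + 2] = b;
        data[i + 3] = 255; // fully opaque
      }
    }
  }
  return data;
}
```

The first pixel of the result is pure black and the last is pure white, with every other RGB combination appearing exactly once in between.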
The image data is returned from the canvas as an enormous Uint8ClampedArray, holding four values for each pixel.
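For illustration, here is what that array looks like for a hypothetical two-pixel image:

```javascript
// A 2×1-pixel image, laid out the way getImageData(...).data returns it:
// four values per pixel (red, green, blue, alpha), each clamped to 0–255.
const data = new Uint8ClampedArray([
  255, 0, 0, 255, // pixel 0: opaque red
  0, 0, 255, 128, // pixel 1: semi-transparent blue
]);
```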
We can now determine where a pixel would fall in our neutral lookup table from its r, g, and b values — the RGB values of the pixel you’re looking at in the original image.
We get the x coordinate by first finding which column of squares the pixel falls in with r % 16, multiplying by 256 since each square is 256 pixels wide, then adding the pixel’s blue value. To get the y coordinate, we find the row of squares with Math.floor(r / 16), then follow the same math as for the x coordinate: multiply by 256 and add the pixel’s green value. The index is finally calculated by multiplying the y coordinate by 4096 (the width of the table, i.e., the number of pixels per row), adding the x coordinate, and multiplying the result by 4 to account for the fact that, while we only have ~16 million pixels in our table, we have four channels of data for each pixel.
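In code, that calculation fits in a small helper (getColorIndex is our illustrative name here, not necessarily what ships):

```javascript
// Index into the 4096×4096 lookup table's RGBA data for a given source color.
const getColorIndex = (r, g, b) => {
  const x = (r % 16) * 256 + b; // square column × 256, plus blue offset
  const y = Math.floor(r / 16) * 256 + g; // square row × 256, plus green offset
  return (y * 4096 + x) * 4; // 4 channels of data per pixel
};
```

getColorIndex(0, 0, 0) points at the first pixel of the table, and getColorIndex(255, 255, 255) at the last.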
This is now our reference point for every color in the original image. Now we just need the new color at that point after applying a filter.
Here’s how this approach allowed us to make sure our design team could work completely in their own tool and hand off something we could use. They’d create the desired effect with adjustment layers and then provide us with the original image, the manipulated image that we needed to match, and the adjustment settings they applied.
Because they work from actual images, all we have to do is take our neutral lookup table, open it in the same image-editing software, and apply the exact same adjustment layers we got from the design team.
From here, it’s as simple as finding the pixel in this table at the index we found previously, getting its RGB values, and writing the RGB values back to the original image data. That data is then put into the canvas in the Image Editor. Here’s the same image with the Bright & Crisp filter applied next to the reference image from the design team:
It’s identical to the image created in the designers’ image-editing tool.
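Putting the pieces together, the per-pixel replacement step might look something like the following sketch. The function names are illustrative, imageData is the source image's Uint8ClampedArray, and lutData is the filtered lookup table's:

```javascript
// Index into the 4096×4096 lookup table's RGBA data for a given source color,
// using the coordinate math described earlier.
const getColorIndex = (r, g, b) =>
  ((Math.floor(r / 16) * 256 + g) * 4096 + (r % 16) * 256 + b) * 4;

// Replace each pixel's color with the corresponding color from the
// filtered lookup table. Alpha is left untouched.
function applyLookupTable(imageData, lutData) {
  for (let i = 0; i < imageData.length; i += 4) {
    const lutIndex = getColorIndex(
      imageData[i],
      imageData[i + 1],
      imageData[i + 2]
    );
    imageData[i] = lutData[lutIndex]; // red
    imageData[i + 1] = lutData[lutIndex + 1]; // green
    imageData[i + 2] = lutData[lutIndex + 2]; // blue
  }
  return imageData;
}
```

The resulting array can then be written back to the canvas with putImageData.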
We’ve set ourselves up so that all we need to do to add a completely new filter is add a lookup table to the module, update a few enums that drive the UI, and build and publish. We also found that this solution can help with other parts of the Image Editor wherever a one-to-one color replacement is all that’s needed. For example, our contrast slider is now driven by two lookup tables on either end that give a design-curated result, whereas before we had to rely on Fabric’s less finessed option. We’re excited to continue improving our Image Editor, with lookup tables as another tool in our belt.