Handle async side-effects in React Chrome extension
What can go wrong when you mix async APIs, classic events, and React hooks? As it turns out, more than I could have imagined.
When I noticed a bug in my Chrome extension, it didn't look like a bug that would cost me several evenings after work and lead me back to the JS event system, a topic I thought I knew well enough to avoid reading manuals on any time soon. But a bug is a bug, and I had to fix it regardless of what I thought about it.
This whole situation was a good reminder of how fragile knowledge is. You may think you know something, but only until you face a problem that breaks in as a blatant reminder that it's probably not the case and it's time to learn something new.
When everything works by design
The React app¹ that I'm referring to in this article does one simple thing: it shows a list of Shopify themes. It's OK if you are not familiar with Shopify; for our purposes, consider a Shopify theme an abstract object with some properties. We will refer to such objects as theme objects. From a UI perspective, a theme object is a React theme component, rendered by a parent list component. That gives us the following composition structure:
The app shows the first 10 theme objects by default. The next batch of themes is rendered when the user reaches the end of the list. In other words, we don't show all themes on the first load but add them progressively.
Let’s consider a situation when we have 12 themes in total and first 10 themes are already rendered.
What happens if we remove the first theme from the list? Should we rerender all the other themes? The answer depends on how you define key props for your list items. In my case each theme has a unique key prop (the theme's id), which is not only unique among its list siblings but also does not match the element's position index (0, 1, 2, 3, etc.). As a result, React can optimize the rendering process and update only the components that have actually changed, i.e. the removed first theme and the new one appended to the end of the list to keep 10 visible themes in total. So instead of 11 components (1 to remove + 10 to rerender) we update only two. Not bad, right?
Here comes the catch
When we remove a component from the list or add one to it, we update the runtime state and then send a message to the service worker to update the theme's state in persistent storage (i.e. Chrome storage). All of these operations are asynchronous. The sequence diagram below illustrates how they should be handled for one theme item:
Recall that we have 12 themes in total and 10 of them are visible (Schema 1). When we remove the first theme, we automatically make the next theme in line visible so that 10 themes stay on screen. So instead of one state update React performs two, and due to internal optimizations these two updates are fired almost with no delay in between. Let's see how it looks on a sequence diagram:
You may have already spotted the problem. Both update requests get the same copy of the persistent storage to work with. So what? The changes made by one of these update requests will be lost. As soon as the first (i.e. the fastest) request updates the storage, the second request does exactly the same thing in its turn and overwrites all the changes made by the first one. I find this situation rather ironic and humiliating at the same time, as the winner of this concurrency race loses everything. Sometimes the async world is not fair at all :)
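To make the lost update concrete, here is a minimal simulation of the race. The in-memory `storage` object stands in for Chrome storage, and names like `updateTheme` are illustrative, not the extension's actual code:

```typescript
// A read-modify-write update against an async "storage", mimicking the
// chrome.storage get/set pattern.
type ThemeState = Record<string, boolean>;

let storage: ThemeState = { theme1: true, theme2: false };

// Read the whole state as a copy, like an async storage "get".
const readStorage = async (): Promise<ThemeState> => ({ ...storage });

// Write the whole state back, like an async storage "set".
const writeStorage = async (next: ThemeState): Promise<void> => {
  storage = next;
};

// Each update reads a snapshot, mutates it, and writes it back.
async function updateTheme(id: string, visible: boolean): Promise<void> {
  const snapshot = await readStorage(); // both callers read the same copy
  snapshot[id] = visible;
  await writeStorage(snapshot); // last writer wins
}

// Fire both updates almost simultaneously, as React does.
async function demo(): Promise<ThemeState> {
  storage = { theme1: true, theme2: false };
  await Promise.all([
    updateTheme('theme1', false), // hide the removed theme
    updateTheme('theme2', true), // reveal the next theme in line
  ]);
  return storage;
}

demo().then((s) => console.log(s)); // theme1 stays true: its update was lost
```

The second writer overwrites the first one's change, exactly as in the sequence diagram above.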
What can we do about it? All we need is to separate the update requests in time: when a new update request comes in, all previous update requests should already be completed. Essentially we want to serialize storage updates on top of an async API. Unfortunately, Chrome does not provide a synchronous storage API out of the box, so we have to come up with a custom solution.
The first idea that lies on the surface is to update the storage only once, right before the extension's popup window closes. As long as there is only one event, it does not matter whether it's async or not: there will be no collisions. We keep all storage mutations in a runtime variable and, when we are done with the app, save it to the storage. Though it's a very tempting path to take, it turns out not to be a promising endeavor: there is no reliable way to detect the moment when a popup is about to close, and thus no way to execute code right before the closing.
It looks like our first approach is a dead end. A little upsetting, but there must be another way to separate async events in time. And there is: buffers! All we need to do is create a buffer for update events. Whenever the app emits an update event, it goes straight into the buffer; then the service worker picks the oldest event from the buffer and executes it. When the storage has been successfully updated, the worker repeats the process until the buffer is empty. Elegant and simple, I like it :)
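This buffering idea can be prototyped in just a few lines with plain promise chaining: every new task is appended to the tail of one shared promise, so it starts only after all previously buffered tasks have finished. This is only a sketch with illustrative names (`enqueue`, `tail`), a simpler cousin of the queue class the article builds below:

```typescript
// Serialize async tasks by chaining them onto one shared promise.
let tail: Promise<void> = Promise.resolve();

function enqueue(task: () => Promise<void>): Promise<void> {
  // The new task starts only when the current tail settles; a failed
  // task is logged so one error cannot stall the whole chain.
  tail = tail.then(task).catch((e) => console.error(e));
  return tail;
}

// Demo: the slow task still completes before the fast one starts.
const order: string[] = [];
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

enqueue(async () => { await delay(30); order.push('slow'); });
enqueue(async () => { order.push('fast'); });
```

The drawback is that the chain object grows implicitly with every call; an explicit queue makes the buffering visible and easier to reason about.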
It would be nice if some smart person on the other end of the web could help me out with the coding, to spare my time and not reinvent the wheel. Let's google existing solutions, keeping in mind that we need an ESM module that handles promises in LIFO/FIFO stacks or queues. A quick search gives us plenty of open-source libraries to choose from: p-limit, p-queue (this one will be a tough choice for devs in France 😄), @balena/promise-queue, @jjmschofield/await-queue, es6-promise-pool, @sebowy/concurrent-queue, contra, etc.
Most of them are perfectly capable of solving our problem, but two things bother me: the code syntax and the package size.
If the first is a matter of personal taste and can be suppressed by a tremendous act of will, the second is not so easy to cope with. No matter what I think or want, 44 KB will always be 44 KB, and there is no way around it. Let me remind you that in the front-end realm size does matter 😉. Every library you use adds to the total amount of code a user has to download from the server before anything works. And yes, even if you use a library only once to cover some rare use case that may never happen, you still add the entire library to your bundle.
One library here, one library there, and in the blink of an eye there will be an elephant in the room in front of a tiny doorway to push it through.
Unlike my fellow back-end developers, I do not have the luxury of the optimization technique they use when something gets slow: there is no way to throw in one more server and watch performance return to normal without changing a single line of code.
When it comes to 3rd party libraries, ask yourself two simple questions:
- do you really need that particular library?
- could you write your own code in a reasonable time to replace it?
If the answer to both questions is yes, then try writing your own code; otherwise use a library. In my case, I think it's totally worth trying to write my own implementation to minimize the code base and keep my gray cells in shape.
As I said earlier, we need to create a FIFO queue. What is it? The simplest way to describe it is to imagine a pipe open at both ends: we put items in through one end and take them out from the opposite end. Thus, if you have a red and a blue ball and you put them into the pipe in this order (red first, blue next), then you will get them out at the other end in the same order: red first, blue next.
The built-in Array object lets you do exactly that with its push() and shift() methods. We will use push() to add items and shift() to take them out.
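The pipe analogy maps directly onto these two methods:

```typescript
// push() adds an item to the back of the array, shift() removes one
// from the front, so items come out in the order they went in (FIFO).
const pipe: string[] = [];

pipe.push('red'); // the red ball goes in first
pipe.push('blue'); // the blue ball goes in second

const first = pipe.shift(); // 'red' comes out first
const second = pipe.shift(); // 'blue' comes out second
```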
The last piece of the puzzle is to organize the sequential execution of async tasks in the queue. With async and await it should not be a problem. The algorithm is straightforward:
Step 1. Init an empty array
Step 2. Add a task to the array via Array.push()
Step 3. IF there is a running task THEN do nothing ELSE go to Step 4
Step 4. Get a task with Array.shift() and run it
Step 5. IF the task is complete AND there are available tasks THEN go to Step 3
The implementation in code:
export class FifoPromiseQueue {
  // Tasks waiting to be executed, oldest first.
  queue: Array<() => Promise<void>>;
  // True while a task is being awaited.
  isRunning: boolean;

  constructor() {
    this.queue = [];
    this.isRunning = false;
  }

  // Enqueue a task and start processing if the queue is idle.
  add(fn: () => Promise<void>): void {
    this.queue.push(fn);
    this.run();
  }

  async run(): Promise<void> {
    if (!this.isRunning) {
      this.isRunning = true;
      try {
        // Take the oldest task and wait until it settles.
        await this.queue.shift()!();
      } catch (e) {
        // A rejected task must not stall the queue forever.
        console.error(e);
      }
      this.isRunning = false;
      if (this.queue.length > 0) {
        this.run();
      }
    }
  }
}
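And a quick usage sketch. A condensed copy of the queue class is repeated inline so the snippet is self-contained; the delays simulate storage updates with different latencies, so without the queue the faster second task would finish first:

```typescript
// Condensed copy of the article's FIFO promise queue.
class FifoPromiseQueue {
  queue: Array<() => Promise<void>> = [];
  isRunning = false;

  add(fn: () => Promise<void>): void {
    this.queue.push(fn);
    this.run();
  }

  async run(): Promise<void> {
    if (!this.isRunning) {
      this.isRunning = true;
      await this.queue.shift()!();
      this.isRunning = false;
      if (this.queue.length > 0) this.run();
    }
  }
}

const order: string[] = [];
const q = new FifoPromiseQueue();
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

q.add(async () => { await delay(30); order.push('update A'); }); // slow
q.add(async () => { await delay(5); order.push('update B'); }); // fast

setTimeout(() => console.log(order), 100); // ['update A', 'update B']
```

Even though the second task is six times faster, it is executed only after the first one has settled, which is exactly the separation in time we were after.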
Hooray! With one small class we solved our async problem and kept the code base from bloating with yet another third-party library.
Takeaway from this article
- Strange problems sometimes have simple and elegant solutions
- Buffers bring order into the chaotic world of async requests
- Look for 3rd party libraries but use them as the last resort
- Write your own code. Programming != picking a stack of libraries
Footnotes:
[1] — Shopify Assistant git repository
You may find a FIFO demo here.