It isn't often that we need to do something on our servers that requires extra time to process. Most of the things I develop can be handled within a quick request/response round trip. But sometimes you end up with a use case where you need a couple of extra seconds, or even more. In my case it was a campaign project where we needed an image of a package that we had already rendered in the browser with layers of CSS, images and HTML.

A little context

Normally, when creating images server side, you end up with some kind of image processing component: System.Drawing, ImageMagick or something similar. But in this particular project, we felt that we had already done the job of composing the image in the browser, so why re-compose it on the server as well? That would mean doing the work twice, and twice again with every tweak. Instead, we could try to use PhantomJS to take a screenshot of our pre-composed webpage, thereby not only saving the time spent composing, but also having a much better idea of what the result would look like, since it's just HTML and CSS.

The only problem would be that starting a PhantomJS instance, going to a URL, waiting for it to load and saving it as a PNG (amazingly, PhantomJS saves it as a transparent PNG if you don't have a background colour on your page) would take a little too much time. Even doing it async would probably only work to some extent, since PhantomJS would need to release its instance and so on. In short, we needed something more robust to process all incoming jobs.
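To give an idea of the screenshot step itself, here is a minimal sketch of a PhantomJS capture script (run with the phantomjs binary, not Node). The URL and output filename are placeholders; with no background colour set on the page, the PNG comes out transparent as mentioned above.

```javascript
// capture.js — run as: phantomjs capture.js
// Minimal sketch; the URL and output path below are just placeholders.
var page = require('webpage').create();

// Size of the rendered viewport — adjust to match the composed image.
page.viewportSize = { width: 800, height: 600 };

page.open('http://example.com/composed-package', function (status) {
  if (status !== 'success') {
    console.error('Failed to load page');
    phantom.exit(1);
  }
  // Renders the page as a PNG; transparent if the page has no background.
  page.render('package.png');
  phantom.exit();
});
```

Each capture like this pays the full cost of spinning up a PhantomJS process, loading the page and rendering it, which is exactly why firing one off per incoming request doesn't scale.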

Queue it for me

A natural next step is to build a queued processor. You add a job, and process one at a time, making sure that each job is done before moving on to the next one. This ensures that nothing can go wrong: no PhantomJS instance is created before another has finished, and so on. It is, from a stability point of view, the correct thing to do. But what if you are creating a campaign site where as many as 500 simultaneous users are using your service? Is it acceptable for the 500th person to wait that much longer for their result? You can imagine that any client would say "no". So you are left with a simple solution: share your queue. Make a queue system that can add nodes ad hoc, whenever needed. In other words:

  • Have one central point that takes care of incoming queue items, whatever they might be.
  • When one item is in the queue, check if a node is free.
  • Take the first free node and assign it the queue item and its data.
  • The node processes the item and sends the result back, at which point it is marked as free again.
  • If you need more processing power, just add more nodes.
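The steps above can be sketched in a few lines of plain JavaScript. This is a simplified, in-process version with no socket.io transport — the names (`QueueHub`, `addNode`, `addJob`) are illustrative, not the actual module's API.

```javascript
// Minimal sketch of the central queue: jobs wait in line, and each
// free node is handed the next job. Names here are illustrative.
function QueueHub() {
  this.jobs = [];   // pending queue items
  this.nodes = [];  // registered worker nodes: { busy, process }
}

// Register a processing node; processFn(data, callback) does the work.
QueueHub.prototype.addNode = function (processFn) {
  this.nodes.push({ busy: false, process: processFn });
  this.dispatch();
};

// Add a queue item; done(result) is called when a node has processed it.
QueueHub.prototype.addJob = function (data, done) {
  this.jobs.push({ data: data, done: done });
  this.dispatch();
};

// Take the first free node and assign it the next queue item.
QueueHub.prototype.dispatch = function () {
  var node = this.nodes.find(function (n) { return !n.busy; });
  if (!node || this.jobs.length === 0) return; // all busy, or nothing queued
  var job = this.jobs.shift();
  var self = this;
  node.busy = true;
  node.process(job.data, function (result) {
    node.busy = false;   // the node is free again
    job.done(result);
    self.dispatch();     // pick up the next waiting job, if any
  });
};
```

In the real setup each node would be a separate process connected over socket.io, and `process` would spawn PhantomJS, but the dispatch logic is the same: adding more nodes means more jobs run in parallel, while each individual node still handles one job at a time.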

It's quite simple, and that's the beauty of it. And what is even greater is that with Node.js and socket.io, it is so simple to create it as a module that you could use it for any queuing process you want. In just a couple of nights (I did it after work) I had a complete multi-node queueing system, with ad hoc nodes connecting and disconnecting.

And that's when the most amazing thing about this hit me. A solution like this, just a couple of years ago, would have been almost impossible to create — at least for a quick client campaign. But with the help of Node.js (in this case), open source modules (socket.io, PhantomJS) and the will to solve things a little differently (not redoing the composition of the image on the server), we managed to create something that worked perfectly.

Keeping up with new technologies and standards isn't anything new to us. But what we really need to do is force ourselves to forget some of the things we learned over all these past years. Experience is great, but we need to weigh it against new ways of solving problems, be it using new types of databases or creating multi-server solutions.

Here is a link to the campaign we made, even though it is no longer live.

And if you want to know more about how to use my queuing module, you can either check the code on GitHub, or read more about it here.