
Preventing Duplicate Execution

I have just come across an article by @wealthfront explaining how debouncing event handlers helps filter double-clicks. While I like the idea, it doesn't quite cover the problem. The issue is rooted deeper than the double-click, and this post will explain why that is and how to fix it.

Before we get into code, let's have a look at the problem's source(s).

User Behavior

We developers know how to use a computer properly. We know that a link on a website has to be clicked once to engage - contrary to the double-click required to open a file on your desktop. We know the signs of a busy page, even if the page doesn't tell us that it's currently doing something. Normal people (you know, those that don't understand a binary joke) don't deal with computers in this way. In fact, normal people (let's call them muggles) use computers quite differently. They have a very limited understanding of single and double clicks. They will - intuitively - double-click on anything they think is worth clicking on. Hell, they'll even click on the background just to be sure their mouse is getting some exercise. Muggles and developers live in very different mindsets.

Are You Testing Or Verifying?

You know what you're doing and that is a good thing. Unless you're developing user interfaces. You know that you have to click that submit button exactly once. Why on earth would you - or anyone else - click that button twice? And this is exactly the point where your »testing« has degraded to »functional verification«. You, the almighty developer that you are, (usually) do not think about the millions of wrong things a user could do. Hence you don't double-click on a submit button. Or impatiently click that button again, because (apparently) nothing happened. Functional verification is not testing.

Testing is not about making sure it works. Testing is about making sure it doesn't break.

The Technical Problem

Well, that one is easy. There is no native (blocking) »work in progress« state that would prevent a form from being submitted again while a submit is already being processed. There's no native way to prevent duplicate execution of any kind. Because browsers don't handle this, and no tutorial (I've seen) ever covered this, people simply forget.
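
To see the problem in isolation, here is a sketch of an unguarded handler - /api/order is a made-up endpoint, not anything from the original article:

jQuery("#some-form").on("submit", function(e) {
  e.preventDefault();
  // every submit event fires a request - a double-click
  // on the submit button (or an impatient re-click while
  // the first request is in flight) posts the order twice
  jQuery.post("/api/order", jQuery(this).serialize());
});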

The Solution

@wealthfront set out to prevent a double-click from executing something twice. While that may work in their case, I think this is fighting the symptoms.
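
For contrast, a debounce-style click handler looks roughly like this. This is a sketch of the idea only - not @wealthfront's actual code - and submitForm is a hypothetical stand-in:

function debounce(fn, wait) {
  var timer;
  return function() {
    var context = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function() {
      fn.apply(context, args);
    }, wait);
  };
}

jQuery("#some-button").on("click", debounce(function() {
  // a burst of clicks collapses into a single call - but an
  // impatient click after the wait window has passed still
  // triggers a second execution
  submitForm(); // hypothetical submit routine
}, 300));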

You have to implement this state of »hey, I'm currently doing something« yourself. It can be as easy as greying out (disabling) the button you just clicked. It may get even more expressive with a message and a fancy spinner and stuff - anything showing the user that you're currently working.

jQuery("#some-button").on("click", function(e) {
  var $button = $(this);
  if ($button.hasClass("js-state-processing")) {
    // oh look, we're already in the processing state
    // so abort and forget we were ever clicked
    e.preventDefault();
    e.stopImmediatePropagation();
    return;
  }
 
  // well, we're processing now
  // style using CSS, that's what it's for
  $button.addClass("js-state-processing");
 
  // execute our actual onclick handler and
  // pass a function that it may call when it's
  // done, so the processing state may be revoked
  doSomething(function() {
    $button.removeClass("js-state-processing");
  });
});

function doSomething(doneCallback) {
  // do something time-intensive,
  // possibly asynchronous
  setTimeout(doneCallback, 1000);
}

It boils down to managing state. That's slightly more complex than debouncing events, but you're doing some important UX work in the process - something a debounced handler can never give you: transparency.

Conclusion

  • especially asynchronous and time-consuming methods need some UX love
  • while you're displaying "progress" state, prevent all interaction (see the sketch below)
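
One way to prevent all interaction is to disable the surrounding fieldset, which disables every control inside it. A sketch, assuming the form's controls are wrapped in a fieldset - #some-form is made up, and doSomething is the function from above:

jQuery("#some-form").on("submit", function(e) {
  e.preventDefault();
  var $fieldset = jQuery(this).find("fieldset");
  // a disabled fieldset blocks every control it contains
  $fieldset.prop("disabled", true);
  doSomething(function() {
    $fieldset.prop("disabled", false);
  });
});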

Comments


Marc Hinse on :

Totally agree. But

>abort and forget we were ever clicked

is too little feedback. The user gets no "hey, you clicked something that makes no sense, because you already clicked that or somehow you were not supposed to click that button".

I know, it is just sample code but I would suggest:

// oh look, we're already in the processing state
// so abort, give feedback and forget we were ever clicked
e.preventDefault();
e.stopImmediatePropagation();
// showNotification and reasonWhy are placeholders for
// whatever feedback mechanism the application provides
showNotification(reasonWhy);
return;

Barney Carroll on :

It's a good point, but this is really the tip of the iceberg if we're to consider a comprehensive UX. The original article wasn't missing the point, it was talking about writing functional code — whereas this point is motivated by a design concern.

Traditionally, the ultra-simplistic, lazy and fool-proof way to deal with asynchronous code behind interfaces is to have an overlay with a 'waiting' icon until the response has been received and the interface has updated. This sounds depressing because you're forcing synchronicity on the user's experience.

At the other end, there's this thing of 'this button is awaiting callback, grey it out'. It's much more granular, but just as simplistic.

The real solution is probably somewhere in-between. More than likely, several components of the interface become redundant or temporarily ambiguous when we're waiting for callback X — you might want to grey out an entire widget to let the user know that that part isn't in sync but they can get on with other work. Another solution is to simply have a persistent messaging area (à la GMail) which states sending/waiting/etc.

All in all these are esoteric application design questions, not programming best practice, and as you said it's a question of behaviour, not functionality.
