
“Measuring usability”: The siren song of quantitative reasoning

By PDD

on November 18, 2015

We live in an age of one-click statistics, and it is increasingly tempting to build a case by pursuing statistical support, unwittingly heeding the siren song of quantitative reasoning. It’s quick, it’s simple, and it’s logical, but as someone working in the field I can’t help but jump to the question I so often ask myself these days: How usable is it? Evidence in the form of statistics can make you feel that you are in a position to adequately inform and address issues within product design. But have you really measured usability? Have you actually measured whether the characteristics of a user interface facilitate use? Or have you compromised the opportunity to innovate in favour of justifying a technical decision?

I have most definitely been guilty of the latter. I know this because, when I first tried my hand at usability evaluation, I was a freshly graduated student, perhaps overeager to apply the countless quantitative research methods that I had learned. So enthusiastic was I to show how good or bad a product’s usability was that I failed to notice that I had pigeonholed usability into a metrics-only evaluation. Back then, I thought that poor usability could be shown by high error rates and task failures, but what was I really learning? Was I really informing the design?

You may have strong evidence that there are opportunities for improvement in a product’s design, and you may even have an idea of what they are and how improvements could be made. However, the fact remains that “measuring” usability through statistical analysis and deduction alone does not necessarily take you to the root cause of the problem. Consider air crash investigations: black boxes capture vast amounts of data, yet if you were to say that a plane crashed because of a loss of altitude, there would be very little you could do with that information. What actions were being taken in the cockpit? What did the pilots think was happening? What was the plane telling them, or not telling them? What mechanical events were taking place? Without asking why the plane lost altitude and delving a little deeper, you miss the opportunity to inform future development in the industry and prevent future incidents. In short, you might be taking an issue at face value and overlooking other contributory factors.

To diagnose usability problems and outline opportunities for improvement, you need to do more than identify an issue; you need to question why and how it has occurred. Sure enough, you may spot that a user has made an error, but what is more valuable is how that user perceived the error. Perhaps they were struggling to figure out the next step, or perhaps they had an idea of how the design could be improved. This is the sort of information that can tell you a lot about a product. However, heed this warning: do not blindly treat users’ wants and needs as gospel. After all, part of innovation is developing devices or products that users don’t even know they want or need. With every piece of feedback, you must probe and follow up with questions in order to validate what the user has told you. Used in this way, qualitative data delves deeper into a product’s development and provides a solid framework within which improvements can be informed.

So be careful when you talk about “measuring” usability. Regardless of your angle of approach, it is easier to just accept that usability does not have a clear-cut definition or a concise scale from “good” to “bad”. Much like a piece of string, its measure is most relevant when you have something to wrap it around.