In my career I have been lucky enough to gain varying perspectives on the role of usability during the development of medical devices. I’ve worked as part of a design team with responsibility for concept generation, preliminary research and detailed design. I’ve also worked as part of a human factors team with a broader remit for integrating human factors processes across an organisation. One thing experience has taught me is that there is no ‘one size fits all’ when it comes to usability work. From the design of catheters for difficult intubations to optimising the use of LCD displays and the manoeuvrability of medical workstations, a range of risks apply, and the approach to studying usability needs to be tailored accordingly. All the devices I have worked on have varied in the amount of usability work conducted and the resource required. This brings me to a question I have faced, and one I am sure many manufacturers have encountered as well.
How much usability work should be carried out and what techniques should be applied?
I am certain that most, if not all, medical device manufacturers are fully aware of the regulatory requirements surrounding the usability of medical equipment (i.e. the recently updated IEC 62366-1:2015). Many also use the FDA guidance ‘Applying Human Factors and Usability Engineering to Medical Devices’. One question that keeps coming up is: how much usability work is needed to prove a device is safe, effective and usable? This is a difficult question when the definition of usability takes into account a range of factors including effectiveness, efficiency and user satisfaction.
I recently attended the AAMI advanced human factors course, where it was communicated that the ‘new FDA’ is not only interested in ensuring the product is safe and effective (‘old FDA’) but also USABLE. The difficulty for manufacturers is finding the correct balance: knowing when to apply usability methods, how much work is required and who to involve. For example, when is analytical evaluation or bench testing (employed by the design team) an appropriate substitute for testing with users?
Take as an example the development of a medical device with an integrated and angled display. A manufacturer may assign a dedicated human factors team to conduct formative usability studies spanning a range of users and environmental settings. This can be a resource-intensive process involving protocols, recruitment and report writing, and it is possible that these studies still might not provide the input the design team needs to create an optimal device (i.e. the means to optimise the screen angle). In turn, conducting these studies may cause the project to over-run, and in a worst-case scenario the product might not make it to market.
The alternative is for the development team to examine the device on a feature by feature basis and tailor the usability work accordingly (as stated in IEC 62366-2:2016 section 6). This allows more time to be spent on assessing the features that have higher associated risks.
A tailored approach
By implementing a tailored approach, the product team can assess and develop features using tools ranging from the analytical (for example CAD, Excel calculations, anthropometric data, standards) to the empirical (for example bench testing, small-scale formative evaluations). In the screen-angle example above, analytical methods could help refine the angle. This doesn’t mean that no user testing would be conducted, but the summative parts of the study would be focussed on the higher-risk tasks that require a significant component of user interaction.
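To give a feel for what such an analytical check might look like, the sketch below estimates a display tilt range from assumed standing eye heights and viewing distance. The anthropometric figures, display centre height and viewing distance are all placeholder assumptions for illustration; a real assessment would draw on published anthropometric data and the actual workstation geometry.

```python
import math

# Placeholder anthropometric assumptions (NOT real survey data):
# standing eye heights for a notional 5th-95th percentile user range.
EYE_HEIGHTS_MM = {"5th %ile": 1400.0, "95th %ile": 1750.0}
SCREEN_CENTRE_MM = 1200.0   # assumed height of the display centre
VIEW_DIST_MM = 600.0        # assumed horizontal viewing distance

def tilt_for(eye_height_mm: float) -> float:
    """Tilt (degrees back from vertical) that points the screen normal
    directly at a user's eye at the assumed viewing distance."""
    return math.degrees(
        math.atan2(eye_height_mm - SCREEN_CENTRE_MM, VIEW_DIST_MM)
    )

for label, height in EYE_HEIGHTS_MM.items():
    print(f"{label} eye height: tilt of {tilt_for(height):.1f} degrees")
```

Under these assumptions the calculation brackets the tilt adjustment range the design would need to accommodate the user population, before any user study is run.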
With experience as both a design engineer and a human factors engineer, I understand that with enough upfront background research provided by the human factors team (for example ethnographic data, anthropometric data, user profiles, environmental boards and use specifications / use related requirements) designers can make informed decisions. This allows them to create a refined and functional design, thereby reducing the need for multiple rounds of simulated user testing.
It is also easy for manufacturers to forget the potential to reduce the resource required by arguing equivalence or analogy with previously marketed products. For example, a manufacturer developing numerous low-risk devices in the same family group (for example workstations) can look at the usability file for one of the previous products and assess whether that data can be applied to projects in development. If the manufacturer believes there is no newly identified risk (i.e. no potential for use related error) and/or no design features require usability testing, such efforts can be avoided, potentially saving thousands in development costs. At the same time, manufacturers must ensure an extensive uFMEA (Use Failure Mode Effects Analysis) or equivalent has been conducted and an appropriate rationale for not conducting any/more usability studies has been given.
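To make the feature-by-feature idea concrete, here is a minimal, hypothetical sketch of how use-related risk scores from a uFMEA-style analysis might be used to decide which features warrant simulated-use testing and which can be covered by analytical review. The feature names, scoring scales and threshold are invented for illustration; they are not drawn from IEC 62366 or any manufacturer's actual risk file.

```python
# Hypothetical uFMEA-style prioritisation: rank device features by
# use-related risk so usability effort concentrates on the riskiest tasks.
# Tuples: (feature, severity 1-5, probability of use error 1-5).
features = [
    ("dose entry keypad",  5, 3),
    ("screen tilt adjust", 2, 2),
    ("alarm mute control", 4, 4),
    ("castor brake lever", 3, 2),
]

# Sort by risk score (severity x probability), highest first.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)

for name, severity, probability in ranked:
    risk = severity * probability
    # Arbitrary illustrative threshold for escalating to user testing.
    method = "simulated-use testing" if risk >= 10 else "analytical review"
    print(f"{name}: risk={risk} -> {method}")
```

The point of the sketch is the decision structure, not the numbers: higher-risk, interaction-heavy features get user testing, while lower-risk features are dealt with analytically, with the rationale recorded in the usability file.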
In summary, testing with users is an important tool for reducing use related error, but it makes little sense, and is often not practical, to test every aspect of a product in this way. A balance is required, and striking it comes with the experience to know when to involve the design team, the human factors team and/or a multidisciplinary team. If I am honest, it is quite a grey area. What I do understand is that embedding human-centred design in the development process allows a product to be developed with the user’s safety, needs, feelings and thoughts considered throughout. This requires collaboration across multiple teams and a product-specific approach to usability work.
For upcoming Human-Centred Design workshop dates visit www.pdd.co.uk/capabilities/#innovation-training.
Featured image credit: webdesignerdepot.com