Prof Harold Thimbleby has had a letter published in The Times today (10 February 2015) on page 29 of the printed paper. Below is a photograph of the letter and further down is a transcript of the text.
Harold works at the intersection of computer science, interactive devices (including medical devices) and patient safety and is one of the Principal Investigators, based at Swansea University, on the CHI+MED project.
He will also be speaking in Edinburgh on Monday 16 February at 6.30pm, at the Royal College of Physicians Edinburgh, on “The challenge of computerisation in hospitals: first do less harm”.
His own website is at http://www.cs.swan.ac.uk/~csharold/
“Sir, You report that human error will be “all but eliminated” with the introduction of driverless cars (Feb 9). But how are the computer systems that run the driverless cars designed? The computers and the software certainly won’t be free from human error. Human error will not be all but eliminated: it is a fact of life.
Given that the world’s largest civilian IT project, the National Programme for IT in the NHS, failed despite an investment of billions of pounds, what has been learnt since about specifying large and complex IT systems? I am not sure anything has.
However, if the government wants to invest in safety, hospitals currently kill more people through preventable IT errors than die on our roads.
It is a shame that healthcare does not have a business model that gets the government as excited as playing with imaginary cars.
Professor of computer science”
His sentence “Human error will not be all but eliminated: it is a fact of life” is one of the ‘take-home messages’ of CHI+MED. Our strapline is ‘making medical devices safer’ and part of this involves acknowledging that people make errors. No amount of training or retraining can stop someone inadvertently making an error* but we can design systems that bolster us against errors – for example by letting us know that one has occurred and prompting us to address it.
This idea also has something important to say about how medical error affects the ‘second victim’: the person blamed for their error (could a better design have flagged it up?) suffers too. And in blaming and retraining (or blaming and sacking) there is a danger that the situation is believed to be resolved and an opportunity to learn is lost. (Perhaps better design could prevent someone else from ‘getting’ to make that same error.)
An example from everyday life might be a computer spellcheck that has been designed to recognise if I type ‘teh’ instead of ‘the’**. It can autocorrect the word (so I won’t know that an error has happened – although this is probably the best solution in document drafting, it might be disastrous in other situations), or it can put red squiggly lines under it to alert me to the problem. An example from a medical setting might be a drug infusion pump that runs software holding information and recommended doses for the different drugs the pump might be used for. If someone tells the pump to deliver a dose 5 times the recommended dose for that drug, the pump can query the user: “Are you sure?”
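The infusion-pump idea above can be sketched in a few lines of code. This is a hypothetical illustration only – the drug names, doses and the 5× threshold are invented for the example, not real clinical guidance – but it shows the design principle: rather than silently accepting or silently blocking an unusual request, the system surfaces the possible error to the user and prompts them to address it.

```python
# A minimal sketch of a "dose sanity check" in the spirit of the example above.
# All drug names and rates are illustrative assumptions, not medical guidance.

RECOMMENDED_RATE_ML_PER_HR = {
    "drug_a": 10.0,  # hypothetical recommended infusion rate
    "drug_b": 2.5,
}

WARN_FACTOR = 5  # flag any request at 5x the recommended dose, per the example


def check_dose(drug, requested_rate):
    """Return (accepted, message) for a requested infusion rate.

    A rejected request is not discarded: the message is shown to the
    user so the possible error is visible and can be corrected.
    """
    recommended = RECOMMENDED_RATE_ML_PER_HR.get(drug)
    if recommended is None:
        return False, f"Unknown drug '{drug}': dose cannot be checked."
    if requested_rate >= WARN_FACTOR * recommended:
        ratio = requested_rate / recommended
        return False, (
            f"Requested {requested_rate} ml/h is {ratio:.0f}x the "
            f"recommended {recommended} ml/h for {drug}. Are you sure?"
        )
    return True, ""
```

A request within the usual range is accepted quietly, while `check_dose("drug_a", 50.0)` is held back with an “Are you sure?” prompt – the error is flagged rather than either ignored or invisibly suppressed.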
* and ** No amount of touch-typist training can ever entirely eliminate typos.