What Do Scores Really Mean?
The National Health Service was in the news again this weekend, and not for good reasons. The children's hospital in Bristol has a high mortality rate. The medical director of NHS England, Sir Bruce Keogh, has ordered an independent review of the matter, which is remarkable in that investigations normally have to pass through numerous levels before reaching his attention. The hospital's chief executive defended the "good clinical outcomes" of her Trust, pointing out that "98 percent of its patients' parents said in a survey that they had received excellent, very good, or good care." Patient perceptions are "good clinical outcomes"? Really?
I've written before about the imbalance between qualitative and quantitative measures used here in the UK. Both are important, and neither should stand alone. One of the most widely disputed targets in England is the 4-hour maximum wait time in Accident and Emergency departments. The Mid Staffordshire Trust did very well on those quantitative measures, but people still wound up dying unnecessarily.
We seem to be forgetting why these measures exist in the first place. We want to be able to determine whether our services are actually doing what they're supposed to do: getting people well and treating them well. If all we're doing as leaders is trying to hit a target, we will fail.
This situation is not unique to the UK. One need only look at the Medicare website to compare hospitals, home care agencies, and now even physicians! In the past, I've used the tools available on that site to see the rankings of providers I knew. Some brilliant companies were ranked low, while some dodgy ones were ranked high. So what do we really learn from these scores, and how does that help us shape the future of healthcare?
I'm reminded of the old saying, "The operation was a success. Too bad the patient died."