The standard deviation index (SDI) measures bias using simple, easy-to-understand criteria. I also like it for daily quality control because it works across all control levels. Here’s the calculation:
SDI = (Value - Target Mean) / Standard Deviation
Thus, a glucose of 97 with a control range of 80-100 has an SDI of 1.4: the range implies a mean of 90 and, taking the range as the mean ± 2 SD, an SD of 5, so (97 − 90) / 5 = 1.4. A positive SDI indicates a value above the mean; a negative SDI indicates a value below the mean.
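The calculation can be sketched in a few lines of Python (the function name is mine, and the worked example assumes, as above, that the 80-100 range represents the mean ± 2 SD):

```python
def sdi(value, target_mean, sd):
    """Standard deviation index: signed distance from the target mean, in SD units."""
    return (value - target_mean) / sd

# Worked example from the text: glucose of 97, control range 80-100.
# Assuming the range spans the mean +/- 2 SD: mean = 90, SD = 5.
print(sdi(97, 90, 5))  # 1.4
```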
Also called a z-score, the SDI corresponds to where on a run chart a value falls. As James Westgard explains on his website, “It is very helpful to have z-scores when you are looking at control results from two or more control materials at the same time, or when looking at control results on different tests and different materials on a multitest analyzer.”
SDI is commonly used on laboratory peer reports, which compare a laboratory’s results to those of dozens or hundreds of other laboratories. Here are some guidelines for interpreting an SDI (they apply to its absolute value):
- 0.0 - perfect match with the peer group
- Less than 1.25 - acceptable performance
- 1.25 - 1.49 - some investigation may be required
- 1.5 - 1.99 - investigation is recommended; marginal performance
- Greater than or equal to 2.0 - unacceptable performance
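The guidelines above can be sketched as a simple lookup function (a sketch under my own naming; the cutoffs mirror the list, applied to the absolute value since a −1.6 is as marginal as a +1.6):

```python
def interpret_sdi(sdi):
    """Map an SDI to the interpretation guidelines above (uses the absolute value)."""
    magnitude = abs(sdi)
    if magnitude == 0.0:
        return "perfect match with peer group"
    if magnitude < 1.25:
        return "acceptable performance"
    if magnitude < 1.5:
        return "some investigation may be required"
    if magnitude < 2.0:
        return "investigation recommended; marginal performance"
    return "unacceptable performance"

print(interpret_sdi(-0.8))  # acceptable performance
print(interpret_sdi(1.7))   # investigation recommended; marginal performance
```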
Typically, peer reports are scanned for high or low SDI values, especially if they are seen across multiple levels. But the SDI is useful internally, too. If your information system calculates an SDI, a tech can quickly see what’s in, what’s out, and what’s trending instead of just responding to Westgard flags. For example, if all levels of QC on an analyte have a negative SDI, there may be a calibration bias.
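That all-levels-negative check can be sketched as a quick scan across a run's SDI values (the data and the helper name are hypothetical, not from any particular information system):

```python
def same_direction(sdis):
    """True if every level's SDI falls on the same side of zero,
    which may point to a systematic shift such as a calibration bias."""
    return all(s > 0 for s in sdis) or all(s < 0 for s in sdis)

# Hypothetical SDIs for three QC levels of one analyte:
levels = [-0.9, -1.1, -0.7]
print(same_direction(levels))  # True -> all levels low; suspect calibration bias
```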
As I’ve described, our lab uses a program that plots SDI values instead of standard run charts. Every chart looks similar, spanning plus or minus 2 SDI, with the actual means and standard deviations indicated. Because the data depend on the parameters in effect at the time of posting, these charts reflect QC at run time and answer the question, “What did the QC look like on a particular day?”
NEXT: We Never Got The Result