Stepwise Success

Confirmation Bias
October 21, 2016 3:19 PM by Scott Warner

A catch-all for errors is to blame them on “human error.” Human error is seen as inevitable, unpredictable, and common to us all. We forget, get distracted, make judgment errors, draw false conclusions, overlook the obvious, and make other mistakes. It’s curious that something so universal seems to be so mysterious. In effect we end up taking it for granted that human errors will always happen.

One source of human error is confirmation bias. Awareness of confirmation bias can help prevent many errors.

Confirmation bias is our natural cognitive bias to seek, interpret, favor, and recall information that confirms our beliefs, assumptions, or expectations. It’s one reason for the ongoing success of cable news networks. Conservative viewers tend to watch stations that confirm their viewpoints; the same is true for liberals. We also tend to weigh evidence in proportion to our belief systems, ignoring or giving less importance to that which contradicts what we assume is true.

Confirmation bias can have a significant effect on laboratory results. A few examples:

  • A hemogram MCV is 105. When reviewing the peripheral smear are you more or less likely to see macrocytes?
  • If a patient has an anti-E and a unit of blood is negative for E, are you more or less likely to see a negative reaction?
  • A urine dipstick is positive for leukocyte esterase and nitrite. Are you more or less likely to look for white cells and bacteria, and ignore renal epithelial cells?

Confirmation bias can affect how we troubleshoot instrument problems. Based on prior experience or a perception of what is usually wrong, we can ignore evidence to the contrary.

This intellectual “cherry picking” is our brain’s shortcut to understanding patterns and information. It makes sense from a time-saving standpoint: it takes time and energy to refocus on information to be sure nothing is missed. But it can be a source of serious human error, too. Indeed it is identified in the Merck Manual as a source of error in clinical decision making: “For example, a clinician may steadfastly cling to patient history elements suggesting acute coronary syndrome (ACS) to confirm the original suspicion of ACS even when serial ECGs and cardiac enzymes are normal.”

The laboratory is complex enough to have a multitude of opportunities for this kind of error, especially in reviewing culture plates, peripheral smears, and other testing that requires judgment. Fortunately, the strategy to avoid confirmation (and other cognitive) bias is simple: stop and think. Put your assumption aside as a hypothesis and ask, “Is there anything else this could be?” This can be difficult to do in a hurry, but avoiding negative outcomes is worth the small amount of time it can take to question yourself. Do you really see those spherocytes? Was that antiglobulin crossmatch really negative? Are you sure there isn’t any sample carryover with that probe?

Sometimes it isn’t worth trusting our assumptions.

NEXT: GFR for Everybody?

The Power of Listening
October 10, 2016 3:43 PM by Scott Warner

Most people are aware of the difference between hearing and listening. And as we know, most of the time we pretend to listen while we’re thinking of the next thing we’re going to say. Listening can be hard work, because it requires focus. It is an essential skill for leaders but also for anyone who wants to be successful.

The web site Skills You Need has this: “Listening is so important that many top employers provide listening skills training for their employees. This is not surprising when you consider that good listening skills can lead to: better customer satisfaction, greater productivity with fewer mistakes, increased sharing of information that in turn can lead to more creative and innovative work.”


Listening is particularly important in healthcare, where information has to be communicated clearly and specifically, in ways that are easily understood. This can be difficult in an environment such as a laboratory, which has many distractions. Most of us manage this basic level of communication well enough day to day.

It’s much harder to listen to how we can help another person, be that person a patient, coworker, or employee. That takes concentration and effort, because it’s about the agenda of the other person. Everything in your mind has to stop to learn what the other person is saying. While we are motivated to listen to patients who are likewise motivated to receive our help, things get much more complicated in the workplace itself.

Mind Tools has some good tips for active listening: pay attention, show that you’re listening, give feedback, don’t judge, and respond appropriately. This is easy if the person is honestly trying to communicate and there are no distractions, neither of which is guaranteed in a busy workplace.

Like any skill, listening takes practice. It is never enough to sit with an employee at a yearly performance evaluation to have an “honest conversation.” That evaluation should be a summation of the year’s communication, if anything, to reach an understanding. Good managers listen constantly and offer feedback on a variety of topics: working conditions, quality issues, morale concerns, and anything else that pops up. A daily huddle is a good place to start.

Any worthwhile disclosure between parties assumes a bond of trust that is built over time. A new manager has a blank slate here, but rebuilding trust can be tough for someone with a reputation for not listening. It isn’t enough, for example, to say you have an open-door policy. An open door means all work stops, the computer screen is ignored, and the phone isn’t answered; that’s what email and voicemail are for. I try very hard to do all these things when a staff member stops by my office. For those few minutes, nothing should be more important than what the other person has to say.

These few steps can help build trust and make sure you are available and interested in what people are saying. If people are willing to talk, it’s much easier to listen.

NEXT: Confirmation Bias

Charge Reconciliation
September 28, 2016 3:32 PM by Scott Warner

Here’s a reality check: if we don’t get paid, the doors don’t stay open. Sure, that’s the problem of the billing office, collection agencies, and insurance companies. Bench techs don’t need to worry about that stuff. Right?

Depends on who you ask, I guess. It’s the manager’s responsibility to bill accurately and promptly for each test performed, and that includes reflex charges (in-house and referral), supplemental charges, and late charges. Ideally, this should be done daily to capture as much revenue as possible rather than waiting until an invoice arrives weeks after the date of service.

Referral testing has made this task especially onerous. It’s common to have “exploding” charges attached to panels that conflict with or duplicate what is already ordered for the day. Along with medical necessity, ABNs, and registration issues, the modern laboratory has become a compliance nightmare.

Still, a bench tech can remain blissfully ignorant, I suppose, depending on the size and function of the laboratory. An awareness of what drives our business never hurts. It can be quite eye-opening.

At a basic level, for example, we have to get paid for the work that we do. But what do we do that we get paid for? A close examination of this may change your practice. For example, in the 2016 CPT Code book the following are listed:

85004 Blood count; automated differential WBC count
85007    blood smear, microscopic examination with manual differential WBC count
85008    blood smear, microscopic examination without manual differential WBC count

The indented descriptions are a clue that these are variations of the parent code; in other words, 85007 and 85008 are variations of 85004 (most labs use 85025). That makes sense, but it means you can’t bill the codes together, even with a modifier. Since reimbursement may be immaterial (especially in the DRG world), does it make sense to perform manual differentials? Are we getting paid for our extra labor? The short answer: not really.
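That parent/child rule is exactly the sort of thing a charge reconciliation script can verify before a claim drops. Here is a minimal sketch in Python; the parent/child map is illustrative only, not a complete CPT edit table:

```python
# Hypothetical parent/child CPT relationships for illustration; a real
# reconciliation job would load these from current CPT/NCCI edit tables.
CHILD_TO_PARENT = {
    "85007": "85004",  # smear review with manual differential
    "85008": "85004",  # smear review without manual differential
}

def conflicting_charges(billed_codes):
    """Return (child, parent) pairs that were billed on the same encounter."""
    billed = set(billed_codes)
    return [(child, parent)
            for child, parent in CHILD_TO_PARENT.items()
            if child in billed and parent in billed]
```

Running `conflicting_charges(["85004", "85007"])` flags the pair for review before the bill goes out.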

At a more basic level, bean counters compare apples to apples, benchmarking your billable tests against labs of similar size. Lab tests that take longer but don’t add value to the report decrease your productivity. Our business has become all about making widgets, it seems. Counting widgets accurately per hour worked can be compared to a standard. While this makes a certain sense, it doesn’t begin to capture the complexity of what we do.

With labs everywhere under fire to do more with less, push older techs to outperform competitors, and create innovation out of thin air, there has never been a more important time to bill accurately and get paid for what we do. But knowing that our charges drop accurately and on time is nearly as difficult a problem to resolve as report distribution. We largely assume our information system does both. Does it?

NEXT: The Power of Listening

Is Lab Morale Plummeting?
September 9, 2016 6:06 AM by Scott Warner

Low morale, to paraphrase Justice Stewart, can be hard to define but we all know it when we see it. But unhappy employees, obvious or not, who are overstressed and overworked can cause a lot of problems: increased absenteeism, short tempers, poor customer service, and more. A workplace where few people smile is a miserable experience on both sides of the counter.

For laboratories, as pointed out in an article on the AACC website, “low morale can have significant implications for patient safety. Low morale can lead to a dangerous disconnect between employees and their jobs that may cause them to cut corners, not pay attention to details, or simply not care whether or not they do the right thing.”

Consultant John Schaefer writes on the American Management Association web site, “In these hectic, overworked, understaffed times, it’s easier than ever... to come across as a leader who believes that everybody is lucky to have a job, so you better suck it up, keep your nose to the grindstone, and don’t complain.”

Given the state of flux healthcare is in, an industry-wide shift to outsourcing and consolidation to reduce cost, and an aging, dwindling staff that is not being replaced, it’s easy to imagine that laboratory morale is suffering. Labs forced to cut back, lose people by attrition, do more with less, and see work outsourced feel under fire. This is all the more difficult because of the nature of laboratory medicine: almost no one outside our profession understands what we do, so support can be absent or a long time coming.

A lab manager is caught in the middle on these issues. A disgruntled, overworked staff is difficult to please, and being under pressure to cut costs from above adds to the mix. It may not be fun to work the bench with low morale, but it’s a nightmare as a manager.

But what causes low morale? Generally it’s dissatisfaction with why decisions are made. Often this is misinterpreted as “employees just want to make decisions,” although I’ve never believed this. Everyone wants respect, and that includes knowing why things are the way they are. Nothing is more demoralizing than being treated without basic dignity and respect. Making decisions is - face it - just more work.

But we all expect leadership to have our back, too. Working longer hours, picking up extra shifts, and being asked to do more has a demoralizing effect, especially when there is no end in sight. As educated professionals we work in a different environment than assembly-line factories with quotas, foremen, and whistles. Is this feeling changing for some labs out there?

Lab managers aren’t immune to “do more with less.” Working managers are asked to do a full-time job in part-time hours. That’s a tough assignment for anyone; I’ll bet the burnout rate for managers in that position is higher.

But does the above describe today’s laboratory? Is lab morale plummeting?

NEXT: Charge Reconciliation

A Script Example
August 30, 2016 7:19 PM by Scott Warner

As promised, here is an example AutoIt script:

; NotepadMemo.au3 - example using AutoIt

#include <MsgBoxConstants.au3>
#include <Date.au3>
Local $sTo
Local $sSubject

If WinExists("Untitled - Notepad") Then
   $sTo = InputBox("Memo", "To:", "All Staff")
   $sSubject = InputBox("Memo", "Memo Subject:")
   WinActivate("Untitled - Notepad")
   Send("M E M O{ENTER}{ENTER}")
   Send("TO: " & $sTo & "{ENTER}{ENTER}")
   Send("FROM: The Big Boss{ENTER}{ENTER}")
   Send("DATE: " & _NowDate() & "{ENTER}{ENTER}")
   Send("SUBJECT: " & $sSubject & "{ENTER}{ENTER}")
   MsgBox($MB_OK, "Memo", "All set to write your memo, Boss. Continue?")
EndIf

What it does: if a blank Notepad document is open when you run this script, it prompts you to enter the To: and Subject: lines of a memo. It then creates the memo header, a trivial time saver. With modification this works with any word processor; all that’s needed is to change the WinExists(title) parameter. With a little more modification you can use this to create emails, too.

A few notes:

  • Your script can be documented with comments that are ignored by the interpreter. These are preceded by a semicolon. You can put these anywhere; anything after the semicolon is ignored. The #include lines are needed by the interpreter. These contain code that is needed for AutoIt’s internal functions.
  • Variables, e.g. $sTo, are “declared” before being used. They are all preceded by a dollar sign. Best-practice naming convention is to also use an identifier for the type of data contained in the variable (s=string, i=integer, a=array). “Local” defines the scope of the variable, that is, what parts of your program can access its contents. This lets you reuse variable names inside functions or declare Global variables that are visible everywhere. That’s a very powerful feature.
  • Variables can be defined by constants ($sTo = “Everyone”) or by the product of a function. In this case, I’ve used the AutoIt InputBox() function, a good example of portability. This kind of internal command can be used everywhere, and you can even create your own.
  • Just like Excel formulae, the string parameter passed to the Send() function can be a constant, a variable, or a combination of the two. You can also include a function, joining the elements with an ampersand.

The actions of the script hinge on a conditional statement (sometimes called a branching statement). The idea is straightforward: If this, then that. There are many variations of conditional statements in computer programming. These can be nested (within an If-then you can have another If-then, and so on) and contain exceptions (If this, then that, else this). Along with loops, they are what drives and redirects your code. All computer languages have these basic elements.

The script is driven by AutoIt’s built-in, Windows-related functions. If you run it without Notepad open, nothing happens, because If WinExists(“Untitled - Notepad”) is false and none of the code between If and EndIf runs. It’s easy to add an Else clause that pops up a MsgBox reminding you to open Notepad first!
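The branching itself is language-agnostic. Here is the same Notepad check sketched in Python, with the window detection stubbed out as a plain list, since the point is the If/Else shape rather than the GUI call:

```python
def memo_action(open_windows):
    """Mirror the script's If/Else/EndIf: act only when the window exists."""
    if "Untitled - Notepad" in open_windows:
        return "build memo header"
    else:
        return "remind user to open Notepad first"
```

The Else branch is the friendly reminder; without it, a false condition simply does nothing.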

So now you have the elements to build a script to automate your LIS: a way to identify the window (WinExists), activate the window (WinActivate), receive any special input (InputBox), send keystrokes (Send, which can also include alt- and ctrl- combinations), and for the program to talk back to you (MsgBox). AutoIt can do much, much more, including mouse movements, mouse clicks, and reading and writing to files. It is a remarkably sophisticated programming tool. But all you need is the basics.

Suppose, for example, you want to automate printing a report. It’s not difficult to make a half-dozen clicks and enter date parameters, but AutoIt can do that faster and easier. You can use Windows task scheduler to run these automatically! All you need to know is what the title of the window is, where the cursor is, and what your next keystroke is. The Send() function includes constants for every keystroke.

The possibilities are numerous within the scope of simple, repetitive functions. This includes maintaining and updating your item master. You can, for example, write a script to change all your barcode labels to a certain type. By hand this would take hours of expensive payroll time. (Can we afford not to do this with our current shortages?) AutoIt can do it quickly and accurately. The best part of writing a script is portability: once a script is written, it can be reused over and over with slight changes.


For those of you who want to try this, let me know how it works out. It will be worth your time.

NEXT: Is Lab Morale Plummeting?

AutoIt Your LIS
August 19, 2016 2:28 PM by Scott Warner

For those of you who like to tinker with programs, who remember typing BASIC programs from magazines in the early Eighties, and a few of you who understand programming, AutoIt will be fun. For the rest of you, it won’t be nearly as difficult as you might think.

The benefits are enormous: speed and accuracy. A computer does exactly what you tell it to do, and telling it doesn’t get much simpler than AutoIt. The site describes it as “a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting.” But what does that mean?

  • BASIC (Beginner’s All-Purpose Symbolic Instruction Code) refers to a family of text-based, interpreted languages. An interpreter program translates the instructions into code your computer understands. AutoIt works in a very similar fashion. The idea of this kind of “high level” language is that you can write commands in easy-to-understand language and let the computer do all the grunt work. Simple!
  • The Windows GUI (Graphical User Interface) refers to everything that is common to Windows programs: the name of the window, button, text box, or other control element.
  • General scripting refers to sending keystrokes and mouse clicks as though a human operator has done so.

Before I continue, I’ll add that AutoIt is one of several such tools; AutoHotKey is a competitor. I prefer AutoIt, though that may be my programming background talking. These and other programs are available and well documented for those with the interest and patience.

First things first: you download AutoIt.

Next things next: you write a simple AutoIt script. This can be done in Notepad, although the installation includes an editor. The command to send keystrokes to an application is Send(string). A string in programming is text data: either a constant enclosed in quotes (“Monday”) or a variable ($i, $value, etc.) holding such text.

The Send command simulates keystrokes exactly as though you typed them on a keyboard. And what about the non-alphabetical keys, such as shift, spacebar, and enter? AutoIt uses “macros” that are predefined to simulate these. Thus the command Send(“{DOWN}{DOWN}Hello{ENTER}”) sends two down arrow strokes, types the word “Hello,” and hits the enter key. AutoIt has a full range of macros that include date and time.
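To see how a Send string mixes macros and literal text, here is a toy tokenizer in Python. It is purely illustrative - this is not how AutoIt actually parses - but it shows why {ENTER} counts as one keystroke while Hello counts as five:

```python
import re

def tokenize_send(s):
    """Split a Send()-style string into macro keys and literal characters."""
    tokens = []
    for match in re.finditer(r"\{([A-Z]+)\}|(.)", s):
        if match.group(1):          # a {MACRO} key such as ENTER or DOWN
            tokens.append(match.group(1))
        else:                        # a single literal character
            tokens.append(match.group(2))
    return tokens
```

For example, `tokenize_send("{DOWN}{DOWN}Hi{ENTER}")` yields two arrow keys, two letters, and an enter key, in order.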

You might be curious why string is in parentheses. All commands are “functions.” Think of a function as a task. Send() is a built-in function that mimics keystrokes, but you can easily write your own. These used to be called “subroutines” in BASIC; the difference here is that functions are portable. A function is like a little program inside your program. If you write a function for one script it will also work in another. Thus the more scripts you write, the less time it will take. It’s even possible to create “libraries” of functions. But you do NOT need to write functions to create a script that saves you time.
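Portability looks much the same in any language. As a Python sketch, here is the memo header from the script example wrapped in a reusable function; the layout and default names are just for illustration:

```python
from datetime import date

def memo_header(to, subject, sender="The Big Boss", today=None):
    """Build a memo header string; pass today= for reproducible output."""
    if today is None:
        today = date.today().strftime("%m/%d/%Y")
    return ("M E M O\n\n"
            f"TO: {to}\n\n"
            f"FROM: {sender}\n\n"
            f"DATE: {today}\n\n"
            f"SUBJECT: {subject}\n\n")
```

Once written, the same function drops into any future script unchanged - that is the time savings compounding.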

Using AutoIt you can automate reports, searches, item master creation, and even make global changes to fields in your LIS. There’s a learning curve involved, but it can save hundreds of valuable tech hours. More to the point, a script doesn’t make mistakes; it does what it’s told. Writing a script to save payroll hours makes good sense. Indeed, your LIS vendors do exactly that for the same reason.

Next, I’ll show a simple example and describe how it works.

NEXT: A Script Example

Automating LIS Maintenance
August 9, 2016 6:01 AM by Scott Warner

If your lab is like most, you have one or maybe two IT geeks who like computers enough to figure out how your information system works. To everyone else it’s a means to an end at best or an irritant at worst. How tests are built and maintained is way off the radar of most people.

Partly, that’s because of the hidden nature of most software. A GUI (Graphical User Interface) is designed to keep it that way, and it’s one good reason that “updates” can be frustrating. Bugs are squashed beneath the GUI only to spawn elsewhere in the code. This is understandable, considering that complex systems can easily have thousands of lines of code to maintain. The more features that get added, the harder it is to squash bugs. Coding is a far from linear process.

Your LIS vendor, like most businesses, likes to work efficiently, and most have canned scripts that can automate tasks. If, for example, you want to set all physicians in the database to have a certain report setting, it’s worth asking your vendor. They may have already built such a program to avoid endless keystrokes. A computer can easily perform any such repetitive task.

But building tests from scratch still falls on your laboratory as part of ongoing maintenance. Unlike daily or preventive analyzer maintenance, it’s not a task you’re likely to cross-train techs on.

One of the biggest challenges is building a large database of tests all at once, either on installation (the vendor may help) or when switching to a different reference lab. An in-house test menu can be small compared to tests sent to your reference lab. Building such a database from scratch can be a nightmare for any lab already stretched to the limit with personnel cuts and a shortage of geeks.

There are scripting solutions that are free to download. With a bit of a learning curve and a little patience you can write a script to automate at least some, if not most, of building tests. These work by parroting your keystrokes, mouse clicks, and even mouse movements. Any keystroke combination - such as Ctrl+C (Copy) and Ctrl+V (Paste) - can be simulated by easy-to-understand English commands.

Scripts are written in Notepad or another text-based editor and run in real time by an interpreter program that does the work. A script is more or less a set of instructions for the scripting program itself to carry out. “Hit this key, wait two seconds, hit that key,” etc. It’s a lot less intimidating than writing an actual computer program.
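To make that concrete, here is a toy interpreter in Python. The action names are made up for illustration; the point is simply that a script is a list of steps walked in order:

```python
import time

def run_script(steps):
    """Walk an instruction list in order: ("key", k) records a keystroke,
    ("wait", seconds) pauses. Returns the keys that were 'typed'."""
    typed = []
    for action, arg in steps:
        if action == "key":
            typed.append(arg)
        elif action == "wait":
            time.sleep(arg)
    return typed
```

“Hit this key, wait two seconds, hit that key” becomes a short list of tuples - far less intimidating than a full program.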

The advantages to scripting outweigh the disadvantages of learning how. Dozens (if not hundreds) of hours of valuable tech time can be saved. Data entry is fast and accurate - a computer only does what it’s told. Scripts can be saved, revised, and used again and again.

Next, I’ll describe how a simple script can automate data entry.

NEXT: AutoIt Your LIS

The Joys of Shopping
July 29, 2016 6:09 AM by Scott Warner

Change is constant, especially in laboratory medicine. Monoclonal antibody assays changed latex agglutination kits; discrete random-assay analyzers changed batch testing; point of care instruments are refocusing core laboratory testing. These days, leaving the field for more than a few years almost guarantees learning some areas all over again. New technology is one of the more exciting aspects of our field.

Lately I’ve been looking at blood gas analyzers for our lab. Ah, the joys of shopping.

Some instruments, such as the IL GEM4000, bring total automation to the process with a cartridge-based, self-contained reagent system. This low-maintenance technology saves time, but lower volumes will waste reagent.

The Radiometer ABL90 FLEX analyzer is cartridge-based but more portable than the GEM. Similarly, it runs controls automatically. What’s really cool about this level of technology is that it’s portable, with a battery and WiFi. The future of blood gas analysis is leaning toward point of care.

There are practical reasons to embrace point of care blood gas testing: more immediate results, faster treatment, and more space in the core laboratory. The downsides are similar to other point of care testing: less control, more competency assessments, and more time to manage the system.

I’m also leaning toward a point of care analyzer for the sake of cost. Our volumes are low enough to make systems like the IL and Radiometer cost-prohibitive. Fortunately, there are several good alternatives:

The Abbott iSTAT is a handheld analyzer without reagents. Everything is self-contained in a small one-use cartridge. While it seems to me that this system is showing its age, its proven technology still works well. Labs can have control over cartridge configurations that include chemistry and coagulation testing. The disadvantage for me is that the iSTAT doesn’t have WiFi capability. It needs to be docked to a central workstation.

Alere’s EPOC system is somewhat similar to the iSTAT but adds that WiFi capability that is essential to point of care testing. I don’t have experience with the EPOC, although I’ve seen demos. It looks like a good point of care alternative.

OPTIMedical’s CCA-TS2 analyzer is an interesting point of care alternative. It is cassette-based like the iSTAT and EPOC, and includes a color touch screen with step-by-step instructions and a built-in printer. It strikes me that this system would require the least amount of training.

Each choice has advantages and disadvantages. From a core laboratory standpoint it comes down to the business of cost: reagents and consumables, service, and maintenance time. But considering point of care alternatives adds many variables: portability, ease of use, ease of training, connectivity, and other issues. Other departments such as the ED, ICU, NICU, and respiratory therapy should be consulted; each will have its own needs. From the lab standpoint it’s still mostly a matter of cost, but deciding where the biggest bang for that buck lies is more complicated.


Ah, the joys of shopping.

NEXT: Automating LIS Maintenance

Embracing Point of Care
July 15, 2016 6:20 AM by Scott Warner

For an industry that is frequently at the forefront of new technology in healthcare, laboratory workers can be among the most resistant to change. Computers coexist with paper; manual diffs are still done when automated counts are far superior; point of care technology is disdained as inferior to central lab testing.

But point of care testing is coming into its own. Indeed, as hinted at in a recent Advance article, microfluidics will bring the core laboratory to the patient. Investors are shelling out big bucks for this idea of “lab on a chip” that will make the current crop of glucose meters look as outdated as 1980s cordless phones. Our world is changing rapidly.

This doesn’t sound like good news for an industry already on the ropes with centralized testing, testing formularies, labor shortages, and demands for faster, cheaper testing. What will happen? I wonder.

Traditionally laboratory professionals have been leery of point of care testing, because non-laboratorians don’t understand the technology or how values are verified. If I ask a lab tech, “How do you know that glucose is accurate?” I will get a detailed response: the QC is within peer-defined limits, the last calibration is acceptable, the other patients I have run meet a statistical norm, etc. If I ask a person expected to do point of care testing - such as an RN - how he or she knows a glucose is accurate, I’ll probably get a less specific response.
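That lab answer can even be written down as a rule. Here is a minimal sketch in Python of the simplest check - a control value inside mean ± 2 SD - with the caveat that real QC programs use multi-rule schemes such as Westgard, not this alone:

```python
def qc_within_limits(value, mean, sd, n_sd=2):
    """True if a control result falls within mean +/- n_sd standard deviations."""
    return abs(value - mean) <= n_sd * sd
```

A control of 102 against a mean of 100 and SD of 2 passes; 105 does not. The point is that “accurate” has a testable definition in the lab.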

But that’s no reason for a lab not to embrace point of care. As point of care testing becomes simpler and more foolproof, laboratories should step up to the plate and make sure they are there to enhance services and improve patient care. This change is inevitable as technology advances.

“Foolproof,” as we all know, is a misnomer - just like a “zero-maintenance” analyzer. Errors are often subtle, and an understanding of what affects results is necessary to produce quality results. It’s what separates laboratory training from all others. Who better to shepherd in a new era of technology?

The lab is unique in that it provides the technology used at the point of care. We likewise have a unique opportunity to educate, encourage, and engage others in what we do. This isn’t easy, but it is inevitable. How easy has it been in your lab? I wonder. There are considerable cultural barriers between the evidence-based focus of laboratories and the support-based focus of other departments.

Has your lab embraced point of care testing? If so, are you an active partner with doctors and nurses or is it a necessary evil brought about by affordable technology?

NEXT: The Joys of Shopping

To Cut Costs, Change
July 1, 2016 6:20 AM by Scott Warner

These days it’s all about change. I have heard a constant drumbeat for the last thirty years that change is the only constant we can count on. The only thing more constant than change is the need to cut costs. Now that laboratories are becoming cost centers in small hospitals and groups are recognizing the economies of scale in centralizing testing and developing formularies to cut costs, those two are intertwined.

In one sense, it’s easy to save money: just find a cheaper vendor. The problem for smaller hospitals is less bargaining power. A Group Purchasing Organization (GPO) can help or hinder, depending on compliance terms. Small labs just don’t have the clout, especially in the Critical Access world, to pressure vendors. The market charges what consumers will pay the world over. So cutting costs by going cheap without sacrificing quality only goes so far and is quickly exhausted as a strategy.

An easier scenario is one understood by every bean counter out there: cut staff. As a rule of thumb payroll is half of all expenses. If full time can be cut to part time, hours reduced, positions eliminated, or managers replaced with “working” managers, it all looks good on the bottom line. When the above is exhausted, this is the next logical step. Labs are benchmarked against each other to determine staffing load, positions are lost through attrition, and managers are challenged to come up with new ways to do more work with fewer people. It’s too bad benchmarking can’t capture the hard work that goes into making a good lab a great lab. And in fact benchmarking doesn’t take quality into account at all.

When that is done, what’s next? Many labs are facing the question today. They’re compliant with GPO contracts, they have evaluated pricing and chosen the cheapest without sacrificing quality, they have evaluated “make it or buy it” to cut costs, and they have lost people to never be replaced.

The answer is a hard one: change.

As an industry we have to change what we do, how we do it, and how we work to deliver a service that is cheaper, faster, and no lower in quality. Doctors and insurance companies do not care about the how and why, but they want completely different things: the first wants speed, reliability, and quality; the second wants the lowest cost. Maybe that means outsourcing, merging, or centralizing. Maybe it means changing.

If the world around us has changed, we must change with it. This requires creative thinking. It requires rethinking what we have done for the last ten or twenty years. It means trying completely different approaches to getting work done. For example, you may have a test kit in your lab that requires more QC if nonwaived and less if waived, and the only difference is the sample type. As another example, maybe moving a workstation to another department will streamline workflow and improve turnaround time.

Only we can change. If we wait for outside forces to change us, it’s too late.

NEXT: Embracing Point of Care

Do Doctors Read Comments?
June 18, 2016 6:40 AM by Scott Warner

Laboratories add comments to reports, some informative (e.g., CRITICAL VALUE REPEATED) and others interpretive (e.g., explaining the meaning and utility of the MDRD estimated GFR equation). It is the latter that brought me to the current question.

Most doctors have little or no idea how results are generated. I think they assume that professional, trained staff under the supervision of a pathologist give them the best number possible. A GFR estimated from creatinine, age, and sex alone is a perfect example. What if a doc assumes it is accurate for dosing? Generally it is not, and it will tend to overestimate GFR in the elderly, the infirm, and others with extremes of muscle mass. It is intended as a screen for chronic kidney disease in a subset of patients with a normal to slightly elevated creatinine value.
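For illustration only, here is a minimal sketch of the 4-variable MDRD study equation (the coefficients are the commonly published IDMS-traceable ones; the function name and example values are mine, and a given LIS may implement a different version, such as CKD-EPI):

```python
def mdrd_egfr(creatinine_mg_dl: float, age_years: int,
              female: bool, black: bool = False) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 4-variable MDRD equation.

    Uses only creatinine, age, sex, and race -- which is exactly why it
    misestimates GFR at extremes of muscle mass.
    """
    egfr = 175.0 * (creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# The same creatinine yields very different estimates:
print(round(mdrd_egfr(1.0, 30, female=False)))  # 30-year-old man
print(round(mdrd_egfr(1.0, 85, female=True)))   # 85-year-old woman
```

On the same creatinine of 1.0 mg/dL, the equation returns roughly 88 for the 30-year-old man but only about 53 for the 85-year-old woman, which is exactly why a single flagged number can mislead without context.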

The question is how much of this instruction needs to appear on a laboratory report? And will the physicians read it?

Second question first: no.

In my experience, unless a result is completely obscure or difficult to interpret, physicians do not read comments for an explanation. They are conditioned (and we are too, when you really think about it) to respond in a knee-jerk fashion to flags generated by the information system. The nuances of using a test are either (a) well known by the physician, who otherwise would not have ordered the test in the first place, or (b) lost in the comments. “Interpretive comments” exist to cover the laboratory.

The real questions here aren’t covering the laboratory or informing the physician, which are two separate things. The first relates to producing quality results and is binary in nature: a result is quality or it isn’t, regardless of comments. The second is an education issue that will never be resolved by a lengthy comment, and a lab that thinks it will is doing the patient a disservice. No, the real question is, “How do I add value to the report?”

I had a conversation with a physician recently that brought this issue into sharp focus.

He said, “We can add a comment to any test, such as potassium. But should we? Unless it explains anything we need to know about the particular test, does it add anything?”

That is an excellent question that most laboratorians and many pathologists have difficulty answering. The only way to know for sure is to ask your medical staff what is helpful to them. And you might be surprised.

Glucose is another example. A fasting glucose <= 100 mg/dL is considered normal, but we report many glucoses on samples drawn throughout the day. Should all laboratories add a comment stating “Glucose ranges are for fasting patients only”? What about non-fasting patients? What about diabetics? What about patients on steroids? There are far too many contingencies, most of which the attending physician is well aware of, for any laboratory to address on a report. And it would be nearly crazy to try.

The litmus test (if there is one) is “Does this comment tell the docs something unique about this number?” And that can be a hard call. The best approach (in my experience) is to ask them. You could be surprised.

NEXT: To Cut Costs, Change

Justifying Staff
May 31, 2016 6:41 AM by Scott Warner

These days it’s all about shortages. Shortages of techs, shortages of patients, and shortages of money. In small hospitals there are fewer of us working with fewer patients for less money. Those who are working are older than the average worker, are wondering who is replacing them, and are tired of hearing about doing more with less when new technology requires people to test, validate, and perform the assays. This is what I’ve been hearing for the last few years.

A few random thoughts on this.

A laboratory test menu is constantly in flux as tests are brought in house or sent out to reduce cost or improve quality, but what I hear now is, “If you send out that test can you reduce hours?” I find myself justifying what little staff I have more and more, and I suspect many managers would say the same thing.

Payroll is a huge portion of ongoing expenses, so that’s understandable. I get that reducing expenses is crucial to managing a dwindling cash flow and can make or break a hospital in a competitive market. But the reality of managing laboratories is different from other departments.

Few benchmarks: there are benchmarking factors, but laboratories are so different in employee mix, services, and outreach that it’s difficult to compare them in a meaningful way. I’ve been in labs with many phlebotomists, for example, and some where techs performed most of the phlebotomy. It all depends on how far away a phlebotomy station is, how versatile the information system is, and other factors. Equipment varies greatly from lab to lab, and not all instruments offer the same mix of quality and speed. “Efficiency” varies from lab to lab, often having little to do with the skill level of techs.

Make or buy issues: bringing in tests to justify staff is fine if it’s cheaper to perform a test in house, but that can be a boondoggle if it requires new instrumentation, more maintenance, more training, and more competency. It’s been my experience in general that people are poor multitaskers. Asking people already doing multiple tasks to do one more creates a drag on overall efficiency and a chance to increase errors that will drag a system down even more.

A manager caught in a feeding frenzy of cost cutting has to recognize benchmarking and other comparisons for what they are: an attempt to manage expenses using verifiable data. That can be a smart idea in a big picture sense. But a manager also has to use dwindling resources with a mindset that these issues aren’t going away, necessitating new ways of thinking about old problems. Our futures in this industry will likely be shaped by more than “make it or buy it,” outsourcing, or cost cutting. We can only do so much of this, and in the meantime the demand for faster, better laboratory results is increasing. What we do has never been more important.

This could mean different workflow models, consolidated platforms, software AI, or something completely different. Whatever it turns out to be for our individual labs, it has to be invented under constant pressure to do more with less. Inventing new ways to produce better care may be the best way to justify staff we have.

NEXT: Do Doctors Read Comments?

Slide Review or Manual Diff?
May 18, 2016 5:59 AM by Scott Warner

The CPT code 85004 (blood count; automated differential WBC count) has several variations, each of which is billed instead of 85004 and includes its work. These codes can be identified in the CPT code book because they are indented. Examples:

  • 85007 (blood smear, microscopic examination with manual differential WBC count)
  • 85008 (blood smear, microscopic examination without manual differential WBC count)

In other words, a manual differential performed alongside a CBC with automated differential is not billable in addition to 85004. Nor is a slide review, a spun hematocrit, or an automated reticulocyte count. All of these are “bundled” into 85004 and considered iterations of a blood count with an automated diff, recognizing that the term “CBC” encompasses variations.
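As a toy sketch of that bundling logic (the codes are from the examples above; the function name and the simplification are mine, and this is not billing guidance):

```python
# Variants in the automated-diff family; they are billed instead of,
# never in addition to, one another.
BUNDLED_WITH_85004 = {"85004", "85007", "85008"}

def billable_codes(performed: set[str]) -> set[str]:
    """Collapse the automated-diff family to a single billable code."""
    family = performed & BUNDLED_WITH_85004
    others = performed - BUNDLED_WITH_85004
    if "85004" in family:
        # Manual diff, smear exam, etc. are bundled into 85004.
        return others | {"85004"}
    # A variant performed without 85004 is billed as itself.
    return others | family
```

So a manual diff done on top of an automated diff bills out as 85004 alone, while a stand-alone smear exam bills as its own indented code.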

Which raises a question: should you reflex to a slide review or a manual differential?

The consensus rules from the ISLH (International Society for Laboratory Hematology) suggest the former. A slide review is a targeted review of a peripheral smear specific to a particular parameter or instrument flag. For example, a white blood cell count >30 thousand reflexes to a “slide review” if it is the first such result or if a delta check fails within 3 days. That slide review logically involves checking for abnormal cells or cells that suggest a leukemoid reaction. A manual differential is necessary only to enumerate abnormal cells.

The slide review concept is an attractive idea in many ways: it takes advantage of technologist judgment, targets a review based on accurate instrument readings, and avoids the busy work of just reflexively banging out a 100-cell diff that may not add value to the report.

More significantly, a slide review reinforces what we already know: the instrument is much, much more accurate than we can ever be. Performing a manual differential to “prove” the instrument count is OK leads physicians down a path of not trusting our technology. Performing a slide review sends the message that we are using the instrument to guide our workflow and look for abnormalities.

In my experience that’s been an easy sell for docs but much harder for staff. Almost all the techs I’ve known have had a knee jerk reaction that a slide review is more work. Why?

For one thing, after performing thousands of manual differentials it becomes second nature. It is comfortable, familiar, and repetitively easy. That said, what is the first thing we all do when we finish? We compare the numbers to the automated differential. We might even repeat the manual differential if we think the numbers don’t match, which is really nutty behavior when you think about it. This raises the question, “What are we really doing?”

This could just be the shock of the new. We have to use new technology to change our diagnostic techniques. As suggested by the ISLH, part of this is slide reviews designed to look for what the instrument suggests. But can we teach old techs new tricks?

NEXT: Justifying Staff

Is PCR Ready for Small Labs?
May 2, 2016 6:36 AM by Scott Warner

What is true for big labs eventually becomes true for small labs, mostly because volume discounts drive affordability. This is most recently true for PCR, a technology that has arrived in small laboratories on platforms such as the Meridian Illumigene and the Nanosphere Verigene.

But is PCR ready for small labs?

I’m intrigued by this technology, and I can easily imagine it playing a role in small laboratories. This kind of platform offers rapid, definitive testing for infectious agents such as C. diff, pertussis, and bacterial agents in blood culture specimens. Testing a stool specimen for the C. diff toxin can eliminate the need for GDH antigen testing that detects non-toxin producing strains. Identifying bacteria sooner in a blood culture can give the physician a 24-hour head start. In theory this faster turnaround time with better results will reduce length of stay and assist antibiotic stewardship programs.

I wonder.

One reason I’m on the fence is expense. It’s great to claim that a new instrument will reduce length of stay, but bean counters aren’t impressed by soft cost savings that can’t really be quantified. Unless one is testing for something completely new that changes a protocol - the introduction of BNP comes to mind - it can be hard to convince bean counters that a faster turnaround time equals fewer inpatient days.

Another reason is wondering how having these results will change treatment. Clinicians embrace technology at different rates. How many of you are still running cardiac troponin and CKMB, for example? How many are still reporting percentile differentials with absolutes? How many are still performing more ESRs than CRPs? It sounds great to me to give the docs a better, faster result, but clinicians have to buy into new technology and use it to its potential. As primary information consumers they always drive demand for technology.

Finally - and this is a big reason - new platforms with radically different technology require a lot of training and competency assessment to get off the ground. In a small lab staffing is often sparse, raising questions such as, “What happens on weekends?” and “What happens when a doc wants it done STAT in the middle of the night?” The advantages of rapid PCR testing imply STAT requests, after all. Labs with squeezed budgets and payroll will have a difficult time fitting anything new and different into their menus, no matter how wonderful. It all takes time, money, attention to detail, and bodies to run the tests. In the current healthcare climate it can be hard enough just to get the bread and butter tests done in a timely fashion as more and more labs adopt a rapid response model.

I don’t know the answer, but I don’t hear physicians beating down the door for in-house PCR just yet.

I’m interested in how this technology has affected your laboratory. Has it lived up to the hype, and what problems have you encountered? Is it worth the expense?

NEXT: Slide Review or Manual Diff?

Moving That Needle
April 21, 2016 3:15 PM by Scott Warner

One of the phrases I hear lately is “we need to move the needle,” meaning enough effort has to be put into change to not just make it stick, but change what matters. This might be customer satisfaction scores, test volumes, or cost containment.

If there’s one thing that change has taught me, it’s that no matter how much things change they seem to stay the same. The needle almost never moves.

I’ve witnessed countless alphabet soup campaigns, LEAN initiatives, customer service gimmicks, changed hours, changed protocols, and new technology. It isn’t so much that each change is filled with false promises or defeated by dashed hopes. In fact, we can all be easily convinced that any of these things can make a difference. They almost never do. Change is a constant in a world stubbornly set in the present rather than anticipating the future.

There are several good reasons. We are motivated by emotions, not numbers. Data is an excellent rationalization tool, but it won’t convince people to change how they feel about something. We each have a gut feeling about what works and what doesn’t based on the culture we work in. That is an extremely powerful force to try to overcome from within. Indeed, it may be impossible for a negative culture to change itself.

We are also motivated by leadership, a quality rare enough that we don’t just know it when we see it - we are surprised. I think leadership is a skill like many others that can be acquired with a good working knowledge of what it is. Without good leaders who can articulate a vision and make decisions that cement values in place, change is just pointless change that has no lasting effects.

Finally, each of us views change as something different. The classic management wisdom is “people hate change,” which is misleading. People love change if it means making their lives more convenient. What they don’t like about change varies enormously. Some don’t like change because they don’t trust the motives behind it; others don’t like it if they weren’t included in the decision making; still others have too much stress in their life outside of work to deal with one more change. Management frequently forgets that change happens everywhere in life, and often a workplace is the most stable environment a person has.

So, how do we move that needle?

For me, why people are “resistant” hints at the answer. We need leaders who are not afraid to motivate people with emotions and who understand that our work lives are only a part of whatever we are going through. We need leaders who can articulate the stakes in plain, blunt language. We need values articulated honestly and plainly enough to be supported by the human interest stories that really motivate people. Leadership needs to walk the talk.

But that’s just me. What about your lab?

NEXT: Is PCR Ready for Small Labs?



About this Blog

    Scott Warner, MLT(ASCP)
    Occupation: Laboratory Manager
    Setting: Critical Access Hospital
  • About Blog and Author
