Stepwise Success

Is Lab Morale Plummeting?
September 9, 2016 6:06 AM by Scott Warner

Low morale, to paraphrase Justice Stewart, can be hard to define, but we know it when we see it. Unhappy employees, obvious or not, who are overstressed and overworked can cause a lot of problems: increased absenteeism, short tempers, poor customer service, and more. A workplace where few people smile is a miserable experience on both sides of the counter.

For laboratories, as pointed out in an article on the AACC website, “low morale can have significant implications for patient safety. Low morale can lead to a dangerous disconnect between employees and their jobs that may cause them to cut corners, not pay attention to details, or simply not care whether or not they do the right thing.”

Consultant John Schaefer writes on the American Management Association web site, “In these hectic, overworked, understaffed times, it’s easier than ever... to come across as a leader who believes that everybody is lucky to have a job, so you better suck it up, keep your nose to the grindstone, and don’t complain.”

Given the state of flux healthcare is in, an industry-wide shift to outsourcing and consolidation to reduce cost, and an aging, dwindling staff that is not being replaced, it’s easy to imagine that laboratory morale is suffering. Labs forced to cut back, lose people by attrition, do more with less, and see work outsourced feel under fire. This is all the more difficult because of the nature of laboratory medicine: almost no one outside our profession understands what we do, so support can be absent or a long time coming.

A lab manager is caught in the middle on these issues. A disgruntled, overworked staff is difficult to please, and being under pressure to cut costs from above adds to the mix. It may not be fun to work the bench with low morale, but it’s a nightmare as a manager.

But what causes low morale? Generally it’s dissatisfaction with why decisions are made. Often this is misinterpreted as “employees just want to make decisions,” although I’ve never believed this. Everyone wants respect, and that includes knowing why things are the way they are. Nothing is more demoralizing than being treated without basic dignity and respect. Making decisions is - face it - just more work.

But we all expect leadership to have our back, too. Working longer hours, picking up extra shifts, and being asked to do more has a demoralizing effect, especially if there is no end in sight. As educated professionals we work in a different environment than assembly-line factories with quotas, foremen, and whistles. Is this feeling changing for some labs out there?

Lab managers aren’t immune to “do more with less.” Working managers are asked to do a full-time job in part-time hours. That’s a tough assignment for anyone; I’ll bet the burnout rate for managers in that position is higher.

But does the above describe today’s laboratory? Is lab morale plummeting?

NEXT: Charge Reconciliation

A Script Example
August 30, 2016 7:19 PM by Scott Warner

As promised, here is an example AutoIt script:

; NotepadMemo.au3 - example using AutoIt

#include <MsgBoxConstants.au3>
#include <Date.au3>
Local $sTo
Local $sSubject

If WinExists("Untitled - Notepad") Then
   $sTo = InputBox("Memo", "To:", "All Staff")
   $sSubject = InputBox("Memo", "Memo Subject:")
   WinActivate("Untitled - Notepad")
   Send("M E M O{ENTER}{ENTER}")
   Send("TO: " & $sTo & "{ENTER}{ENTER}")
   Send("FROM: The Big Boss{ENTER}{ENTER}")
   Send("DATE: " & _NowDate() & "{ENTER}{ENTER}")
   Send("SUBJECT: " & $sSubject & "{ENTER}{ENTER}")
   Send("____________________________________________________________________________{ENTER}{ENTER}")
   MsgBox($MB_OK, "Memo", "All set to write your memo, Boss. Continue?")
EndIf

Exit

What it does: if a blank Notepad document is open when you run this script, it prompts you to enter the To: and Subject: lines of a memo. It then creates the memo header, a trivial time saver. With modification this works with any word processor; all that’s needed is to change the window title passed to WinExists() and WinActivate(). With a little more modification you can use this to create emails, too.

A few notes:

  • Your script can be documented with comments, which the interpreter ignores. Comments are preceded by a semicolon and can go anywhere; anything after the semicolon is ignored. The #include lines pull in files the script needs: MsgBoxConstants.au3 defines constants such as $MB_OK, and Date.au3 provides functions such as _NowDate().
  • Variables, e.g. $sTo, are “declared” before being used. All variables are preceded by a dollar sign, and best-practice naming adds a prefix for the type of data the variable holds (s=string, i=integer, a=array). “Local” defines the scope of the variable, that is, what parts of your program can access its contents. This lets you reuse variable names inside functions or declare Global variables that are visible everywhere. That’s a very powerful feature.
  • Variables can be assigned constants ($sTo = “Everyone”) or the return value of a function. In this case, I’ve used the AutoIt InputBox() function, a good example of portability: this kind of built-in function can be called anywhere, and you can even create your own.
  • Just like Excel formulae, the string parameter passed to the Send() function can be a constant, a variable, or a combination of the two. You can even include a function, joining the elements with ampersands, as in the sketch below.
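
A quick sketch of that last point, assuming a blank Notepad window is open (the greeting is just an example):

; Concatenate.au3 - a constant, a variable, and a function joined with &
#include <Date.au3>

Local $sName = "Boss"
WinActivate("Untitled - Notepad")
Send("Hello " & $sName & ", today is " & _NowDate() & "{ENTER}")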

The actions of the script hinge on a conditional statement (sometimes called a branching statement). The idea is straightforward: if this, then that. There are many variations of conditional statements in computer programming. They can be nested (within an If-Then you can have another If-Then, and so on) and contain exceptions (if this, then that, else this). Along with loops, conditionals are what drive and redirect your code. All computer languages have these basic elements.

The script is driven by AutoIt’s built-in, Windows-related functions. If you run it without Notepad open, nothing happens, because If WinExists("Untitled - Notepad") is false and none of the code between If and EndIf runs. It’s easy to add an Else clause that pops up a MsgBox reminding you to open Notepad first!
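
A minimal sketch of that Else variant:

; MemoGuard.au3 - remind the user to open Notepad first
#include <MsgBoxConstants.au3>

If WinExists("Untitled - Notepad") Then
   ; ... build the memo header as in the script above ...
Else
   MsgBox($MB_OK, "Memo", "Open a blank Notepad window first!")
EndIf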

So now you have the elements to build a script to automate your LIS: a way to identify the window (WinExists), activate the window (WinActivate), receive any special input (InputBox), send keystrokes (Send, which can also include Alt and Ctrl combinations), and a way for the program to talk back to you (MsgBox). AutoIt can do much, much more, including mouse movements, mouse clicks, and reading and writing files. It is a remarkably sophisticated programming tool. But all you need is the basics.

Suppose, for example, you want to automate printing a report. It’s not difficult to make a half-dozen clicks and enter date parameters, but AutoIt can do it faster and more easily. You can even use the Windows Task Scheduler to run such scripts automatically! All you need to know is the title of the window, where the cursor is, and what your next keystroke should be. The Send() function includes a macro for every keystroke.
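
Here is a sketch of the idea; the window title and menu keystrokes are placeholders you would replace with whatever your own LIS expects:

; PrintDailyReport.au3 - sketch only; title and keystrokes are hypothetical
#include <Date.au3>

If WinExists("LIS - Main Menu") Then
   WinActivate("LIS - Main Menu")
   Send("!r")                                          ; Alt+R opens a hypothetical Reports menu
   Send("{ENTER}")                                     ; select the first report
   Send(_DateAdd("D", -1, _NowCalcDate()) & "{ENTER}") ; yesterday's date as a parameter
   Send("{ENTER}")                                     ; accept and print
EndIf

Saved to a .au3 file, this is exactly the kind of script Task Scheduler can run unattended.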

The possibilities are numerous within the scope of simple, repetitive functions. This includes maintaining and updating your item master. You can, for example, write a script to change all your barcode labels to a certain type. By hand this would take hours of expensive payroll time. (Can we afford not to do this with our current shortages?) AutoIt can do it quickly and accurately. The best part of writing a script is portability: once a script is written it can be reused over and over with slight changes.
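
As a sketch of the barcode example, with a hypothetical item file and placeholder window title and keystrokes:

; ChangeBarcodes.au3 - repeat the same edit for a list of items
#include <MsgBoxConstants.au3>

Local $hFile = FileOpen("items.txt", 0)   ; one item code per line (hypothetical file)
If $hFile = -1 Then
   MsgBox($MB_OK, "Error", "Cannot open items.txt")
   Exit
EndIf

While 1
   Local $sItem = FileReadLine($hFile)
   If @error Then ExitLoop                ; stop at end of file
   WinActivate("LIS - Item Master")       ; placeholder window title
   Send($sItem & "{ENTER}")               ; look up the item
   Send("{TAB 3}BARCODE2{ENTER}")         ; hypothetical keystrokes to change the label type
   Sleep(250)                             ; give the LIS time to catch up
WEnd
FileClose($hFile)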


For those of you who want to try this, let me know how it works out. It will be worth your time.

NEXT: Is Lab Morale Plummeting?

AutoIt Your LIS
August 19, 2016 2:28 PM by Scott Warner

For those of you who like to tinker with programs, who remember typing BASIC programs from magazines in the early Eighties, and a few of you who understand programming, AutoIt will be fun. For the rest of you, it won’t be nearly as difficult as you might think.

The benefits are enormous: speed and accuracy. A computer does exactly what you tell it to do, and telling it doesn’t get much simpler than AutoIt. The site describes it as “a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting.” But what does that mean?

  • BASIC (Beginner’s All-Purpose Symbolic Instruction Code) refers to a family of text-based, interpreted languages. An interpreter program translates the instructions into code your computer understands. AutoIt works in a very similar fashion. The idea of this kind of “high level” language is that you can write commands in easy-to-understand language and let the computer do all the grunt work. Simple!
  • The Windows GUI (Graphical User Interface) refers to everything that is common to Windows programs: the name of the window, button, text box, or other control element.
  • General scripting refers to sending keystrokes and mouse clicks as though a human operator has done so.

Before I continue, I’ll add that AutoIt is one of several such tools; AutoHotKey is a competitor. I prefer AutoIt, but I have a programming background and appreciate what it does. These and other programs are freely available and well-documented for those with the interest and patience.

First things first: you download AutoIt.

Next things next: you write a simple AutoIt script. This can be done in Notepad, although the installation includes an editor. The command that sends keystrokes to an application is Send(string). A string in programming is text data: a constant enclosed in quotes (“Monday”) or a variable that holds such text ($sDay, $sValue, etc.).

The Send command simulates keystrokes exactly as though you typed them on a keyboard. And what about the non-alphabetical keys, such as Shift, the spacebar, and Enter? AutoIt uses predefined “macros” to simulate these. Thus the command Send(“{DOWN}{DOWN}Hello{ENTER}”) sends two down-arrow strokes, types the word “Hello,” and hits the Enter key. AutoIt also has a full range of macros for things like the date and time.
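
As a quick test, assuming a blank Notepad window is open, this sketch types a greeting and today’s date; @MON, @MDAY, and @YEAR are built-in macros that hold the current date:

; MacroTest.au3 - keystroke macros plus date macros
WinActivate("Untitled - Notepad")
Send("{DOWN}{DOWN}Hello{ENTER}")
Send("Today is " & @MON & "/" & @MDAY & "/" & @YEAR & "{ENTER}")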

You might be curious why string is in parentheses. All commands are “functions.” Think of a function as a task. Send() is a built-in function that mimics keystrokes, but you can easily write your own. These used to be called “subroutines” in BASIC; the difference here is that functions are portable. A function is like a little program inside your program: if you write a function for one script, it will also work in another. Thus the more scripts you write, the less time each new one takes. It’s even possible to create “libraries” of functions. But you do NOT need to write functions to create a script that saves you time.
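
A minimal sketch of a user-defined function; the function name and memo text are just examples:

; SendLine.au3 - a tiny reusable function
Func SendLine($sText)   ; type a line of text and hit Enter
   Send($sText & "{ENTER}")
EndFunc

WinActivate("Untitled - Notepad")   ; assumes a blank Notepad is open
SendLine("M E M O")
SendLine("")
SendLine("TO: All Staff")

Once SendLine() is written, any other script can reuse it as-is.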

Using AutoIt you can automate reports, searches, item master creation, and even make global changes to fields in your LIS. There’s a learning curve involved, but it can save hundreds of valuable tech hours. More to the point, a script doesn’t make mistakes; it does what it’s told. Writing a script to save payroll hours makes good sense. Indeed, your LIS vendors do exactly that for the same reason.

Next, I’ll show a simple example and describe how it works.

NEXT: A Script Example

Automating LIS Maintenance
August 9, 2016 6:01 AM by Scott Warner

If your lab is like most, you have one or maybe two IT geeks who like computers enough to figure out how your information system works. To everyone else it’s a means to an end at best or an irritant at worst. How tests are built and maintained is way off the radar of most people.

Partly, that’s because of the hidden nature of most software. A GUI (Graphical User Interface) is designed to keep it that way, and it’s one good reason that “updates” can be frustrating. Bugs are squashed beneath the GUI only to spawn elsewhere in the code. This is understandable, considering that complex systems can easily have thousands of lines of code to maintain. The more features that get added, the harder it is to squash bugs. Coding is a far from linear process.

Your LIS vendor, like most businesses, likes to work efficiently, and most have canned scripts that can automate tasks. If, for example, you want to set all physicians in the database to have a certain report setting, it’s worth asking your vendor. They may have already built such a program to avoid endless keystrokes. A computer can easily perform any such repetitive task.

But building tests from scratch still falls on your laboratory as part of ongoing maintenance. Unlike daily or preventive analyzer maintenance, it isn’t a task most labs cross-train techs to perform.

One of the biggest challenges is building a large database of tests all at once, either on installation (the vendor may help) or when switching to a different reference lab. An in-house test menu can be small compared to tests sent to your reference lab. Building such a database from scratch can be a nightmare for any lab already stretched to the limit with personnel cuts and a shortage of geeks.

There are scripting solutions that are free to download. With a bit of a learning curve and a little patience you can write a script to automate at least some, if not most, of building tests. These work by parroting your keystrokes, mouse clicks, and even mouse movements. Any keystroke combination - such as Ctrl+C (Copy) or Ctrl+V (Paste) - can be simulated with easy-to-understand, English-like commands.

Scripts are written in Notepad or another text-based editor and run in real time by an interpreter program that does the work. A script is more or less a set of instructions for the scripting program itself to carry out. “Hit this key, wait two seconds, hit that key,” etc. It’s a lot less intimidating than writing an actual computer program.
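
In AutoIt, the tool taken up in the next post, “hit this key, wait two seconds, hit that key” looks about like this (the keys themselves are arbitrary):

Send("^c")        ; ^ means Ctrl, so this simulates Ctrl+C (Copy)
Send("{F5}")      ; hit this key...
Sleep(2000)       ; ...wait two seconds (in milliseconds)...
Send("{ENTER}")   ; ...hit that key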

The advantages to scripting outweigh the disadvantages of learning how. Dozens (if not hundreds) of hours of valuable tech time can be saved. Data entry is fast and accurate - a computer only does what it’s told. Scripts can be saved, revised, and used again and again.

Next, I’ll describe how a simple script can automate data entry.

NEXT: AutoIt Your LIS

The Joys of Shopping
July 29, 2016 6:09 AM by Scott Warner

Change is constant, especially in laboratory medicine. Monoclonal antibody assays displaced latex agglutination kits; discrete random-access analyzers displaced batch testing; point of care instruments are refocusing core laboratory testing. These days, leaving the field for more than a few years almost guarantees learning some areas all over again. New technology is one of the more exciting aspects of our field.

Lately I’ve been looking at blood gas analyzers for our lab. Ah, the joys of shopping.

Some instruments, such as the IL GEM4000, bring total automation to the process with a cartridge-based, self-contained reagent system. This low-maintenance technology saves time, but lower volumes will waste reagent.

The Radiometer ABL90 FLEX analyzer is cartridge-based but more portable than the GEM. Similarly, it runs controls automatically. What’s really cool about this level of technology is the portability: battery power and WiFi. The future of blood gas analysis is leaning toward point of care.

There are practical reasons to embrace point of care blood gas testing: more immediate results, faster treatment, and more space in the core laboratory. The downsides are those of other point of care testing: less control, more competency assessment, and more time to manage the system.

I’m also leaning toward a point of care analyzer for the sake of cost. Our volumes are low enough to make systems like the IL and Radiometer cost prohibitive. Fortunately, there are several good alternatives:

The Abbott iSTAT is a handheld analyzer without separate reagents; everything is self-contained in a small single-use cartridge. While it seems to me that this system is showing its age, its proven technology still works well. Labs have control over cartridge configurations that include chemistry and coagulation testing. The disadvantage for me is that the iSTAT doesn’t have WiFi capability; it needs to be docked to a central workstation.

Alere’s EPOC system is somewhat similar to the iSTAT but adds that WiFi capability that is essential to point of care testing. I don’t have experience with the EPOC, although I’ve seen demos. It looks like a good point of care alternative.

OPTIMedical’s CCA-TS2 analyzer is an interesting point of care alternative. It is cassette-based like the iSTAT and EPOC, and includes a color touch screen with step-by-step instructions and a built-in printer. It strikes me that this system would require the least amount of training.

Each choice has advantages and disadvantages. From a core laboratory standpoint it comes down to the business of cost: reagents and consumables, service, and maintenance time. But considering point of care alternatives adds many variables: portability, ease of use, ease of training, connectivity, and other issues. Other departments such as the ED, ICU, NICU, and respiratory therapy should be consulted; each will have its own needs. From the lab standpoint it’s still mostly a matter of cost, but deciding what delivers the biggest bang for that buck is more complicated.


Ah, the joys of shopping.

NEXT: Automating LIS Maintenance

Embracing Point of Care
July 15, 2016 6:20 AM by Scott Warner

For an industry that is frequently at the forefront of new technology in healthcare, laboratory workers can be among the most resistant to change. Computers coexist with paper; manual diffs are still done when automated counts are far superior; point of care technology is disdained as inferior to central lab testing.

But point of care testing is coming into its own. Indeed, as hinted at in a recent Advance article, microfluidics will bring the core laboratory to the patient. Investors are shelling out big bucks for this idea of “lab on a chip” that will make the current crop of glucose meters look as outdated as 1980s cordless phones. Our world is changing rapidly.

This doesn’t sound like good news for an industry already on the ropes with centralized testing, testing formularies, labor shortages, and demands for faster, cheaper testing. What will happen? I wonder.

Traditionally laboratory professionals have been leery of point of care testing because non-laboratorians don’t understand the technology or how values are verified. If I ask a lab tech, “How do you know that glucose is accurate?” I will get a detailed response: the QC is within peer-defined limits, the last calibration is acceptable, the other patients I have run meet a statistical norm, etc. If I ask a person expected to do point of care testing - such as an RN - how he or she knows a glucose is accurate, I’ll probably get a less specific response.

But that’s no reason for a lab not to embrace point of care. As point of care testing becomes simpler and more foolproof, laboratories should step up to the plate and make sure they are there to enhance services and improve patient care. This change is inevitable as technology advances.

“Foolproof” is a misnomer, as we all know, just like “zero-maintenance analyzer.” Errors are often subtle, and an understanding of what affects results is necessary to produce quality results. It’s what separates laboratory training from all others. Who better to shepherd in a new era of technology?

The lab is unique in that it provides technology useful at the point of care. We likewise have a unique opportunity to educate, encourage, and engage others in what we do. This isn’t easy, but it is inevitable. How easy has it been in your lab? I wonder. There are considerable cultural barriers between the evidence-based focus of laboratories and the support-based focus of other departments.

Has your lab embraced point of care testing? If so, are you an active partner with doctors and nurses or is it a necessary evil brought about by affordable technology?

NEXT: The Joys of Shopping

To Cut Costs, Change
July 1, 2016 6:20 AM by Scott Warner

These days it’s all about change. I have heard a constant drumbeat for the last thirty years that change is the only constant we can count on. The only thing more constant than change is the need to cut costs. Now that laboratories are becoming cost centers in small hospitals and groups are recognizing the economies of scale in centralizing testing and developing formularies to cut costs, those two are intertwined.

In one sense, it’s easy to save money: just search for a cheaper vendor. The problem for smaller hospitals is less bargaining power. A Group Purchasing Organization (GPO) can help or hinder, depending on compliance terms. Small labs just don’t have the clout, especially in the Critical Access world, to pressure vendors; the world over, the market charges what consumers will pay. So cutting costs by going cheap without sacrificing quality only goes so far and, as a strategy, is quickly exhausted.

An easier scenario is one understood by every bean counter out there: cut staff. As a rule of thumb payroll is half of all expenses. If full time can be cut to part time, hours reduced, positions eliminated, or managers replaced with “working” managers, it all looks good on the bottom line. When the above is exhausted, this is the next logical step. Labs are benchmarked against each other to determine staffing load, positions are lost through attrition, and managers are challenged to come up with new ways to do more work with fewer people. It’s too bad benchmarking can’t capture the hard work that goes into making a good lab a great lab. And in fact benchmarking doesn’t take quality into account at all.

When that is done, what’s next? Many labs are facing the question today. They’re compliant with GPO contracts, they have evaluated pricing and chosen the cheapest without sacrificing quality, they have evaluated “make it or buy it” to cut costs, and they have lost people to never be replaced.

The answer is a hard one: change.

As an industry we have to change what we do, how we do it, and how we work to deliver a service that is cheaper, faster, and that doesn’t sacrifice quality. Doctors (and the insurance companies) do not care about the how and why, but they want completely different things: the first wants speed, reliability, and quality; the second wants the lowest cost. Maybe that means outsourcing, merging, or centralizing. Maybe it means changing.

If the world around us has changed, we must change with it. This requires creative thinking. It requires rethinking what we have done for the last ten or twenty years. It means trying completely different approaches to getting work done. For example, you may have a test kit in your lab that requires more QC if nonwaived and less if waived, and the only difference is the sample type. As another example, maybe moving a workstation to another department will streamline workflow and improve turnaround time.

Only we can change. If we wait for outside forces to change us, it’s too late.

NEXT: Embracing Point of Care

Do Doctors Read Comments?
June 18, 2016 6:40 AM by Scott Warner

Laboratories add comments to reports, some of which are informative, e.g. CRITICAL VALUE REPEATED, and others interpretive, e.g. explaining the meaning and utility of the MDRD estimated GFR equation. It is the latter that brought me to the current question.

Most doctors have little or no idea how results are generated. I think they assume that professional, trained staff under the supervision of a pathologist give them the best number possible. A GFR estimated from creatinine, age, and sex alone is a perfect example. What if a doc assumes it is accurate for dosing? Generally it is not: the equation tends to overestimate GFR in the elderly, the infirm, and others with extremes of muscle mass. It is intended as a screen for chronic kidney disease in a subset of patients with normal to slightly elevated creatinine values.

The question is how much of this instruction needs to appear on a laboratory report? And will the physicians read it?

Second question first: no.

In my experience, unless a result is completely opaque or difficult to interpret, physicians do not read comments for an explanation. They are conditioned (and we are too, when you really think about it) to respond in a knee-jerk fashion to flags generated by the information system. The nuances of utilizing a test are either a) well known by the physician, who otherwise would not have ordered the test, or b) lost in the comments. “Interpretive comments” exist to cover the laboratory.

The real issue here isn’t covering the laboratory or informing the physician, which are two separate problems. The first relates to producing quality results and is boolean in nature: a result is quality or it isn’t, regardless of comments. The second is an education issue that will never be resolved by a lengthy comment, and a lab that thinks so is doing the patient a disservice. No, the real issue is, “How do I add value to the report?”

I had a conversation with a physician recently that brought this issue into sharp focus.

He said, “We can add a comment to any test, such as potassium. But should we? Unless it explains anything we need to know about the particular test, does it add anything?”

That is an excellent question that most laboratorians and many pathologists have difficulty answering. The only way to know for sure is to ask your medical staff what is helpful to them. And you might be surprised.

Glucose is another example. A fasting glucose <=100 mg/dL is considered normal, but we report many glucoses on samples drawn throughout the day. Should all laboratories add a comment stating “Glucose ranges are for fasting patients only”? What about non-fasting patients? What about diabetics? What about patients on steroids? There are far too many contingencies, most of which the attending physician is more than aware of, for any laboratory to address on a report. And it would be nearly crazy to try.

The litmus test (if there is one) is “Does this comment tell the docs something unique about this number?” And that can be a hard call. The best approach (in my experience) is to ask them. You could be surprised.

NEXT: To Cut Costs, Change

Justifying Staff
May 31, 2016 6:41 AM by Scott Warner

These days it’s all about shortages. Shortages of techs, shortages of patients, and shortages of money. In small hospitals there are fewer of us working with fewer patients for less money. Those who are working are older than the average worker, are wondering who is replacing them, and are tired of hearing about doing more with less when new technology requires people to test, validate, and perform the assays. This is what I’ve been hearing for the last few years.

A few random thoughts on this.

A laboratory test menu is constantly in flux as tests are brought in house or sent out to reduce cost or improve quality, but what I hear now is, “If you send out that test can you reduce hours?” I find myself justifying what little staff I have more and more, and I suspect many managers would say the same thing.

Payroll is a huge portion of ongoing expenses, so that’s understandable. I get that reducing expenses is crucial to managing a dwindling cash flow and can make or break a hospital in a competitive market. But the reality of managing laboratories is different from other departments.

Few benchmarks: there are benchmarking factors, but laboratories are so different in employee mix, services, and outreach that it’s difficult to compare them in a meaningful way. I’ve been in labs with many phlebotomists, for example, and some where techs performed most of the phlebotomy. It all depends on how far away a phlebotomy station is, how versatile the information system is, and other factors. Equipment varies greatly from lab to lab, and not all instruments offer the same mix of quality and speed. “Efficiency” varies from lab to lab, often having little to do with the skill level of techs.

Make or buy issues: bringing in tests to justify staff is fine if it’s cheaper to perform a test in house, but that can be a boondoggle if it requires new instrumentation, more maintenance, more training, and more competency. It’s been my experience in general that people are poor multitaskers. Asking people already doing multiple tasks to do one more creates a drag on overall efficiency and a chance to increase errors that will drag a system down even more.

A manager caught in a feeding frenzy of cost cutting has to recognize benchmarking and other comparisons for what they are: an attempt to manage expenses using verifiable data. That can be a smart idea in a big picture sense. But a manager also has to use dwindling resources with a mindset that these issues aren’t going away, necessitating new ways of thinking about old problems. Our futures in this industry will likely be shaped by more than “make it or buy it,” outsourcing, or cost cutting. We can only do so much of this, and in the meantime the demand for faster, better laboratory results is increasing. What we do has never been more important.

This could mean different workflow models, consolidated platforms, software AI, or something completely different. Whatever it turns out to be for our individual labs, it has to be invented under constant pressure to do more with less. Inventing new ways to produce better care may be the best way to justify staff we have.

NEXT: Do Doctors Read Comments?

Slide Review or Manual Diff?
May 18, 2016 5:59 AM by Scott Warner

The CPT code 85004 (blood count; automated differential WBC count) has many variations, each of which is charged instead of 85004 and includes the work it describes. These codes can be identified in the CPT code book because they are indented. Examples:

  • 85007 (blood smear, microscopic examination with manual differential WBC count)
  • 85008 (blood smear, microscopic examination without manual differential WBC count)

In other words, a manual differential performed when a CBC with automated differential is performed is not billable in addition to 85004; all of these are “bundled” into 85004. Nor is a slide review, a spun hematocrit, or an automated reticulocyte count billable separately. All are considered iterations of a blood count with an automated diff, recognizing that the term “CBC” encompasses variations.

One of which is: should you reflex to a slide review or a manual differential?

The consensus rules from the ISLH (International Society for Laboratory Hematology) suggest the former. A slide review is a targeted review of a peripheral smear specific to a particular parameter or instrument flag. For example, a white blood cell count of >30 thousand reflexes to a “slide review” if it is the first occurrence or fails a delta check within 3 days. That slide review logically involves checking for abnormal cells or cells that suggest a leukemoid reaction. A manual differential is necessary only to enumerate abnormal cells.

The slide review concept is an attractive idea in many ways: it takes advantage of technologist judgment, targets a review based on accurate instrument readings, and avoids the busy work of just reflexively banging out a 100-cell diff that may not add value to the report.

More significantly, a slide review reinforces what we already know: the instrument is much, much more accurate than we can ever be. Performing a manual differential to “prove” the instrument count is OK leads physicians down a path of not trusting our technology. Performing a slide review sends the message that we are using the instrument to guide our workflow and look for abnormalities.

In my experience that’s been an easy sell for docs but much harder for staff. Almost all the techs I’ve known have had a knee jerk reaction that a slide review is more work. Why?

For one thing, after performing thousands of manual differentials it becomes second nature. It is comfortable, familiar, and repetitiously easy. Having said that, what is the first thing we all do when we finish? We compare the numbers to the automated differential. We might even repeat the manual differential if we think the numbers don’t match, which is really nutty behavior when you think about it. It raises the question, “What are we really doing?”

This could just be the shock of the new. We have to use new technology to change our diagnostic techniques. As suggested by the ISLH, part of this is slide reviews designed to look for what the instrument suggests. But can we teach old techs new tricks?

NEXT: Justifying Staff

Is PCR Ready for Small Labs?
May 2, 2016 6:36 AM by Scott Warner

What is true for big labs eventually becomes true for small labs, mostly because volume discounts drive affordability. This is most recently true of PCR, a technology that has arrived in small laboratories in two platforms, the Meridian Illumigene and the Nanosphere Verigene.

But is PCR ready for small labs?

I’m intrigued by this technology, and I can easily imagine it playing a role in small laboratories. This kind of platform offers rapid, definitive testing for infectious agents such as C. diff, pertussis, and bacterial agents in blood culture specimens. Testing a stool specimen for the C. diff toxin can eliminate the need for GDH antigen testing that detects non-toxin producing strains. Identifying bacteria sooner in a blood culture can give the physician a 24-hour head start. In theory this faster turnaround time with better results will reduce length of stay and assist antibiotic stewardship programs.

I wonder.

One reason I’m on the fence is expense. It’s great to claim that a new instrument will reduce length of stay, but bean counters aren’t impressed by soft cost savings that can’t really be quantified. Unless one is testing for something completely new that changes a protocol - the introduction of BNP comes to mind - it can be hard to convince bean counters that a faster turnaround time equals fewer inpatient days.

Another reason is wondering how having these results will change treatment. Clinicians embrace technology at different rates. How many of you are still running cardiac troponin and CKMB, for example? How many are still reporting percent differentials alongside absolute counts? How many are still performing more ESRs than CRPs? It sounds great to me to give the docs a better, faster result, but clinicians have to buy into new technology and use it to its potential. As the primary information consumers, they always drive demand for technology.

Finally - and this is a big reason - new platforms with radically different technology require a lot of training and competency assessment to get off the ground. In a small lab staffing is often sparse, raising questions such as, “What happens on weekends?” and “What happens when a doc wants it done STAT in the middle of the night?” The advantages of rapid PCR testing imply STAT requests, after all. Labs with squeezed budgets and payroll will have a difficult time fitting anything new and different into their menus, no matter how wonderful. It all takes time, money, attention to detail, and bodies to run the tests. In the current healthcare climate it can be hard enough just to get the bread and butter tests done in a timely fashion as more and more labs adopt a rapid response model.

I don’t know the answer, but I don’t hear physicians beating down the door for in-house PCR just yet.

I’m interested in how this technology has affected your laboratory. Has it lived up to the hype, and what problems have you encountered? Is it worth the expense?

NEXT: Slide Review or Manual Diff?

Moving That Needle
April 21, 2016 3:15 PM by Scott Warner

One of the phrases I hear lately is “we need to move the needle,” meaning enough effort has to be put into change to not just make it stick, but change what matters. This might be customer satisfaction scores, test volumes, or cost containment.

If there’s one thing that change has taught me, it’s that no matter how much things change they seem to stay the same. The needle almost never moves.

I’ve witnessed countless alphabet soup campaigns, LEAN initiatives, customer service gimmicks, changed hours, changed protocols, and new technology. It isn’t so much that each change is filled with false promises or defeated by dashed hopes. In fact, we can all be easily convinced that any of these things can make a difference. They almost never do. Change is a constant in a world stubbornly set in the present rather than anticipating the future.

Why?

There are several good reasons. We are motivated by emotions, not numbers. Data is an excellent rationalization tool, but it won’t convince people to change how they feel about something. We each have a gut feeling about what works and what doesn’t based on the culture we work in. That is an extremely powerful force to try to overcome from within. Indeed, it may be impossible for a negative culture to change itself.

We are also motivated by leadership, a quality rare enough that we don’t just know it when we see it - we are surprised. I think leadership is a skill like many others that can be acquired with a good working knowledge of what it is. Without good leaders who can articulate a vision and make decisions that cement values in place, change is just pointless change that has no lasting effects.

Finally, each of us views change as something different. The classic management wisdom is “people hate change,” which is misleading. People love change if it means making their lives more convenient. What they don’t like about change varies enormously. Some don’t like change because they don’t trust the motives behind it; others don’t like it if they weren’t included in the decision making; still others have too much stress in their life outside of work to deal with one more change. Management frequently forgets that change happens everywhere in life, and often a workplace is the most stable environment a person has.

So, how do we move that needle?

For me, why people are “resistant” hints at the answer. We need leaders who are not afraid to motivate people with emotions and who understand that our work lives are but a part of whatever we are going through. We need leaders who can articulate the stakes in plain, blunt language. We need values articulated honestly and plainly enough to be supported by the human interest stories that really motivate people. Leadership needs to walk the talk.

But that’s just me. What about your lab?

NEXT: Is PCR Ready for Small Labs?

Limit Input
April 13, 2016 4:59 PM by Scott Warner

One of the oft-quoted nuggets in The Elements of Style is “omit needless words.” I’ve seen this rule praised and criticized with equal fervor. As a writer, pruning and trimming prose seems like a necessary path to clarity for the reader. It is also intensely personal and driven by one’s own style. But in general, it’s a good rule of thumb. If it takes 4 words to say the same thing in 15, use 4. (Have politicians read Strunk and White?)

Unfortunately as bench techs we don’t have the option to rewrite and revise with time to mull over or forget what we’ve written. What we tell physicians has to be clear, concise, unequivocal, and reproducible. The more judgment that is involved in a test, the more difficult this can be and the more variation one encounters.

Variation dilutes quality, clinically significant or otherwise, and invites STAT abuse. If, for example, 95% of in-house chemistry tests are completed in a narrow time frame, there is less variation and the process creates consistent expectations for providers. They are less likely to order a test STAT to bump it to the top of a queue that has widely variable turnaround times. To a lesser extent, limiting variability in reporting non-numeric values builds similar expectations and can reduce telephone calls, repeats, and requests for further review.

I described a way to simplify urinalysis and reduce variation in a recent blog. While seemingly trivial, this kind of approach reduces the number of choices made by techs, another form of variation that increases the amount of work done.

One of the advantages information technology brings to the laboratory is the ability to reduce variation by defining result entry parameters. For numeric results this can mean entering a lower reportable limit in a test definition so the system automatically reports a value below the assay range as less than that limit. For alphanumeric (text) values this can mean limiting the choices to a drop-down list or selection box.
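
Here is the numeric idea sketched in AutoIt syntax; the limit and result are made-up numbers, and a real LIS does this inside its test definition:

; LowerLimit.au3 - report a value below the assay range as "<limit"
Local $fLimit = 0.01        ; hypothetical lower reportable limit
Local $fResult = 0.004      ; hypothetical instrument result
Local $sReport
If $fResult < $fLimit Then
   $sReport = "<" & $fLimit ; reported as "<0.01"
Else
   $sReport = $fResult
EndIf
MsgBox(0, "Result", $sReport)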

A simple example is limiting qualitative results to Positive or Negative. This can be refined in a few ways: Positive can be offset with characters, e.g. ***Positive, or the pair can be abbreviated to ***POS and NEG. One problem I have run into with fine-tuning what these look like on native reports is that formatting is usually lost in HL7 translation to another system such as Athena or the T-System. Something to think about.

Other examples: blood bank types, RBC morphology choices, urinalysis dipstick values, Gram stain morphology. Most information systems out there have this capability to limit input in the sense that only a few selections are available to result the test. From an IT perspective this makes sense when assigning SNOMED codes and uploading discrete data to central database systems. The fewer results listed in “comments,” the better.

What about your lab? Have you limited input, and does this improve quality?

NEXT: Moving That Needle

Are You Getting Paid?
April 4, 2016 4:53 PM by Scott Warner

The laboratory is a unique clinical department. The possible tests and their associated billable codes that are routinely ordered day in and day out can number in the thousands or even tens of thousands. The big moving target is referral lab testing, which may change according to where it is forwarded, reflex testing, or method changes. I spend a significant amount of time comparing bills to the charge master. If we don’t get paid, we can’t keep the doors open forever. Are you getting paid?

Some of the issues that make laboratory work complex include:

  • Associated charges that change. Some associated charges are routine, but many reflex or are conditional. This can be a real problem with referral or esoteric testing.
  • Charges that have to be bundled with other charges. Many physicians have ordering patterns, but there are only so many patterns that can be learned. The sheer volume of testing and ordering tactics makes this a constantly moving target.
  • Late charges. Many late charges are related to testing that is performed a day or two after collection, such as microbiology cultures or pathology specimens. Still others are caused by invoices with additional charges.
  • Incorrect CPT codes. CPT codes change year to year, but they also change when methods change in laboratories. This is another constantly moving and often confusing target.

As a manager, you can tread dangerous ground stressing accurate billing. What we do is for the good of our patients. But if you don’t get paid, you can’t keep the doors open. And if you don’t charge for everything accurately, you won’t be able to justify the staff you have. Another lab with the same workload that bills for everything will look better.

I’ve puzzled over this idea of charge reconciliation - how do we know we have charged for everything? - for the past few months. It is not an easy problem for a laboratory to solve. Laboratory testing is complex.

One crucial strategy is to make sure your referral testing - easily the biggest target in terms of change - is accurate. Ask for an electronic version of your bill in Excel. From there it’s easy to compare it to a download from your information system to make sure you’re billing accurately.

Another strategy is to look at duplicates, missing tests, and things that don’t look right from day to day. This is extraordinarily difficult and time consuming in anything but the smallest labs, even more so when short staffed. I’ve created a SQL report that pulls out CPT codes and totals them by test per account number; anything that could be a panel missing CPT codes or a duplicate is flagged as an error. It’s a great report, but I have a history of computer programming, and I’ve no idea how any other lab would do such a thing. But we have to get paid. And if you don’t bill, you won’t get paid.

NEXT: Limit Input

Simplify Urinalysis
March 25, 2016 6:40 AM by Scott Warner

Urinalysis is one of the simpler screening tests laboratories perform. Modern dipstick readers have standardized and simplified the chemical analysis of urine. But what about the microscopic? Shouldn’t that also be simplified?

Beckman Coulter and Sysmex offer instrumentation that performs cell counting. For many if not most labs, urine sediment is examined under light microscopy. White and red blood cells are reported as an average per HPF.

Sounds simple enough. Count the number of white cells in 10 fields, move the decimal point, and report that.

In fact what labs do is much more complicated and more difficult to reproduce with any precision. Cellular elements are reported as ranges that imply more precision than counting or estimating can deliver. In recent data mining I discovered wide variation in reported ranges, e.g. 10-12, 15-17, 20-25, 50-60, 70-80, etc., implying the method is far more precise than it can possibly be. This reflects tradition more than accuracy. More importantly, does it meet the needs of the ordering physician?

At a meeting I asked physicians about clinical efficacy. One replied, “It’s pretty straightforward. I want to know are there a few, some, a lot, or too many to count?”

So we changed reporting as follows:

WBC = 0, <5, 5-20, 20-100, >100

RBC = 0, <3, 3-20, 20-100, >100
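
Expressed as a little function (AutoIt syntax, using the WBC cutoffs above), the buckets look like this:

; Bucket an average WBC/HPF count into the categories above
Func WbcBucket($iCount)
   If $iCount = 0 Then Return "0"
   If $iCount < 5 Then Return "<5"
   If $iCount <= 20 Then Return "5-20"
   If $iCount <= 100 Then Return "20-100"
   Return ">100"
EndFunc

MsgBox(0, "WBC", WbcBucket(37))   ; reports "20-100"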

While a cutoff of 20 cells per HPF is arbitrary, it is also practical. It’s easy to count 20 per HPF by counting the average per quarter of a field. And while 100 per field can be estimated accurately by counting cells along an arc of the field, a guesstimate works just fine for clinical purposes.

It’s nice to have more reproducible results, even if some physicians may not notice. More importantly, consistent, reproducible results improve quality. To put it another way, when techs report microscopic elements with a wide degree of variation this dilutes quality. A physician may understand the imprecision of a method, but he or she will also guess that finer distinctions won’t matter. “A few, some, a lot, or too many” is what they will read when they see the report. Why not make that easier for them?

Finally, while it may seem that 20-100 is too broad, it is consistent with the implied imprecision. Treating the width of each range as roughly four standard deviations, 5-20 implies a coefficient of variation of about 30% (SD 3.75 on a mean of 12.5) and 20-100 about 33% (SD 20 on a mean of 60). Thus what we claim is consistently precise, and it is easier to estimate 5-20 cells than any finer interval between 20 and 100, e.g. 50-75. The 5 and 3 cutoffs relate to reflex culture criteria and microhematuria, respectively.

Of course, arbitrary limits are just that. An implication that 30, 40, 60, and 80 mean the same thing to a clinician may or may not be true. There is also great variation among doctors. That’s why I asked first.

NEXT: Are You Getting Paid?

About this Blog


    Scott Warner, MLT(ASCP)
    Occupation: Laboratory Manager
    Setting: Critical Access Hospital