Stepwise Success

How Do You Work?
November 26, 2014 6:15 AM by Scott Warner

As the holidays approach I’m reminded how families get along (or don’t). Our work families are no different. Coworkers with different styles and priorities forced to work together can test patience and professionalism. So long as the work gets out accurately and on time, the ends may justify the means, but we’ve all worked with a tech (or are one!) whose work habits grate on the nerves.

Here are a few morphotypes:

  • The Slob - this person has junk everywhere: tubes, caps, labels, scraps of paper, etc. Worksheets are scribbled on, written on sideways, and filled out haphazardly. Yet when you ask what is going on, he or she knows the status of everything.
  • The Neatnik - the exact opposite of the Slob, meticulously aligning everything, making sure all junk is cleared away, and dotting all i’s and crossing all t’s to an OCD degree. Likely to arrive at any bench and start immediately rearranging it. Also likely to arrive at the Slob bench and just straight arm the whole lot into a bin.
  • The Hoarder - closely related to the Slob, the Hoarder never chucks anything. (The Slob cleans house once in a while.) Every nook and cranny is packed with something: pens, magazines, caps, samples, clips, cheat sheets, phone lists, package inserts, tools, extra tubing, “extra” stuff just in case, etc. Their free bench space is a tiny little square, yet they manage to get everything done on time.
  • The Paper Freak - has to write everything down and leaves notes everywhere. Instruments are plastered with paper, computer monitors have a halo of notes, and cheat sheets are taped to procedures and kits. This isn’t a bad thing in itself; it’s just impossible to know what to read first.
  • The Talker - while there are people who gab and gossip, this person talks incessantly while working, muttering to themselves to the point of distraction. You just hope they don’t do it around patients.

Dull would our workplace be if we were all the same. And surely each of us annoys someone. How do you work?

NEXT: It’s What We Do

0 comments »     
Action Meetings
November 21, 2014 6:07 AM by Scott Warner

Every meeting takes at least an hour, not including prep, finish, and homework time. Many meetings run over. They are, as Charlie Kim describes in the Huffington Post, red wine discussions (might as well drink, because nothing will be accomplished) or lectures (why attend at all?).

I’ve been working hard to break that, at least in the lab. Instead of monthly staff meetings, we do this:

  • Daily Time Out - following rounding, we have a standing meeting to discuss issues on the floor and ED and anything else going on in the lab e.g. instrument repairs, product recalls, procedure changes, etc.
  • Monthly Action Meeting - once a month we have an hour-long meeting with an agenda that is our action plan from the previous meeting. The goal is to improve efficiency or service. Each meeting we look at HCAHPS scores; other than that we focus on how to make real improvements.

These action meetings result in decisions that directly affect how work is performed on the bench. Recently, for example, we decided to change the time of day that certain reports are run to reduce afternoon bottlenecks. Not only does this involve everyone in decisions, the next month the action is reviewed to see if it’s working. If it isn’t, another decision is made.

I don’t know if this would work on a broader scale, but I don’t see why not. Traditional staff meetings consisting of reviewing minutes, old business, to-do lists, attendee reports, etc. are boring to run and torture to sit through. But changing this culture is tough. Most people who go to meetings hate them but assume they are what they are.

I can’t imagine anything would get done on the bench if techs had to review the minutes of the last decision, old business, to-do lists, etc. before each decision was made. Instead, of course, the daily work is done while dozens of decisions are made on the fly. And if those don’t work, different decisions are made. It’s time we pushed action meetings up the ladder. Isn’t it?

NEXT: How Do You Work?

0 comments »     
One Size Never Fits All
November 17, 2014 6:03 AM by Scott Warner

One size fits all is an advertising gimmick, but we all know it isn’t true. Shelves are too high, microscopes are too low, and there is never enough room for paper. But the reality of our workspace is that it has to be designed to fit most people. This can be difficult in today’s laboratories.

For example, according to OSHA a computer monitor should be placed at least 20 inches away so the top of the screen is at or below eye level. That’s fine if you’re a twenty-something telemarketer, but a data manager monitor attached to an instrument will often be to one side or at an angle. And what about those huge monitors, or those users with bifocals or trifocals? According to one site, a monitor will need to be adjusted in both cases so the user’s neck can remain in a neutral position.

If your laboratory is like most, there’s a wide age gap between existing staff approaching or exceeding retirement age and new graduates in their twenties. There is no one size fits all computer station.

And as it turns out, chairs, countertops, drawer handles, and everything else we use on a daily basis is designed according to anthropometric tables. Anthropometry (Greek for “man measure”) is the science of measuring size and proportions of the human body. As another site points out, designing a space for an average body only accommodates half the population; the best solution is to design adjustability into any workspace.

“Average” should include clothing and personal protective equipment, too. For example, fluid-resistant coats and face shields may restrict movement and field of vision, reducing the size of a work area.

Finally, there are gender differences to consider in not only anthropometrics but physical workload capacity. One study in The Spine Journal concludes, for example, that men generate greater compression loads on their spines during equal lifting. Women are at greater risk for injury, however, especially when lifting is asymmetric or heavy. This single study suggests there may also be no one size fits all for occupational ergonomic issues.

NEXT: Action Meetings

0 comments »     
Do Disclaimers Add Value?
November 12, 2014 6:44 AM by Scott Warner

Our tendency to comment results with disclaimers is strong. Examples:

  • Reporting pathogens in a urine culture with many skin flora and adding “possible contamination”
  • Reporting a potassium on a hemolyzed sample and adding “hemolysis may increase results”
  • Reporting a WBC differential and adding “fibrin strands seen on peripheral smear”

There are a few schools of thought here. One is that if only one result is affected, e.g. the potassium on a hemolyzed chemistry panel, the rest of the panel may still provide value to the physician. Another is that bench techs should disclose problems that may affect results to protect themselves legally and ethically. And yet another is to let the physician decide whether to use the results or ask for another specimen.

The concept of “value added” is defined as “The enhancement a company gives its product or service before offering the product to customers.” It creates a competitive advantage by bundling or offering features that customers want.

Flexible report formats, report distribution options, and custom panels are all examples of value-added laboratory services. All things being equal, physicians want the cheapest, fastest, and most reliable testing performed. They rely on laboratories to collect and process samples using state of the art techniques to provide accurate results. They have to, when you think about it. Laboratory medicine has dramatically increased in complexity in the last few decades, something family practice or ED physicians can’t keep pace with.

And we should keep in mind there is nothing unique about our services. Except for STAT situations, a physician will leave the choice of testing site to the performing lab or specify a lab to use. One doc’s idea of “value added” may be different from the next doc’s or from the laboratory’s.

The value of our product is reliability. A disclaimer detracts from that value except in emergent situations where a new specimen can’t be obtained. Even worse, a disclaimer may not travel with a result if it is later viewed in cumulative format or mined. Despite best intentions to be professional and give a physician what he or she wants, do disclaimers add value?

NEXT: One Size Never Fits All

0 comments »     
Smartphone Infections
November 7, 2014 6:01 AM by Scott Warner

Do you remember other kids eating dirt when you were young? It was commonplace to make mud pies, jump in puddles, and put tadpoles in pockets. We all played outside in dirt and grime, bit fingernails, ate baloney and cheese sandwiches without hand washing, drank from the garden hose, and played on the floor in retail stores. It never occurred to any of us that we could get sick, and if we’d been told we all would have laughed, anyway.

Today we’re cleaner and filthier than ever. Everywhere I go I see alcohol based cleansers, hand washing signs, and wipes to clean, scrub, and disinfect our hands and fingers and everything we touch. People seem angry that they have to touch anything touched by anyone else. This seems bizarre to me, but that’s beside the point.

In the midst of all this many of us carry technology that might change how we do everything: a smartphone.

Smartphones are being used more and more by physicians, presenting not only security risks but also infection control issues. These aren’t new. As one California hospitalist points out, “we’ve had infection control issues with stethoscopes since they were invented.” Smartphones aren’t the only surfaces that aren’t religiously cleaned.

In one study 200 smartphones were swabbed, including 50 in a surgical unit. Sixty percent showed contaminants, 20 phones had negative culture growth, and no resistant or pathological strains were recovered. It turns out that proper hand hygiene makes smartphones safe, just like many medical devices.

Interestingly, the site EngagingPatients.org suggests that patients use smartphones to notify the staff, adding, “there are progressive hospitals that are offering something like this for their clinicians, but very few offer this option to its patients. This is very interesting given the literally tens of thousands of apps pushed at patients for use outside the hospital.”

Are we too paranoid about what may become the dominant technology used by line staff? Smartphones instantly connect us not just to an information grid, but to each other and patients. And when was the last time you saw someone clean a keyboard?

NEXT: Do Disclaimers Add Value?

1 comments »     
Small Town Traps
November 3, 2014 6:00 AM by Scott Warner

I once watched a fascinating experiment on television in which a stranger pretended to be sick on the street of a big city vs. a small town. City dwellers ignored the man, who lay on the ground writhing in pain, but almost everybody in the small town stopped to help. It’s part of the magic of small towns, that our tribe is small, our interests are local, and our resources are limited. Most people are known by most people, which can be problematic in a rural hospital.

For example, we have to be mindful of confidentiality traps. In a metropolitan hospital it’s entirely possible that staff would know almost none of the patient names, but in a rural hospital exactly the opposite is true. There is a high expectation of privacy in hospitals, even more so in the countryside. It’s tempting to see an acquaintance outside of work and ask about his or her condition or a family member’s condition. But we have to maintain confidentiality.

A more dangerous trap is avoiding positive patient identification because “we know our patients.” I’ve heard this mantra repeated as a virtue many times in my career in small hospitals, and it’s never made much sense to me. It isn’t just a name that we verify, after all, but a second identifier, physician, and perhaps the procedure. We must know that we have the right patient and the right order.

For example, we ask every patient to state his or her full name, date of birth, and the name of the physician who is to get the report. Even when we know the patient personally, we have found spelling and other errors. Sometimes, as when a person remarries, the name is completely different.

The laboratory seems alone in this concern. As a patient I was rarely asked my name, much less my date of birth, and my wristband was seldom checked by non-laboratory employees. Perhaps we don’t want to offend our neighbors, want to create a more homey atmosphere, or don’t want to appear stupid. All of which is irrelevant. Positive patient identification is an industry standard that should be followed.

NEXT: Smartphone Infections

0 comments »     
Ebola is a Wake Up Call
October 29, 2014 6:00 AM by Scott Warner

The latest personal protective equipment guidelines from the CDC for Ebola emphasize training in donning and doffing PPE, no skin exposure, and a buddy system to make sure the process is followed. As their site states, “Focusing only on PPE gives a false sense of security of safe care and worker safety. Training is a critical aspect of ensuring infection control.” And while Ebola is an unusual threat, this principle applies to everything in the laboratory. It should be a wake up call.

Laboratories deal with many threats, such as multidrug resistant organisms, fungi, parasites, and viruses. Many of these are bloodborne pathogens like Ebola. Are we as careful with these specimens or as careful when we don’t know the status of the patient? Healthcare workers sometimes don’t wash hands long enough, snap on and off gloves, don’t use hoods and shields, leave skin or mucous membranes exposed when handling materials, or discard PPE inappropriately. All specimens are potentially infectious, and many infections are deadly.

While PPE is used to protect us, we use it to protect our patients, too. Isolation precautions can also be carelessly followed, increasing the risk of infection. In your facility, for example, are all physicians trained and observed in donning, doffing, and use of PPE when entering isolation rooms? Are family members and visitors trained, observed, and monitored? I can’t imagine busy nurses or infection control practitioners have the time.

As our hospital’s Safety Officer I work closely with our infection control nurse, and we’ve discussed hand washing stations for visitors. There is a cultural tendency to treat the public as guests in these settings instead of a potential threat, as though forcing them to follow protocol is a breach of trust. We all need to change this mindset.

I recall the early days of HIV when an ugly sense of paranoia existed. HIV was a wake up call to realize that many agents, such as hepatitis, are more common and equally infectious. Laboratories are much safer now. Ebola is another wake up call that we shouldn’t miss.

NEXT: Small Town Traps

0 comments »     
Vanity Metrics
October 24, 2014 4:18 PM by Scott Warner

Data is what we do. From collection times to bread and butter lab test results to quality control to maintenance to report distribution and all that’s in between, laboratories collect and document more data than most departments. One of the challenges in quality assurance is choosing what to measure.

The push from the QI (or whatever the acronym du jour is) mavens is to improve. Collect the data, make a change, measure the impact, rinse, repeat. That’s easy, because it’s how laboratories are designed. These cycles are built into what we are, again perhaps more than most departments. Technology and data are constantly changing.

I’ve seen a lot of numbers bragged about that amount to little more than statistical variation, such as a minor drop in injuries, falls, or turnaround time. Most trained technologists are very good at spotting genuine shifts and trends, although the cause isn’t always obvious. A less obvious but more dangerous tendency is to use vanity metrics.

A vanity metric is “feel good” data that serves no constructive purpose. A common example is the number of visitors to a website, which may or may not correlate with a business’s bottom line but feels good to talk about. Internet startups commonly cite them -- registered users, downloads, etc. -- but they are not as useful as active users weighed against the cost of acquiring those customers. The latter are described as actionable metrics.

The late Steve Jobs was apparently fond of using vanity metrics to bend reality and motivate employees in the face of insurmountable odds, what Bud Tribble at Apple dubbed Jobs’ “Reality Distortion Field” back in 1981.

But we’re treating patients and not selling pretty hardware. It might motivate lab employees to think a turnaround time has decreased or volumes have increased, but our reality hinges on producing the highest quality at the lowest cost. As such, we need to avoid measuring useless, touchy-feely statistics that don’t tell us what we need to know: how to improve patient care. That usually involves removing the rose-colored glasses and putting on the black and white safety goggles.

NEXT: Ebola is a Wake Up Call

0 comments »     
Distractions Cause Errors
October 20, 2014 6:01 AM by Scott Warner

A study at Michigan State University found that three-second interruptions doubled error rates, and longer interruptions increased them further. Another study in Australia found that nurse medication errors increased twelve percent for every interruption. And a study at Oregon State University showed that 8 out of 18 surgical residents made serious errors when distracted.

In the laboratory we think of distractions as part of the job. But distractions cause errors.

I’ve never worked in a lab where I wasn’t constantly interrupted by telephone calls, buzzers, STAT work, and chit chat. Most labs are chaotic, noisy places geared around getting work done with minimal error. Yet distractions that cause errors are built into the system.

The Wall Street Journal reports that distractions are increasing in the workplace, partly because of a reliance on internal email and meetings to communicate. One professor at the University of California at Irvine found that workers typically have three minutes of uninterrupted time on a task.

Bench techs loading specimens, monitoring instruments, and verifying results who are constantly interrupted and distracted may have less time than that. Many tests are built around a prep-wait-offload cycle, and those wait times vary from a few minutes to hours. A distraction at either end can be disastrous.

Not all distractions are equal. Many techs will cite telephone calls, but any event that derails a planned prep-wait-offload cycle counts. This includes instrument problems, critical values, failed quality control, nosy colleagues, and even bathroom breaks. Once away from a task, it is even easier to be further distracted and forget one’s place.

While everyone works this way, that doesn’t make it right, especially if it causes serious errors. We all are too busy on the treadmill to step off. It’s a logical quality improvement goal in labs, but I’ve never seen a process designed with a “no distraction” ground rule. We all assume it’s impossible.

But why? It’s not an unreasonable thought. Even if it is possible, it isn’t valued, despite the evidence that distraction contributes to medical errors. Phones ring, pagers beep, timers ding, busybodies chatter, and have you checked your email today?

NEXT: Vanity Metrics

1 comments »     
For the Best Results, Hire Professionals
October 15, 2014 6:03 AM by Scott Warner

The late oil well firefighter Red Adair said, “If you think it’s expensive to hire a professional to do the job, wait until you hire an amateur.” During staffing shortages or budget crunches it can be expedient to hire a warm body but disastrous in the long run. Amateurs often don’t know what they don’t know, think they know more than they do, and lack an ability to self-correct behavior that all professionals possess. Amateurs just don’t “get it.”

Anyway, it’s a great quote. While it’s easy to toss around labels like “professional” and “amateur,” what do they mean?

Merriam Webster defines professional as “characterized by or conforming to the technical and ethical standards of a profession” and “exhibiting a courteous, conscientious, and generally businesslike manner in the workplace.” An amateur is “a person who does something poorly; a person who is not skillful at a job or other activity” and “one lacking in experience and competence in an art or science.”

Sure, amateur isn’t a compliment, but we hire plenty of them. I’ve seen managers hired because they applied (one euphemism is “stepped up to the plate,” like that’s a qualification for the job), techs hired out of desperation, and travellers hired without a lot of searching or screening. Some -- not all -- exhibited amateurish skills in making decisions, leadership, bench competency, and ethical behavior.

We all recognize an amateur decision or level of skill, yet it’s a taboo subject. I even hesitate to write about it. But I’ve worked for and beside professionals and amateurs, and there is a difference between the two. This is also distinctly different from an inexperienced person, who should not be promoted or placed in a position where he or she can appear to be an amateur.

I’ve never seen it or asked it during an interview, but questions like “Do you consider yourself a professional, and why,” “How do you define professionalism,” and “Give examples of your professional behavior” all work. Assuming someone is professional because he or she has a degree or resume can be an expensive mistake.

NEXT: Distractions Cause Errors

2 comments »     
IG# and Sepsis
October 10, 2014 6:15 AM by Scott Warner

Our current hematology analyzer, a Sysmex XT-1800i with a 5-part differential, was a big step up from a Sysmex K4500 with a 3-part differential. I remember arranging a conference call between the bean counters and the pathologist to explain why the difference was important. Since then we have successfully eliminated percentage differential reporting and bands on scans, and reduced manual differentials to one or two a day. Automated counts have arrived.

I’ve had my eye on analyzers for a year or more, and it’s time to upgrade again. The new analyzers perform a 6-part differential, adding an immature granulocyte (IG#) parameter. This has been available on the 1800 under the “Research” tab, but it isn’t anything we have used. Reporting it routinely will be useful.

In cases of SIRS (systemic inflammatory response syndrome), the IG count can differentiate between infected and non-infected patients; one source cites a sensitivity of 89.2%. Another study points out that the correlation between bacteremia and the IG count is high, correctly noting that the so-called “shift to the left” is difficult to measure using manual differential techniques. A hospital in St. Louis found a significant correlation between manual counts that included myelocytes, metamyelocytes, and promyelocytes and the IG parameter on the Sysmex XE-2100.

I’m evaluating two instruments: the Sysmex XN-1000 and the Beckman-Coulter DxH-600. This is very early in the process, but my understanding is that the latter does NOT report an IG parameter. If anyone has experience with these two analyzers, please comment below. I’d love to hear about it.

My point isn’t to proselytize so much as observe that laboratory medicine follows this steady progression toward better accuracy and precision using automation. Tests have gotten better over my career, and the jump to reliable cell counts is huge compared to the old manual dilution - stopcock days on a Coulter counter. And that was really cool compared to manual counts using spit tubes (yes, they were as disgusting as that sounds). Choosing technology is about recognizing that growth curve as much as anything else. The IG parameter is just one example.

NEXT: For the Best Results, Hire Professionals

0 comments »     
The Cult of Busy
October 6, 2014 6:00 AM by Scott Warner

A hospital’s culture defines how it responds to customers and crises, whereas a cult is defined by the dictionary as “a group or sect bound together by veneration of the same thing, person, ideal, etc.” Cults have ideology, rituals, and symbols. Most hospitals have a cult of busy.

  • Ideology - the belief that “busy” means “important,” no matter how much important work actually exists
  • Rituals - tardiness, complaining, not taking work breaks, maintaining a to do list, constantly checking a pager
  • Symbols - overflowing email box, piles of paperwork, unfinished projects

The NZ Herald points out that a cult of busy could be a smokescreen for other issues: fear of getting fired, lack of direction, lack of skills, or boredom. Being busy sends a message that you matter more than those who aren’t. But there’s a difference between working harder and working smarter.

Author Tim Kreider correctly writes in The New York Times that whining “Busy!” is “a boast disguised as a complaint.” This is behavior we have all chosen as a reassurance of self worth, perhaps; if we are constantly busy we must matter. “Obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day,” Kreider writes.

This isn’t to say techs don’t work hard. Lab techs work hard! Everyone has days that are busier than others. Departments can be understaffed or overwhelmed with workload. Doing more with less is always a challenge. But serious professionals approach this with a “How do we get this done?” attitude.

Trying to do everything at once to be “busy” is a different attitude. We’ve all met these people who chronically overbook their lives, subsisting on caffeine, sugar, and bragging to anyone within earshot how busy they are, how much work they haven’t yet done, and how little time they have. They always find time to complain, don’t they?

When that person is your boss it’s a nightmare. Maybe you work, or have worked, for one of these cult leaders. If so, please share your stories in the comments.

NEXT: IG# and Sepsis

0 comments »     
More Sample Lookback
October 1, 2014 6:00 AM by Scott Warner

In 2011 I blogged about using a binary search algorithm to find a point of failure when performing a sample lookback with a large number of samples. In dealing with sample lookback and revising our own policies since then, we’ve hit a few snags:

  • How should we account for other instrument variables e.g. scheduled maintenance?
  • How should we handle qualitative testing? Example: microbiology biochemical testing.
  • How should we handle blood bank testing?

Most sample lookbacks occur in chemistry, but many times they aren’t performed at all. Techs used to repeating a control may not think to perform a lookback if they need to replace or rehydrate reagent to obtain acceptable QC. But it’s a good analytical phase quality indicator, a chance to educate techs, and something quality control records can be audited for.

If quality control is performed weekly or monthly on qualitative kits, a sample lookback may be impossible. In these cases values have to be pulled and a decision made regarding recollection and retesting, essentially an internal recall. It can be difficult to know how significant these failures are, because there often isn’t enough data.

Blood bank lots are spot checked with daily quality control; new vials opened may not be checked unless it’s a new lot or a new day. Tube testing reagents are reliable or contain procedural controls, but most techs I’ve known are nervous enough in blood bank that any lookback would entail repeating all testing. I’ve never seen a lookback in blood bank, but I do occasionally see product notices about reaction strength e.g. we might see weaker reactions with control cells for a particular lot.

The point is there is no “one size fits all.” A technologist suspecting a method failure has to investigate and decide if and when the method failed. That takes time. Maybe a lot of time. If one is working alone or in a lab that is short staffed, time is a luxury.
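
The binary-search idea from that 2011 post at least keeps the retesting manageable. Here is a minimal sketch in AutoIt (to match the scripts elsewhere on this blog); the sample data and the “agrees with original” flag are made up for illustration -- in practice that check means pulling the sample, retesting it, and comparing the repeat to the originally reported result:

; Minimal sketch of a binary-search lookback. $aAgrees is made-up data:
; 1 means a repeat of that sample still matches its original result,
; 0 means it no longer does. Samples are in run order since the last
; acceptable QC, and at least one is assumed to disagree.
Local $aAgrees[8] = [1, 1, 1, 1, 0, 0, 0, 0]

ConsoleWrite("First affected sample index: " & _FindFirstFailure($aAgrees) & @CRLF)

Func _FindFirstFailure(ByRef $aFlags)
    Local $iLo = 0, $iHi = UBound($aFlags) - 1, $iMid
    While $iLo < $iHi
        $iMid = Int(($iLo + $iHi) / 2)
        If $aFlags[$iMid] Then
            $iLo = $iMid + 1   ; repeat still agreed here, so the failure came later
        Else
            $iHi = $iMid       ; repeat disagreed here, so the failure is here or earlier
        EndIf
    WEnd
    Return $iLo   ; index of the first sample run after the method failed
EndFunc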

I’d like to see information systems or middleware smart enough to analyze and perform virtual sample lookbacks. Does such software exist?

NEXT: The Cult of Busy

0 comments »     
Virtual Keystrokes
September 26, 2014 6:00 AM by Scott Warner

In my last blog I said computers are stupidly reliable. They do whatever they are told, over and over. And they don’t get bored or make mistakes. It’s easy, for example, to create little programs that send keystrokes to applications. I use a freeware program called AutoIt for Windows (there are others, such as AutoHotkey) to create simple and effective time savers, including:

  • Print mailing labels by linking a turnaround time report to an address database
  • Change thousands of item master names to “Tallman” lettering
  • Standardize associated charge item master names
  • Add thousands of blank templates to the item master to add new items quickly
  • Keep a terminal alive that switches between two system status screens every 2 minutes

... and many others. While I have a programming background in a half dozen or so languages, you don’t need to understand a lot of programming to use a tool like AutoIt. Its BASIC-like syntax is straightforward, and Notepad is all you need. These “virtual keystroke” programs can save time and money.
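
For instance, the “keep a terminal alive” item above boils down to a short loop. A rough sketch follows; the window title and the keystroke are placeholders, not a real system status screen:

; Rough sketch of a keep-alive script. The window title and the key that
; flips between status screens are placeholders for whatever your system uses.
While 1
    If WinExists("System Status") Then
        WinActivate("System Status")
        Send("{F2}")          ; hypothetical key that toggles the status screen
    EndIf
    Sleep(2 * 60 * 1000)      ; wait two minutes before switching again
WEnd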

To give it a try, download and install AutoIt from the “Download AutoIt” button. Don’t worry about the add-ons and tools; you won’t need them.

AutoIt has many built in “functions,” bits of code that do one of three things: run a task, change a value, or return a value. You can write your own functions -- effectively expanding the AutoIt language -- and while cool, that’s for geeks. Let’s face it, it’s silly to spend weeks learning something that will save minutes. You need a fast solution.

Their documentation contains good instructions. But here’s a trivial example from one of their tutorials, which you can save as a text file called npad.au3, that sends a message to Notepad:

Run("notepad.exe")
WinWaitActive("Untitled - Notepad")
Send("This is some text.")

From here it’s a hop and skip to automating Excel, maintenance programs, and your laboratory information system. The Send function is powerful, sending any keystroke or combination of keystrokes. (You can send mouse clicks, too.) That’s what I call smart computing.
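
To give a flavor of that, here are a few more lines in the same vein as the Notepad example; the keys and coordinates are arbitrary, not any real LIS screen:

Run("notepad.exe")
WinWaitActive("Untitled - Notepad")
Send("CBC{TAB}STAT{ENTER}")     ; plain text plus special keys like Tab and Enter
Send("^a")                      ; Ctrl+A, a key combination (selects all the text)
MouseClick("left", 200, 150)    ; a left click at screen coordinates 200, 150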

NEXT: More Sample Lookback

0 comments »     
Computers are Stupid
September 22, 2014 6:00 AM by Scott Warner

Back in the day we imagined computers were smart. In a 1964 Twilight Zone episode called “The Brain Center at Whipple’s,” a CEO who heartlessly replaces workers with robots is himself replaced by Robby the Robot from the 1956 classic Forbidden Planet. From Capek to H.A.L. to Nomad to Tron’s Master Control -- science fiction is a junkyard of them -- machine intelligence is smarter than us and invariably malevolent, a paranoia culminating in the 1999 thriller The Matrix.

All fiction, luckily. Fact is computers are stupid.

Merriam-Webster defines “stupid” as “not intelligent: having or showing a lack of ability to learn and understand things.” (The fourth definition includes “exasperating,” and they are that, too.)

This is important to remember, since we live in a time when staffing shortages and the expansion of information technology give computers greater authority in laboratories. Autoverification, report distribution, complex reflex rules, and middleware save time by performing routine, mundane tasks. Software increasingly aspires to the role of laboratory assistant. But that doesn’t make it smart.

Fast Company lists these four things that we do better:

  • Unstructured problem solving - novel and unusual problems are extraordinarily difficult to solve with software. This won’t change in my lifetime, if ever.
  • Acquiring and processing new information - we may or may not be approaching “the Singularity” -- a fancy name for smart -- but until then computers gather data without context.
  • Physical work - many tasks we find trivial, improvised or not, are impossible for computers.
  • Being human - obvious, but empathy and compassion are essential to any healthcare mission.

Still, if there is anything a computer does, it typically does it better than us. Computers are known for speed, accuracy, and reliability. Many laboratory tasks are algorithmic; our skills will have to evolve toward building these in software. This includes Excel spreadsheets, middleware and data manager rules, custom scripts that automate simple tasks, and much, much, much more. IT professionals don’t have the laboratory knowledge to make it happen on their own.
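
To make “algorithmic” concrete, even an autoverification-style rule is just a handful of explicit conditions once it’s written down. Here is a toy sketch, with limits invented purely for illustration (not validated cutoffs):

; Toy autoverification-style rule written as a function. The cutoffs and
; delta limit below are invented for illustration only.
Func _AutoVerifyPotassium($fResult, $fPrevious = -1)
    If $fResult < 2.8 Or $fResult > 6.2 Then Return False   ; hold critical values for a tech
    If $fPrevious >= 0 And Abs($fResult - $fPrevious) > 1.0 Then Return False   ; hold failed delta checks
    Return True   ; otherwise release automatically
EndFunc

ConsoleWrite(_AutoVerifyPotassium(4.1, 4.3) & @CRLF)   ; True: releases
ConsoleWrite(_AutoVerifyPotassium(6.8, 4.3) & @CRLF)   ; False: held for review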

Are lab techs being trained for a future in which stupid computers will have to be used intelligently?

NEXT: Virtual Keystrokes

1 comments »     


About this Blog


    Scott Warner, MLT(ASCP)
    Occupation: Laboratory Manager
    Setting: Critical Access Hospital