The Big Disconnect
Over the weekend I had the opportunity to visit with some of my peers. One of the things I brought up was the difference between animal models and what we do in the clinic. There is a big difference, yet much of our new evidence is based on animal models. Obviously you can't take a group of human subjects, give them strokes, and see what happens the way you can with animals. Nor can you take stroke patients and place them in an environment where they either press a lever or go without food.
Therein lies the problem. In a research study the rat has nothing better to do than push the lever so it can eat. It may push that lever a thousand times a day. If the resistance is increased, the rat pushes harder. That situation is about as far removed from the clinic as you can get. The result is research telling us what works with rats under those conditions. The same disconnect is present, to some degree, in almost all research.
I learned in my various conversations that many therapists are unaware of the current research, period. Those who were aware didn't realize how the researchers manipulated those rats. Nor did they realize the rats used had clean strokes, meaning only a small portion of the brain was damaged. That isn't the real world. In the real world patients have large strokes involving many areas of the brain. They present with multiple functional deficits. They aren't going to improve by simply pushing a lever. The same is true of other diagnoses. TBIs are rarely clean. Degenerative diseases such as MS are known for their unpredictable presentation.
This creates another problem. The research we're using for evidence only partially translates to our patients. Show me a study where the rat had a huge MCA infarct with aphasia and hemi-neglect and I'll change how I think. A whole lot of rats would be starving if someone did that. Yet those are our patients. Our patients don't have little, clean strokes. They have large, messy ones. The studies are based on thousands of repetitions in one day. Sometimes I'm lucky to get 10 repetitions in an entire treatment session.
Clinicians are running into one set of problems. Researchers seem to be looking at another set. There doesn't seem to be much communication between the two. One issue is the lack of agreement on what is important to study. Researching theories is good, but we also need research on practical treatment applications. The stroke team I work with is heavily involved in research. Currently they have five studies running, every one of which looks at a different aspect of reducing the damage following a stroke. In that area, at least, there is universal agreement: limiting damage after a stroke is a good thing.
I'm not sure if it's a question of priorities, special interests, or a lack of fundamental groundwork research. Physical therapy seems to be going about this backwards. There is a study here in Houston looking at robotics for gait training. I find it ironic that the robotics existed before the studies supporting their use appeared. We need to start looking at things before we implement them, build them, or encourage their use. Having the supporting evidence in advance would reduce the disconnect. It's one thing to look at a novel use for an existing piece of equipment. It's another to put something into use before the research exists. For what it's worth, the research I've seen finds robotics no more effective than body-weight-supported treadmill training.
I want to do research. When I do, I'll have the benefit of having been in the clinic for many years. Many researchers lack that period of clinical practice, and I don't think you can imagine what it's like in the clinical setting if you haven't been there. At the same time, most clinicians have no idea what is involved in research. Using evidence-based practice is starting to change that, but reading the literature isn't a substitute for actually doing research.