How Do We Know What We Know

Considering how we know what we know can help massage therapists improve their client practice skills.

By Joseph E. Muscolino, May 16, 2011

How do we know what we know? This question may seem strange. After all, most of us are probably more concerned with the knowledge we acquire than with how it's acquired. But examining this question isn't just an exercise in abstraction; it can improve our client practice skills by helping us choose which techniques we want to learn. Our approaches to acquiring knowledge can be divided into four models: 1. accepting knowledge imparted by an authority, 2. gleaning knowledge from research, 3. testing new knowledge in our own practice, and 4. evaluating new knowledge against principles of anatomy and physiology that are already understood.

Authority Model

The authority model involves knowledge being imparted by an individual whom we respect and place in a position of authority. This model is probably the most common approach to learning, and it begins in school, where, as empty vessels, we sit and try to absorb as much of our teachers' knowledge as possible. You might also know this method of learning as "sage on the stage," because the teacher is the sage standing on the stage in front of us. Sage on the stage, or perhaps sage on the page, can also describe textbook authors.

The authority model of learning usually continues after graduation. As practicing therapists, we subscribe to magazines devoted to our field and read articles by more sages. And we further our knowledge base by attending continuing education workshops, where the instructors are sages presenting their techniques for us to learn.

The authority model depends on the idea that wisdom is passed from mentor to pupil, and we are enriched. However, there is a three-fold danger to this model. First, it assumes that each authority is truly a knowledgeable and wise expert, and this isn't always the case. As brilliant as some sages might be, there might be aspects of their knowledge base that are lacking. Or the perspective they present might not encompass the entirety of the subject being taught. They might even hold some beliefs that simply aren't true. But how are we to know? How do we choose which pieces of information are pearls of wisdom that we should hold onto and use with our clients, and which pieces would best be discarded?

This dilemma lies at the heart of the second problem, which is that the authority model often discourages independent and creative thought. Instead of encouraging us to think critically through the information given to us, the authority model often presents cookbook recipes to be followed. We trust the information because we believe in the infallibility of the authority, especially in the world of continuing education, where charismatic instructors might not explain the anatomic and physiologic basis for their technique protocols and offer only their successful case studies as validation of their technique. A good maxim might be: Beware of case studies. Anyone who has been in practice for a few years can cherry-pick a handful of miracle case study success stories from all the clients they have seen.

And the third problem is likely the most vexing of all: What do we do when two (or more) authorities we trust disagree with each other? Looking at the world of continuing education, it does seem that many authorities are convinced of the superiority of their own techniques over the techniques of others. Whom do we choose to trust more when this occurs?

Research Model

The second approach to learning is to look to research for our answers. Research is based on the scientific method, which relies on a very simple and logical concept: If something works, its results should be reproducible. The research model seems to solve the problems of the authority model. For example, if an authority states that a certain treatment technique helps low back pain and backs this up by describing two or three case studies, scientific research applies that treatment technique to a large group of people who have low back pain to see whether it is as effective as claimed.

The results for this treatment group are compared to those of a large control group that did not receive the treatment (usually the control group receives what is called a placebo, or sham, treatment that is known, or at least considered, to be ineffective). A comparison is then made to see whether the clients in the treatment group fared better than those in the control group. If they did, then the proposed treatment is judged effective and valid. Alternatively, the proposed treatment could be compared to another treatment that is recognized and accepted to see which one is more effective.
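
To make this comparison concrete, here is a minimal sketch in Python of the kind of two-group comparison the research model describes. All of the pain scores and the group sizes are invented purely for illustration; a real study would involve far more participants and a carefully planned analysis.

```python
# A hypothetical treatment-versus-control comparison, with invented data.
# Scores are the change in a 0-10 low back pain rating after the study;
# more negative means more improvement.
from scipy import stats

treatment_group = [-3.1, -2.4, -4.0, -2.8, -3.5, -1.9, -3.3, -2.7]
control_group = [-1.0, -0.5, -1.8, -0.2, -1.2, -0.9, -1.4, -0.7]

# A two-sample t-test asks whether the difference between the group means
# is larger than chance variation alone would plausibly produce.
t_stat, p_value = stats.ttest_ind(treatment_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the treatment group improved more than the
# placebo group by an amount unlikely to be due to chance alone.
```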

Certainly, trusting research is a lot safer than blindly trusting an authority. The very essence of research is to put the ideas of authorities to the test. But relying too much on research also has its dangers. The efficacy of a research study depends on its being designed and carried out correctly, which is not always the case. Research study design can be complicated, and errors are sometimes made. Further, the research data can be interpreted incorrectly, leading to flawed conclusions.

Study population. First of all, an effective research study involves working with a large number of people (the number of people in a study is referred to as "n"). Whereas a single case study (an n of 1) or a few case studies (an n of 2 or 3) might make the proposed treatment technique seem effective, these results might not be reflective of the entire client population.

If n is large enough, we can better trust that the results are representative of the entire client population we might treat, and therefore that the technique will work for us with our clients. For a research study to be effective, tens, if not hundreds or thousands, of people need to be involved. This can be expensive, and these types of large studies are not always available.
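
The effect of n can be illustrated with a short simulation, sketched below in Python. The true effect size and the person-to-person variability are made-up numbers; the point is only to show how wildly a handful of case studies can swing compared with a large study.

```python
# Simulating many studies of different sizes to show how sample size
# affects the reliability of the estimated treatment effect.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.0  # hypothetical average improvement from the treatment
noise = 2.0        # hypothetical person-to-person variability

for n in (3, 30, 300):
    # Run 1,000 simulated studies of size n and record each study's
    # estimated average effect.
    estimates = [rng.normal(true_effect, noise, n).mean() for _ in range(1000)]
    print(f"n = {n:3d}: estimated effect ranges roughly from "
          f"{min(estimates):+.2f} to {max(estimates):+.2f}")
# With n = 3, some simulated "studies" even show the treatment making
# people worse; with n = 300, every estimate lands near the true value.
```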

Inclusion and exclusion factors. Next, we have to make sure that the inclusion and exclusion factors are carefully chosen. As these names imply, inclusion factors are those factors/parameters that we want included in the study; exclusion factors are those that we want excluded.

Continuing with our example, if the study is evaluating the effectiveness of the proposed treatment on clients with low back pain, do we include all people with low back pain, or do we pick and choose those we want to be a part of the study? For example, we might want to include all people with muscle spasms, strains and sprains, but exclude all people with herniated discs or severe degenerative joint disease.

The idea of inclusion and exclusion factors becomes more complicated when we start to consider all the other parameters that might affect the study. Are people included who also exercise, meditate or engage in some other activity that might affect the study? The very essence of a research study is that we try to study just one parameter—the proposed treatment.

But so many factors affect health that it's virtually impossible to achieve this goal. Therefore, we try our best to identify all of these factors and then make sure they are equally represented in both the treatment and control groups. If this is achieved, then we can assume that any difference between the two groups is due to the proposed treatment technique. However, accounting for all of these factors and distributing them evenly is not always successful.
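
Random assignment is the standard tool for this balancing act. The sketch below, again in Python and with an invented participant pool, shows how randomly splitting a large group into treatment and control halves tends to equalize a confounding factor such as regular exercise, even without enumerating it in advance.

```python
# Randomly assigning participants to treatment and control groups so that
# confounding factors end up roughly equally represented in both.
import random

random.seed(1)

# Invented participant pool: every third person exercises regularly,
# a factor that could affect low back pain independently of treatment.
participants = [{"id": i, "exercises": i % 3 == 0} for i in range(200)]
random.shuffle(participants)

# Split the shuffled list in half: first half treatment, second half control.
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

for name, group in (("treatment", treatment), ("control", control)):
    rate = sum(p["exercises"] for p in group) / len(group)
    print(f"{name}: {rate:.0%} exercise regularly")
# With enough participants, simple randomization tends to balance such
# factors on its own; small studies may need stratified assignment instead.
```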

Isolation versus wholistic approach. In fact, this points to the larger conceptual difficulty of research. A research study, by design, is meant to evaluate the effectiveness of just one parameter. In other words, to be valid, a research study must isolate this one parameter and then decide if it is effective in improving one’s health.

However, the concept of wholistic health involves the realization that no one parameter works in a vacuum. Good health is often attained only when a number of treatments are administered in conjunction with each other. For example, the best treatment for a client with low back pain might be to use massage, heat and stretching together, not to mention advising the client about postures, stress and diet—among other things. A multifaceted treatment approach such as this is inherently difficult to evaluate with scientific research models.

Treatment administration: validity and bias. Another consideration is whether the treatment was administered correctly. This may seem to be a given, but is not always the case. It’s not uncommon for treatment to be administered by people who are not experts in that technique. This is especially true with touch/massage research, where the people administering the care are often nurses or family members.

A valid question is: If the treatment was not administered by experts, can we trust the results? Ironically, if experts do administer the treatment, bias may creep in because of their interest in seeing their technique succeed. To prevent bias, it's important that the therapists are not the same people who chart the progress of the participants in the study. In this way, the people who chart the progress are blinded to who is in each group.

Client bias and hands-on placebo treatment. In fact, even the participants may want to improve so much that they bias the study. This is why it is important to design the study to include a sham placebo treatment so that the participants don't know whether they are in the treatment group or the control group that receives the placebo. In other words, they are also blinded.

This brings up a problem that is particularly challenging when conducting research in the world of manual therapy: Creating a valid hands-on placebo treatment for the control group is difficult. In the world of prescription drug research, both groups receive the same little white pill, so they cannot know if they're getting the drug or a placebo. But in the world of massage and other manual therapies, clients know whether hands-on massage is being given to them. Therefore, an ineffective placebo hands-on treatment must be devised. But this is extremely difficult. After all, doesn't all touch involve some therapeutic healing?

Interpretation and conclusions. And on top of all this, the conclusions drawn at the end of a research study may be open to interpretation, so you need to read the entire study carefully to see if you agree with the conclusions drawn by the authors. Reading just the abstract, or simply listening to someone who has read the study, won't give you any real basis for deciding whether to trust the outcomes and conclusions of the research.

Not all research is in. Our last challenge when relying on the research model for what we know is that there aren't research studies available to prove or disprove the value of every treatment technique—likely because valid research is expensive and takes time.

However, we cannot always wait for all the studies to be done conclusively, because our clients need treatment now. In the meantime, it's important to remember that the absence of research does not prove that a technique is invalid. The fact that no proof exists that treatment X works doesn't mean there is proof that treatment X doesn't work. To make a comparison, gravity still existed the day before the apple fell on Newton's head; we simply did not yet have a scientific formula to explain it. In the absence of definitive proof, we need to be open-minded.

Testing New Knowledge Model

If we neither blindly trust an authority nor have conclusive, valid research upon which to rely, we can always try testing the knowledge or technique in our own practice. For example, on Monday morning, we can practice on our clients whatever we learned in a continuing education workshop over the weekend. However, this can also be problematic for many reasons.

In effect, we would be conducting our own limited research study, and we might not design and execute it very well. We might not yet be proficient enough with the treatment technique to implement it correctly, for example, or we might not have enough clients to determine whether it is effective. Additionally, if we are administering other techniques at the same time, how do we know which one was responsible for a client's improvement, if any?

Beyond these concerns, there are tens, if not hundreds, of techniques being marketed to manual and movement therapists. Do we need to test them all? And if we did try out a technique for a reasonable period of time and it did not prove to be effective, didn't we just waste our clients' time and money? Our clients didn't sign up to be part of a research study; they came for effective treatment, and we have a responsibility to administer it.

Evaluating New Knowledge Against Anatomy and Physiology Principles

We can see that the authority model of learning requires trust that the authority is infallible, which is definitely problematic. Relying on the research model requires clear, conclusive and valid research to have already been done, which is not always the case. And relying upon the model of testing all new knowledge in our practice is logistically problematic, as well as potentially unfair to our clients.

Where does this leave us? Are we back to being open-minded and trusting our sages on the stage? We usually think of being open-minded as a good thing, but there is another old saying we should keep in mind: "Be open-minded, but don't be so open-minded that your brains fall out." This is where our fourth model of learning, evaluating new knowledge against principles of anatomy and physiology, is so valuable.

Essentially, evaluating new knowledge against principles of anatomy and physiology allows us to critically think through the mechanics of a new technique that is being proposed, and determine for ourselves if the basis for this technique makes sense given what we know about anatomy and physiology.

Certainly, not all of anatomy and physiology is known and understood, but we do have some very well-established principles about how the human body functions. And applying that knowledge to a new technique empowers us to think critically about how effective that technique is likely to be, as well as when to apply it.

For example, by knowing anatomy and physiology, we can reason what stretches for a muscle would and would not be correct. We understand that stretching a muscle involves making it longer, which is accomplished by simply doing the opposite of the muscle’s joint actions. This makes sense because if the actions of a muscle bring it to its shortened state, then doing the opposite of the actions would make the muscle longer, thereby stretching it. (One addendum to this idea is that it might be expanded to include actions at other joints if myofascial continuity across these other joints is considered.)

So, we think of the joint actions that the target muscle to be stretched can do and compare that knowledge to the stretch that is offered by the authority. If the knowledge matches, we can trust that the stretch will, in fact, be effective and begin employing it in our practice. If not, then we can choose not to use the technique.

For example, given that the brachioradialis does not cross the wrist joint, why would moving the hand into ulnar deviation at the wrist joint add to its stretch, as is often recommended by authorities? Could it be that the increased stretch felt by the client is occurring in the nearby extensors carpi radialis longus and brevis, which do cross the wrist joint and are stretched with ulnar deviation of the hand?

Additionally, given that the forearm position in which the brachioradialis is maximally contracted and shortened is halfway between full pronation and full supination (at the radioulnar joints), why would we want to place the forearm in that position for a stretch, as is again often recommended? Stretching a muscle (making it longer) is not accomplished by placing it in the position of its actions, but rather by doing the opposite of its actions. Wouldn't full pronation (or even full supination) of the forearm make more sense, because this position brings the attachments farther apart, thereby lengthening the muscle?

Looking at a stretching example in the lower extremity, we can ask why so many authorities recommend changing the position of the hip joint when stretching the vastus musculature of the quadriceps femoris group. If the vastus muscles do not cross the hip joint, then other than flexing the hip joint to slacken the rectus femoris and knock it out of the stretch (so it does not limit stretching the vastus musculature), what are we trying to accomplish by altering the position of the hip joint? If the goal has to do with myofascial meridian continuity, then a specific position should be determined based on the adjacent muscle/myofascial units that are in the meridian.

Ask yourself: Does the recommended change at the hip joint make sense when compared with this information? Using trigger point (TrP) treatment as another example, if a TrP is understood to be due to local ischemia in the tissues, does it make sense to create any further ischemia with deep pressure? And if deep pressure is administered, does it make sense to hold it for a prolonged time? What are we trying to accomplish, and are we accomplishing it as effectively as possible?

Given that ischemia is the problem (because the decreased blood supply causes a decrease in the ATP molecules that are needed to break the actin-myosin cross-bridges that create the contraction), wouldn't a stroking technique that increases local blood supply be more efficient? Therefore, wouldn't multiple short, deep effleurage strokes be more effective when treating TrPs than holding sustained compression?

These are the kinds of questions that can be asked and answered without the benefit of authority, research studies or months of testing in your practice. Evaluating new knowledge against principles of anatomy and physiology can also improve our assessment skills. Continuing with the brachioradialis example, if we want to assess the muscle through palpation, we need to make it contract so we can engage and locate it; and it makes sense to contract the brachioradialis and only the brachioradialis, so that we can discern it from the adjacent musculature. This requires an isolated contraction.

So, we ask the client to place their forearm in a position that is halfway between full pronation and full supination (the best position for the brachioradialis to contract effectively, given its actions), and then flex the forearm against our resistance. It is crucially important that our resistance is placed against the client's distal forearm, not their hand. If we add our resistance to the client's hand, their radial deviators (extensors carpi radialis longus and brevis) will engage, making it harder to discern the brachioradialis from these adjacent muscles. By understanding basic principles of anatomy and physiology, we can reason through how to most effectively palpate and assess our clients.

The essence of evaluating new knowledge against established principles of anatomy and physiology is that we are empowered by critical thinking. Of course, this requires first learning anatomy, which is often not as well taught and learned as might be desired. But if the time is spent to learn and understand anatomy, physiology can be figured out. If physiology is understood, then pathophysiology can be figured out. If the mechanics of pathophysiology are understood, then assessment can be figured out. And if assessment is known, then treatment can be figured out.

Perhaps the best way to become a more effective clinical orthopedic massage therapist is not to continually frequent continuing education workshops, not to read every research study that is published, and not to spend hundreds of hours testing new techniques on our clients, but to spend more time going over the basics of anatomy and then critically thinking from there.

It's true that we've discussed the limitations of some of the most common ways of acquiring knowledge, but that isn't meant to discredit any of the models in their entirety. The knowledge of the authority isn't the danger, but putting blind trust in those who teach can cause problems. Similarly, research has helped massage therapy gain credibility by proving when and where it can be beneficial, but you need to read research studies critically and with careful attention to how each study was conducted. And certainly there is nothing wrong with being creative in our practice by introducing and trying new treatment techniques, but we need to be sure we're not constantly subjecting our clients to the newest technique that is the flavor of the month.

You might want to think about gaining new knowledge this way: Almost every technique must have something valid about it, if not many things; otherwise, it wouldn't last very long in the world of manual and movement therapies. Yet, if every technique were as effective as its proponents state, why isn't everyone doing that technique? A logical conclusion is that each technique has something to offer, but does not offer the solution to every problem for every client.

Therefore, our role is to learn as many techniques as possible, adding the elements of each one to our toolbox of therapies. Then, with the wise judgment that comes from experience and knowledge of anatomy and physiology, we can reason through which combination of assessment and treatment tools to use in each case for the best improvement of the client who is on our table.