Time to turn off the Tap: Why Emotional Freedom Technique is dangerous nonsense

“Tapping therapy”, or Emotional Freedom Technique (EFT), has squirmed its way into mainstream media once again.

On Wednesday, BBC Midlands ran a segment on the results of a recent study using the technique, which combines tapping various points on the body with repeating positive statements. Apparently, all but one of the 36 patients in the trial had recovered. Senior doctors in the segment appeared to be pleasantly confused but utterly won over.

So if nearly every patient in the trial got better, why was there such an outpouring of derision on social media? Why exactly is “tapping therapy” a load of nonsense?

There’s no evidence it works

Firstly, as the lead author on the trial, Professor Tony Stewart, was keen to point out, the study was only a service evaluation. All they did was give a group of people with mild mental health difficulties some “tapping therapy”, to see if it was practical to do in a GP surgery. This is a very different thing to testing if a treatment works or not.

Just because the patients got better after some “tapping therapy” doesn’t mean it was the therapy that caused the change. People naturally get better anyway, especially if their problems are relatively mild – and because people tend to seek help when their symptoms are at their worst, their scores drift back towards their usual level regardless of what you do. This is called regression to the mean. Even half of people with major depression recover completely within a year if you do nothing at all.
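To see regression to the mean in action, here’s a toy simulation – every number in it is invented for illustration. People whose symptom scores fluctuate around a stable underlying level are “enrolled” only when a noisy measurement catches them scoring badly, then re-measured later with no treatment at all:

```python
import random
import statistics

random.seed(42)

# Invented numbers: each person has a stable underlying symptom level
# plus day-to-day noise. Nobody receives any treatment at any point.
TRUE_LEVEL = 10   # average underlying symptom score in the population
NOISE = 4         # size of day-to-day fluctuation
CUTOFF = 14       # score needed to enter the "trial"

people = [random.gauss(TRUE_LEVEL, 2) for _ in range(10_000)]

# Measure everyone on day 1 and enrol only those who score badly.
day1 = [(level, level + random.gauss(0, NOISE)) for level in people]
enrolled = [(level, score) for level, score in day1 if score >= CUTOFF]

# Re-measure the enrolled group later, still with no intervention.
baseline = statistics.mean(score for _, score in enrolled)
followup = statistics.mean(level + random.gauss(0, NOISE)
                           for level, _ in enrolled)

print(f"baseline mean: {baseline:.1f}")   # high, by selection
print(f"follow-up mean: {followup:.1f}")  # noticeably lower, untreated
```

The follow-up mean falls well below the baseline mean even though nobody was treated – exactly the trap an uncontrolled service evaluation falls into.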

And even if the patients getting “tapping therapy” recovered quicker than they might have done without it, that doesn’t mean that there’s something special about the technique. The “tapping therapy” involves the patient saying lots of positive things to themselves while tapping – the nice comments would make you feel pretty good, regardless of whether you were tapping yourself, hopping around on one leg or watching Thomas the Tank.

But we don’t have hopping therapy, and the Fat Controller is thoroughly underqualified.

So we really can’t say whether or not the tapping did any good at all from this trial. What we need are trials that compare a group of patients who get “tapping therapy” to a group of patients who get something that cancels out the effects discussed above – perhaps a few sessions with a friendly counsellor.

Unsurprisingly, there really aren’t any good studies out there. McCaslin published a fair review of the meagre collection of trials of “tapping therapy” in 2009, finding them riddled with basic methodological errors, including:

  • Drawing conclusions from a p value of 0.09
  • Not declaring the number of patients who dropped out
  • Poor, if any, blinding
  • Not controlling for placebo effects
  • Not controlling for demand characteristics
  • Tiny sample sizes
  • Bizarre, or inadequate, control groups

In fact, the biggest study, by Waite and Holder, who used the technique on phobias, found that all four of their groups (including doing nothing, tapping on the wrong places and, quite brilliantly, tapping on a doll) did equally well.

Why would it work anyway? 

Before we even ask if something works, we have to ask why we think it might. Is it plausible?

This is where “tapping therapy” really starts to get unhinged. Flicking through the 79-page manual written by Gary Craig, we find choice quotes like:

“EFT was originally designed to overhaul the psychotherapy profession. Fortunately, that goal has been reached…”

The manual states the starting point for the theory behind “tapping therapy”: 

“The cause of all negative emotions is a disruption in the body’s energy system” 

and therefore that 

“Tapping [various points of the body] sends pulses through the meridian lines and fixes the disruption”. 

The tapping is combined with making positive statements, like 

“Even though I still have some of this war memory, I deeply and completely accept myself”

until the bad feelings go away.

I don’t know about you, but I sat in lectures at medical school for 5 years. I’ve assisted in countless operations, looked at hundreds of scans, and studied physiology and neurology, but I’ve never seen anything resembling a meridian line in a human being. There is nothing special about the parts of the body “tapping therapy” chooses. In real life, there is simply no rational reason why tapping on arbitrary parts of the body would have any effect – apart from giving you a sore finger if you did it hard enough.

Any benefit really is just down to people saying self-affirming, hopeful things to themselves while they look a bit silly.

Why this is dangerous 

So if “tapping therapy” doesn’t do anything special, how can it do any damage?

It can’t do any damage directly – but it can certainly harm patients who urgently need treatments that do work, by delaying and fooling them. Every second someone spends having “tapping therapy” is time they could be spending seeking effective treatment for their mental illness, or perhaps even worse, for their physical illnesses. The manual claims to be able to cure allergies, respiratory conditions and even cancer – things which can kill quickly if left untreated.

Pursuing “tapping therapy” as a potential therapy, by wasting thousands of pounds on further trials and therapist training, diverts sorely needed resources from interventions that really do have rational, believable promise. Things that could help people.

On top of all of that, lending it credibility in the form of airtime and column inches will only skew the public’s idea of what real science is about – hard work and small steps. Everyone wants a miracle cure, but we can’t delude ourselves into thinking we’ve found one when it makes no sense on any level.

Time to turn off the tap.


Mental shortcuts: A psychological explanation of why psychiatrists overestimate risk

In his 2011 book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman explains how humans make decisions. He discusses how we use heuristics, otherwise known as “rules of thumb”, to cut corners in our thinking processes to help save time and effort in making judgments. Usually these heuristics work well, but sometimes, in tricky situations, they can fall seriously short of what we need. Estimating the likelihood of a rare event is one such area.

The power of heuristics

Kahneman gives the example of driving around Israel during a time when suicide bombings on buses were relatively common.  He knew the hard statistics – over the last 3 years a total of 236 people had been killed by bombs, but over 1.3 million people rode the bus every day in Israel. His chances of coming to harm were incredibly small, but he couldn’t help feeling afraid of buses. Why?
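As a sanity check on those figures – using the rounded numbers in the paragraph above, not Kahneman’s exact data – the per-journey risk works out at well under one in a million:

```python
# Rough arithmetic on the bus figures: 236 deaths over 3 years,
# set against roughly 1.3 million bus journeys per day.
deaths = 236
rides_per_day = 1_300_000
days = 3 * 365

total_rides = rides_per_day * days    # ~1.4 billion journeys over 3 years
risk_per_ride = deaths / total_rides

print(f"total journeys: {total_rides:,}")
print(f"risk per journey: about 1 in {round(1 / risk_per_ride):,}")
```

On these round numbers the risk is in the region of 1 in 6 million per journey – and yet the fear persists.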

Several heuristics will have been preventing Kahneman from making a rational decision here. Firstly, the availability heuristic. The easier it is to think of a previous example of a bus bomb, the more likely you are to think it will happen again. This is how terrorism works. With so much media attention publicising the bombs, there will have been copious opportunities for people to conjure up mental images of bus bombs – which will scare them into thinking it will recur more frequently than it actually will.

At the extreme, this can grow into an availability cascade, in which a small or non-existent threat is blown out of all proportion because one person voices concern, creating a highly accessible mental image that others use to voice concern, and so on, until everyone is concerned. Mad cow disease is a classic example of this, or electricity pylons giving people cancer. More topically, Romanian immigrants “flooding into the UK” fits into this category nicely too.

Also involved is the representativeness heuristic. How much a bus looks like a typical bus used for suicide bombs will influence how likely people think it is to blow up. If Israel’s buses were a variety of different models, people would be far more scared of the models in which bombs had already been placed, and less scared of the others, even though in reality the model probably wouldn’t have had much impact on the risk.

The third heuristic to interfere is the affect heuristic. Whether we like it or not, how we feel about something plays a big part in how good we think it is for us. We tend to overestimate the benefits of things we like and the risks of things we don’t like. Obviously, vivid images of the result of suicide bombings are going to inspire some pretty serious levels of dislike in people, so they’ll view the risks of more bombings as even bigger. Incidentally, we use the technique of making people feel bad about something to increase their estimation of its risk for good purposes too – putting unpalatable pictures of tumours on cigarette packets, for instance.

This ties in with another cognitive trick we use to save mental effort sometimes, called attribute substitution. When we have trouble answering a tough question, like how likely is this bus to blow up, our minds will often substitute an easier question and answer that instead, like how awful would it be if that bus blew up? We then take our answer and match it to an answer of corresponding intensity for the first question. “It would be really awful if that bus blew up” becomes “that bus is really likely to blow up”.

As if those weren’t enough reasons to declare humans flawed at judging risk, it turns out that we’re way keener to avoid a loss than to make a gain. This is called loss aversion. Most people, says Kahneman, will turn down the offer of tossing a coin that would win them $125 or lose them $100. In fact, it’s only when people are offered $200 for winning that coin toss that they start to agree to gamble. We hate to lose, even if it costs us a chance of winning.
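The arithmetic behind those coin tosses is trivial – both gambles have positive expected value, yet most people only accept the second:

```python
# Expected value of Kahneman's two coin tosses.
def expected_value(win, lose, p_win=0.5):
    """Average payoff of a gamble: win with probability p_win, else lose."""
    return p_win * win - (1 - p_win) * lose

rejected = expected_value(win=125, lose=100)  # +12.50, usually declined
accepted = expected_value(win=200, lose=100)  # +50.00, usually accepted

print(rejected, accepted)  # 12.5 50.0
```

Needing roughly $200 of upside to stomach a $100 loss fits Kahneman’s estimate that losses loom about twice as large as equivalent gains.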

People are so keen to avoid losing that they’ll make statistically bad decisions in order to avoid it. Take insurance, for example. People will pay way more to insure their belongings than the risk of them being damaged would suggest is reasonable. The more you stand to lose, the greater the amount over the odds you’ll pay to not lose it.

Risk in mental health

So how does all this tie in to mental health? Well, psychiatrists judge risk all the time. At least we think we do.

Let’s take judging the risk of a mental health patient killing themselves or someone else. We often section people because we think the risks of this are so high.

In fact, on paper, the risks of both those outcomes are incredibly small.

In 2009, for example, 84 patients killed themselves during hospital admission or on a period of trial leave, out of a total of approximately 120,000 admissions involving 108,000 people. That’s a suicide rate of around 1 in 1400 admissions. Even if we take the cases in which risk of suicide was the major contributing factor to admission (21%), that’s still only 1 in 285 admissions.
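Those rates can be re-derived from the rounded figures in a couple of lines (the rounding is why the second result lands near, rather than exactly on, the quoted 1 in 285):

```python
# Re-deriving the post's suicide figures from its rounded inputs.
suicides = 84
admissions = 120_000

rate_all = admissions / suicides   # ~1 in 1,429 admissions overall
print(f"all admissions: 1 in {rate_all:.0f}")

# Restricting to the ~21% of admissions where suicide risk was the major
# contributing factor (assuming, as the post implies, the deaths fall in
# that group):
high_risk = admissions * 0.21
print(f"high-risk admissions: 1 in {high_risk / suicides:.0f}")  # ~1 in 300
```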

Likewise for homicide, in the whole of 2011 a minuscule total of 18 people who were convicted of homicide had been given a diagnosis of schizophrenia at any point in their life. Set against a prevalence of schizophrenia of 0.3-0.7% for the entire country, one study put the risk at 1 in 9090 cases. The same study found that the homicide rate during a first episode of psychosis, often the time during which a patient is most ill and is most likely to be sectioned, was 1 in 629 patients.

Mental shortcuts

So when we deem someone to be “high risk” – and I do this myself – what do we really mean?

If you take Kahneman’s point of view, it means we’ve used our heuristics to cut corners in our decision making and come up with a heavily biased assessment based on how we feel and our active memories, rather than the facts of the matter.

When we see someone in A+E who feels suicidal, or is so psychotic we feel they might hurt someone, what do we think of? Is it the statistics? No.

We use the availability heuristic to conjure up images of previous patients we’ve had, or even just heard of, that have killed themselves or hurt others. Newspaper headlines like The Sun’s “1,200 killed by mental patients” crop up in our heads.

We use the representativeness heuristic to liken the patient in front of us to what we imagine the typical patient who will kill themselves or someone else to look like – and they usually fit the bill because the image is so vague.

We use our affect heuristic to allow our feelings to judge how risky the situation is – and because there can be no worse feeling than making a wrong decision and seeing someone die because of it, we decide that the risk is high.

Finally, we use our aversion to loss to take more precautions than the statistics deem necessary to ensure what’s important – our patient’s safety.

None of these heuristics would cause too much of a problem if we were dealing with small decisions, or dealing with a moderate decision only once. But that’s not what we do; we make big decisions about bringing people into hospital because they’re “high risk” all the time. Potentially hundreds of people, to prevent just one or two bad outcomes.

Why use risk as a basis for treatment?

On this evidence, I don’t see admitting or treating someone purely for reasons of risk to be a reasonable way of practising psychiatry. I certainly don’t see it as a good enough reason for taking away someone’s liberty by sectioning them. If we were any good at forecasting risk there might be a case for it, but as it stands, we’re so poor at predicting bad outcomes that the amount of treating and coercing we do to prevent one adverse event is far too large.
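A hypothetical worked example shows just how hard this prediction problem is. Suppose we had a risk-assessment tool with 80% sensitivity and 80% specificity – both figures are invented for illustration, and are far better than anything that actually exists – applied where the true rate is 1 in 285 admissions:

```python
# Hypothetical sketch of why prediction fails at low base rates.
# Sensitivity and specificity are invented, deliberately generous numbers.
base_rate = 1 / 285    # true rate of suicide among admissions
sensitivity = 0.80     # assumed: 80% of true cases flagged "high risk"
specificity = 0.80     # assumed: 80% of non-cases correctly not flagged

true_pos = sensitivity * base_rate
false_pos = (1 - specificity) * (1 - base_rate)
ppv = true_pos / (true_pos + false_pos)   # positive predictive value

print(f"chance a 'high risk' patient actually dies: {ppv:.1%}")  # ~1.4%
print(f"'high risk' patients per actual case: {round(1 / ppv)}")  # 72
```

Even with this unrealistically accurate tool, about 72 patients would be flagged “high risk” for every one who actually dies – and real-world judgment is much worse than 80/80.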

Fortunately, almost all the people that are brought into hospital as “high risk” also need to be there for their own health – because they are so unwell. The illness and the risk often come together.

Improving people’s health is a far more rational use for a hospital, and I’m sure that by focusing on treating illnesses, the tiny risks our patients do pose will shrink even further – not that our heuristics would be able to tell.
