Community Treatment Orders: Good, Bad or Ugly?

The Psychiatric Bulletin has devoted an entire issue to the topic of CTOs (Community Treatment Orders), so I thought it would be an opportune time to discuss what they are, and the various arguments for and against them.

I’ll discuss three possible conclusions about CTOs:

  • They’re good (they work)
  • They’re bad (they don’t work)
  • They’re ugly (they’re a breach of human rights)

What are CTOs?

CTOs, or Community Treatment Orders, are “community sections”.

If you’ve been in hospital under Section 3 of the Mental Health Act and are well enough to be discharged home, your consultant psychiatrist might decide to put you on a CTO instead of letting you go completely. They need the agreement of an AMHP (usually a social worker) to do this.

In simple terms, being on a CTO means that you’re free to do as you wish, as long as you keep to the conditions of the CTO. There are two mandatory conditions – you have to turn up at the end of your CTO to be reassessed, and you have to turn up to see a “second opinion” doctor if you don’t agree with your treatment plan – but your consultant can add any number of additional conditions if they feel it’s necessary to keep you well. Conditions like “avoid drugs” and “turn up to my clinic” are common.

Here’s the kicker – if you don’t keep to the conditions of the CTO, your consultant can recall you to hospital (force you to come back) if it’s in the interests of your health, your safety or the protection of others. In this respect, being on a CTO is quite like being on parole.

You can’t be treated (i.e. injected) against your will in the community – that can only happen in the hospital after you’ve been recalled.

A CTO initially lasts 6 months, but it can be reviewed and extended as many times as your consultant likes, as long as an AMHP still agrees and you don’t win a tribunal.

You can find the exact legal criteria for CTOs here.

Why do we have CTOs?

We’ve had CTOs since 2008. Large parts of the rest of the world already had them in some form, including numerous states of the USA, Australia, New Zealand and Israel.

The idea to bring them to the UK was raised as early as 1988 by the Royal College of Psychiatrists, though fierce opposition and political stagnation delayed their arrival.

The aim was to keep “revolving door” patients out of hospital – the type of patient who disengages from their care team, stops their medication, relapses, gets admitted, gets better with treatment, gets discharged and then starts the whole cycle all over again, often many times a year. Apparently these patients are demographically similar in every place that uses CTOs – they have psychotic illnesses and tend to be male, black and use drugs.

Ultimately, CTOs were meant to free up lots of beds for other patients and to keep difficult-to-manage patients well.

Initially, the government estimated we’d only use CTOs on a few hundred patients a year, but since their inception in 2008 we’ve placed over 14,000 patients on them. Roughly 4,000 of those patients have been recalled to hospital and roughly 4,000 have had their CTOs discontinued. Only 5% of patients who appeal to a tribunal win their case.

Are they GOOD or BAD?

Whether CTOs work or not has been hotly debated. The key reviews of the research were written by Churchill in 2007 and Maughan in 2013.

The first point to make is that the effects of CTOs are very difficult to study scientifically – different countries have differently worded laws, with different intents, so it’s not always possible to directly apply evidence from one place to another. It’s also very hard to tease apart the effects of CTOs from the effects of other interventions that often come with them, like extra support.

A good example of how unintentionally misleading research about CTOs can be is this recent study. The researchers followed 37 patients and compared how long they spent in hospital before and after being put onto their CTOs. Lo and behold, the average number of days they spent in hospital per year dropped from 133 to just 11. Admission rates fell from 3.3 per year to 0.3 per year – a tasty 91% reduction. It appears to be a miracle.

But it isn’t – when the patients were put onto their CTOs, they got a lot of extra care from the specialist team, which would have strongly influenced their likelihood of improvement regardless of their CTO. And people are prone to getting better anyway (“regression to the mean”).
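If you’re so inclined, you can watch regression to the mean manufacture exactly this kind of “miracle” in a toy simulation (my own illustration, not data from the study): give every imaginary patient a fixed admission rate, apply no treatment whatsoever, and only start counting after a bad year – the before-and-after comparison still shows a dramatic drop.

```python
import random

random.seed(42)

def yearly_admissions(rate):
    # Count events of a Poisson process with the given rate over one year.
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return n
        n += 1

# Each patient has a fixed underlying admission rate; there is no
# treatment effect anywhere in this simulation.
patients = [random.uniform(0.5, 3.0) for _ in range(10_000)]

before, after = [], []
for rate in patients:
    year_one = yearly_admissions(rate)
    # Clinicians only reach for a (do-nothing) "CTO" after a bad year:
    if year_one >= 3:
        before.append(year_one)
        after.append(yearly_admissions(rate))  # same rate, nothing changed

print(f"mean admissions before: {sum(before) / len(before):.2f}")
print(f"mean admissions after:  {sum(after) / len(after):.2f}")
# The "after" mean comes out markedly lower purely because patients were
# selected on an unusually bad year -- regression to the mean.
```

Because the patients were picked out at a low point, their next year almost has to look better, CTO or no CTO – which is why you need a control group rather than a before/after comparison.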

You need to compare groups of patients whose care is identical except for the CTOs to get a valid result. When this has been done, the outcomes are clear – CTOs don’t work. Many studies have actually shown that being on a CTO makes it more likely that you’ll be admitted because you’re being watched more closely.

All three randomised controlled trials – the most reliable type of trial – support this assertion. Before 2013, the only two studies of this type, performed in North Carolina and New York, were poorly designed and of debatable usefulness, but still failed to report any benefit from being on a CTO.

Then came one of the most important mental health papers of the year – the OCTET trial. Professor Tom Burns and his team in Oxford randomised 336 patients to receive either a CTO or a short period of day leave from hospital before discharge. A year later, they checked up on how the patients did. It was the best study design they could manage under the legal circumstances, and the results were astonishing – there was no difference at all between the two groups in terms of admissions, wellness, time spent in hospital or social functioning. Literally zero difference.

Though the study has been criticised, usually on the grounds that the sample of patients and clinicians who took part in the trial was biased, it’s hard to argue against such an emphatic result when the only contradictory evidence is your own gut feeling that CTOs appear to work.

As we know, you can’t judge if an intervention works just by looking at ground level. Humans just aren’t capable of making accurate inferences in that way.

Why are they BAD?

So why don’t CTOs work? I have my theories.

If you conceptualise CTOs as a threat – “if you don’t do as I say, we’ll bring you back to hospital” – there are various reasons why that threat might not be effective.

  • The threat is ignored. The type of patients who end up on CTOs aren’t usually the ones who follow the instructions of their doctor to the letter.
  • The threat is a bluff. Even when a patient ignores their doctor and breaks the conditions of their CTO, their doctor doesn’t recall them. They stay well for a while, and aren’t a risk to themselves or anyone else, so the doctor doesn’t think it would be a good idea to drag them back to hospital kicking and screaming. Eventually they do relapse, usually quickly, and are brought into hospital in the state they would have been in without a CTO.
  • The threat is hard to carry out. With services as stretched as they are, it’s difficult enough to organise admission for someone who is really unwell, let alone someone who has refused to take their medication and needs to have it given under force, even though they’re still well.
  • The treatment doesn’t work anyway. The type of patient who ends up on a CTO, who is very difficult to keep well, isn’t usually going to have an amazing response to medication even if they are coerced into taking it. They tend to relapse anyway, whatever we do.

Are they UGLY?

Some groups have stated that even if we were sure that CTOs reduced admissions and kept patients really well, it would be wrong to use them as they infringe human rights.

In February 2013 the UN Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, Juan E. Méndez, released a report detailing how some developing countries were discriminating against people with mental health problems, resulting in their abuse.

Though some have argued that CTOs do not amount to the same level of “diminishing human dignity” as the laws of these developing countries, the difference seems to be just a shade of grey; a matter of interpretation. It shouldn’t be this way.

Though research suggests that the opinions of patients on CTOs are mixed – some like the extra care and structure, others dislike the coercion – I think the emphasis on paternalism is a very unhelpful step for psychiatry.

As I’ve discussed in a blog before, our Mental Health Act doesn’t allow for the possibility that a patient might be able to make sound decisions about their own life. It assumes they won’t be able to, and hands that power to doctors.

If a patient with a physical health problem, like diabetes or asthma, has the capacity to make a decision about their own care, then that decision is respected – even if it’s unwise and might lead to them becoming very unwell. We have no right to force capable people with diabetes to come back into hospital if they stop taking their insulin, even though we might drastically reduce the rates of illness that way.

But if the patient has a mental illness, for some reason we can force them, even if they’re utterly capable of considering the situation for themselves. I recognise that mental illness predisposes slightly more readily to violence than most physical illnesses – but this is just another risk that the patient has to demonstrate they can weigh up to be deemed capable of making their own decisions.

A large proportion of patients on CTOs – I’m not sure how many exactly – will be chronically too unwell to be able to make reasonable decisions about their care, so being on a CTO is less of an infringement for them, but this is simply fortunate, and not an excuse for ultimately abusive legislation.

Where from here?

The water is muddy. The research on CTOs appears to state that they don’t work, but it’s hard to be certain because it’s a tough area to study scientifically. Even if they do reduce relapse and readmission rates, in the eyes of many, CTOs represent a blatant infringement of human rights.

The possibility of CTOs being abolished, even in the face of robust scientific evidence of their ineffectiveness, is slim – unless our government is instructed by a higher power. That outcome seems unlikely too.

The best we can do for now is to keep investigating, keep discussing, keep raising the lack of evidence.

CTOs are a law, but they’re also a treatment. For any treatment, a lack of evidence of effectiveness should make us sceptical and cautious.

Promises, promises: Is “Closing the Gap” going to help mental health services?

On Monday, Nick Clegg announced the government’s new mental health action plan, “Closing the Gap”.

In a document of broad scope, twenty-five priorities are listed as targets for improvement in mental healthcare provision, including the commissioning of high quality services in all areas, easier access to talking therapies and the establishment of waiting list standards.

The acknowledgement of the need to improve mental health services by any politician is a clear positive step. For such a policy to attract the Deputy Prime Minister himself is even better. But the reaction from mental health professionals has been at best cautiously muted, and at worst dismissive.

This is because many of us have seen this all before. It’s been three years since the government’s last relevant policy, No Health Without Mental Health, which made simple promises like “more people with mental health problems will recover”. Since then, we’ve lost nearly 1 in 10 mental health beds, seen our wards packed to over 100% capacity and seen waiting lists for talking therapy climb to over a year in some areas.

Mental health funding, which was already barely 60% of what the morbidity burden of mental illness dictated it should’ve been, has been cut in real terms for the last two years. Staff from one Mental Health Trust have felt compelled to start a campaign against decimating cuts that they see as overtly dangerous to patients.

So as much as we crave change, we’ve learnt not to expect it, no matter how clear the message.

“Closing the Gap” in itself is well intentioned, but light on detail in places – particularly figures and timeframes. Although the pledge that “no-one experiencing a mental health crisis should ever be turned away from services” is a laudable one, it’s also easier said than planned and paid for. From the frontlines of mental health, where we often have to send patients over 200 miles to access something as straightforward as a hospital bed, it’s not hard to see how revolutionary the changes would have to be. We don’t even have the resources to see half of the people who come to A+E following an episode of self harm, let alone treat them thoroughly. To fix these problems in anything less than decades, with anything less than billions of pounds, would be akin to magic.

The promise of “an information revolution in mental health” seems a touch optimistic too, when the primary method of information transfer between mental health hospitals is still fax.

One action point – that mental health patients should be offered a choice of providers – resonated particularly sardonically. With so many people struggling to obtain an appointment to be seen by the sole local service, the political push to offer a choice seemed sadly out of touch to some. The difference in waiting times between mental and physical health is amongst the most prominent failings of our system, and though instituting waiting list standards is a positive step toward equality, reducing the size of the lists is a far pricier conundrum.

However, the focus of our doubt should not be the lack of clear funding promises accompanying the policy. The common denominator of success in mental health isn’t funding – it is caring. Wise investment would only serve as a means to employ more staff, to train them more comprehensively, and allow them more time to care for their patients. Such a process takes years, is not easy to legislate for, and has been consistently overlooked as a vital part of establishing high quality care. Yet again, the value of good staff with high morale has gone unnoticed.

It’s not that we aren’t grateful for the attention shown to mental health by Clegg. But politicians no longer have the right to expect commendation simply for showing an interest in us and making us promises. We’ve been fooled before, many times, and now we’re not so easily taken in. When we see results, we might begin to warm up.

Time to turn off the Tap: Why Emotional Freedom Technique is dangerous nonsense

“Tapping therapy”, or Emotional Freedom Technique (EFT), has squirmed its way into mainstream media once again.

On Wednesday, BBC Midlands ran a segment on the results of a recent study using the technique, which combines tapping various points on the body with repeating positive statements. Apparently, all but one of the 36 patients in the trial had recovered. Senior doctors in the segment appeared to be pleasantly confused but utterly won over.

So if nearly every patient in the trial got better, why was there such an outpouring of derision on social media? Why exactly is “tapping therapy” a load of nonsense?

There’s no evidence it works

Firstly, as the lead author on the trial, Professor Tony Stewart, was keen to point out, the study was only a service evaluation. All they did was give a group of people with mild mental health difficulties some “tapping therapy”, to see if it was practical to do in a GP surgery. This is a very different thing to testing if a treatment works or not.

Just because the patients got better after some “tapping therapy” doesn’t mean it was the therapy that caused the change. People naturally get better anyway, especially if their problems are relatively mild – and people who seek help when they’re at their worst tend to improve regardless, a phenomenon known as regression to the mean. Even half of people with major depression recover completely within a year if you do nothing at all.

And even if the patients getting “tapping therapy” recovered quicker than they might have done without it, that doesn’t mean that there’s something special about the technique. The “tapping therapy” involves the patient saying lots of positive things to themselves while tapping – the nice comments would make you feel pretty good, regardless of whether you were tapping yourself, hopping around on one leg or watching Thomas the Tank Engine.

But we don’t have hopping therapy, and the Fat Controller is thoroughly underqualified.

So we really can’t say whether or not the tapping did any good at all from this trial. What we need are trials that compare a group of patients who get “tapping therapy” to a group of patients who get something that cancels out the effects discussed above – perhaps a few sessions with a friendly counsellor.

Unsurprisingly, there really aren’t any good studies out there. McCaslin published a fair review of the meagre collection of trials of “tapping therapy” in 2009, finding them riddled with basic methodological errors, including:

  • Drawing conclusions from a p value of 0.09
  • Not declaring the number of patients who dropped out
  • Poor, if any, blinding
  • Not controlling for placebo effects
  • Not controlling for demand characteristics
  • Tiny sample sizes
  • Bizarre, or inadequate, control groups

In fact, the biggest study, by Waite and Holder, who used the technique on phobias, found that all four of their groups (including doing nothing, tapping on the wrong places and, quite brilliantly, tapping on a doll) did equally well.

Why would it work anyway? 

Before we even ask if something works, we have to ask why we think it might. Is it plausible?

This is where “tapping therapy” really starts to get unhinged. Flicking through the 79-page manual written by Gary Craig, we find choice quotes like:

“EFT was originally designed to overhaul the psychotherapy profession. Fortunately, that goal has been reached…”

The manual states the starting point for the theory behind “tapping therapy”: 

“The cause of all negative emotions is a disruption in the body’s energy system” 

and therefore that 

“Tapping [various points of the body] sends pulses through the meridian lines and fixes the disruption”. 

The tapping is combined with making positive statements, like 

“Even though I still have some of this war memory, I deeply and completely accept myself”

until the bad feelings go away.

I don’t know about you, but I sat in lectures at medical school for 5 years. I’ve assisted in countless operations, looked at hundreds of scans, and studied physiology and neurology, but I’ve never seen anything resembling a meridian line in a human being. There is nothing special about the parts of the body “tapping therapy” chooses. In real life, there is simply no rational basis why tapping on arbitrary parts of the body would have any effect – apart from giving you a sore finger if you did it hard enough.

Any benefit really is just down to people saying self-affirming, hopeful things to themselves while they look a bit silly.

Why this is dangerous 

So if “tapping therapy” doesn’t do anything special, how can it do any damage?

It can’t do any damage directly – but it can certainly harm patients who urgently need treatments that do work, by delaying and fooling them. Every second someone spends having “tapping therapy” is time they could be spending seeking effective treatment for their mental illness, or perhaps even worse, for their physical illnesses. The manual claims to be able to cure allergies and respiratory conditions, and even cancer – things which can kill quickly if left untreated.

Pursuing “tapping therapy” as a potential therapy, by wasting thousands of pounds on further trials and therapist training, diverts sorely needed resources from interventions that really do have rational, believable promise. Things that could help people.

On top of all of that, lending it credibility in the form of airtime and column inches will only skew the public’s idea of what real science is about – hard work and small steps. Everyone wants a miracle cure, but we can’t delude ourselves into thinking we’ve found one when it makes no sense on any level.

Time to turn off the tap.

Media reporting of suicide: how harmful is it?

This article discusses suicide. Some readers may find it triggering.

Newspapers and websites are currently strewn with the debate over whether the suicide of a well known Coronation Street character could prompt “copycat” suicides. The Mirror seemed particularly happy to lead with it. But by reporting this fear, could the papers actually be adding to it?

Though I don’t watch the programme myself, from the reports it is clear that the storyline depicts Hayley, a woman suffering from the painful effects of terminal cancer, who follows a plan to take her own life and dies peacefully.

The way the media covers mental health can often be insensitive, but it rarely has a direct role in affecting the mental health of individuals. The phenomenon of “copycat” suicides, however, is one area where what the papers write really does have influence.

I’d like to run through the history and research of the area, what we can do to limit the risk, and why the storyline might or might not lead to “copycat” suicides.

The Werther Effect

A “copycat” suicide is an emulation of a recent, highly publicised suicide. The methods someone uses to take their own life will typically be the same as the original suicide, and clusters of suicides can occur.

The first reports of “copycat” suicides originated in response to a collection of deaths that followed the publication of Goethe’s The Sorrows of Young Werther in 1774. In the book, a love-stricken young man shoots himself with a pistol, an action which was then emulated by several young men in reality. The book was banned, and the term Werther effect coined.

The phenomenon has also been seen outside the western world. Due to spectacularly irresponsible reporting and an equally dismal lack of initiative from public health officials, the volcanic summit of Mount Mihara in Japan became a recurring venue for suicides. Until a fence was erected in the 1950s, people would throw themselves from a vantage point directly into the crater. In 1933 alone, 944 people jumped.

Modern day

Thankfully, since then we’ve become more responsible in our journalism and we’ve also been able to use modern research techniques to further study how publicising a suicide can lead to further suicides.

Studies from Germany and Japan, amongst others, have suggested that rates rise most significantly in the week following a suicide being reported by the press.

In 2002, an analysis of a series of 42 studies was published. Media reports of celebrity suicides were found to be 14 times more likely to lead to “copycat” suicides than those about non-famous people. Reports of real suicides, as opposed to fictional suicides in films or television programmes, were found to be 4 times more likely to lead to deaths.

We also know that the more the media reports a story, the more likely it is to prompt further suicides, and that people of similar race and age to the deceased person have the biggest increase in risk.

And as might be expected, people with pre-existing mental health problems, particularly young people, are at highest risk. Researchers have hypothesised that social learning theory, in which we view other people doing things which seem rewarding or appropriate, and then copy them, might be a useful way of conceptualising things.

Limiting the risk

So what can journalists do to limit the potentially negative effect of reporting suicides?

Different countries have different codes of journalistic ethics on this issue. Norway, for example, advises that suicide and attempted suicide should “never, in general, be given any mention”.

We know that a very large proportion of people who take their own lives have a mental health problem. By highlighting the presence of the disorder in reports, and including information about help lines and support services, newspapers can reduce the risk of “copycat” events.

In fact, the World Health Organisation issued guidelines for media professionals in 2008, which go into further detail but also offer bullet-point advice:

  • Take the opportunity to educate the public about suicide
  • Avoid language which sensationalizes or normalizes suicide, or presents it as a solution to problems
  • Avoid prominent placement and undue repetition of stories about suicide
  • Avoid explicit description of the method used in a completed or attempted suicide
  • Avoid providing detailed information about the site of a completed or attempted suicide
  • Word headlines carefully
  • Exercise caution in using photographs or video footage
  • Take particular care in reporting celebrity suicides
  • Show due consideration for people bereaved by suicide
  • Provide information about where to seek help
  • Recognize that media professionals themselves may be affected by stories about suicide

The Mirror, with its typically forthright headline and hyperbolic story, chooses to ignore at least three of these guidelines.

Every case on its merits 

In these situations, I doubt there will ever be a firm consensus of right or wrong.

So what about this case? Well, the story of Hayley is fictional, and clearly centres around a woman who is terminally unwell. As we’ve seen, fictional cases are less likely to lead to harm than real ones, and people of similar demographics to the case are at highest risk – so younger, fitter people might not be in harm’s way here.

However, the carelessly glib headlines and copious photos are only going to increase the risk, says the evidence. Also, describing the method is known to be a bad idea.

So is this just a harmless reaction to a popular programme, discussing a salient issue that everyone will know about already? Or is it exposing thousands of people to needless risk of suicide? I’m not sure.

If reading this article has led to you needing to talk to someone, The Samaritans are always available.  

Committed: Is it time we stopped ‘sectioning’ people?

Arguably the most important ethical principle in medicine is autonomy – the right of patients to decide for themselves what treatment they want.

However, we also recognise that being very ill sometimes makes it hard for patients to make sound decisions about their own care. Therefore, we have laws that allow doctors to make decisions on behalf of patients who are unwell.

The laws that psychiatry uses in this respect are quite different to the laws the rest of medicine uses – and I’m becoming firmer in my belief that this may be a bad thing.

Mental Capacity Act (2005)

Before 2005, if you developed a ‘physical health’ problem which interfered with your ability to make sound decisions, doctors would have decided whether to treat you against your will using common law – the accumulated results of past legal cases.

People sometimes seem to think psychiatrists are the only doctors who treat people against their will, but many types of doctor do this fairly frequently. Hospitals are packed with semi-conscious and delirious patients who object to crucial treatment in a state of confusion.

In the modern climate of accountability, the uncertainty of common law gradually became unsustainable. The Bournewood case illustrated the difficulties well – it brought to light the case of a young man with autism who was admitted to hospital and not allowed to leave or have visitors for months without any legal recourse.

We needed a more solid framework with which to decide if we could treat someone against their will. In 2005, we found that framework in the Mental Capacity Act.

This Act states that doctors can treat someone against their will if they lack capacity. Everyone is assumed to have the capacity to make decisions until an assessment shows otherwise.

To lack capacity, a patient has to have “an impairment of, or a disturbance in the functioning of, the mind or brain” and to be unable to do one of the following:

  • to understand the information relevant to the decision
  • to retain that information
  • to use or weigh that information as part of the process of making the decision
  • to communicate his decision (whether by talking, using sign language or any other means)

If a patient is found to lack capacity to make a decision, doctors can treat them in their best interests, by weighing up opinions from different sources (including, of course, the patient).

Capacity is also seen as dynamic. Just because a patient lacks the capacity to make a decision about one thing (e.g. do I want to go into hospital?) it doesn’t mean they don’t have the capacity to make decisions for themselves about other, smaller issues (e.g. do I want to take my tablets?).

An addition to the MCA was made in 2009 to allow patients to be deprived of their liberty (for example, moved into a locked nursing home) for long periods if they lack the capacity to make that decision, if the deprivation is in their best interests. This is called a Deprivation of Liberty Safeguard (DOLS).

Most people see the Mental Capacity Act as a huge forward step in patient-led care.

Mental Health Act (1983)

However, if you have a mental health problem and don’t want treatment, something entirely different may happen. Psychiatrists use the Mental Health Act (1983) to treat people who don’t want to be treated. This is the Act under which we can ‘section’ people.

There are lots of different types of shorter sections, but the most important ones are Section 2 and Section 3. These give the legal power to detain a patient for 28 days and 6 months respectively.

This Act states that to be treated against their will, a patient must have ‘any disorder or disability of the mind’ that is ‘of a nature or degree which warrants the detention of the patient in a hospital’ and detaining him must be in the interests of:

  • The patient’s health or
  • The patient’s safety or
  • The protection of others

A patient’s ability to make a decision about treatment for themselves is not taken into account. If two doctors and a social worker agree that detaining a patient is the best thing for their health or safety, or for the protection of other people, they can detain the patient. That’s all there is to it.

The patient will have the right to appeal and state their case to a tribunal, but even at that point, their ability to make up their own mind isn’t taken into account.

Is using a different legal framework a bad thing for psychiatry?

The question of whether using the Mental Health Act, instead of thinking like the rest of medicine, could be counterproductive for psychiatry has been debated for decades.

There are certainly some good things about the Mental Health Act – it demands documentation in black and white about why the decision to treat a patient against their will was made, and gives the patient a legal right to appeal by tribunal – whereas with the Mental Capacity Act, assessments can be less formal.

The Mental Health Act also demands that three people assess a patient to make a decision, whereas with the Mental Capacity Act it may only be one person making a judgement call.

But in my opinion, these benefits aren’t enough. The use of a different system is harmful.

By using a different framework, I feel psychiatry is stigmatising its own patients. Though people with mental disorders caused by a ‘physical health’ problem can be ‘sectioned’, the vast majority have purely mental health problems. Subjecting them to a law that doesn’t take into account the possibility that they could make their own decisions, when patients with other types of illness are listened to and facilitated to make their own decisions, is dehumanising.

‘Sectioning’ people with mental health problems also does nothing to further the drive for parity between mental and physical health. Philosophically, the mind is the product of the brain – there is no real difference between mental and physical, except the divide we create in procedures like sections. We need to be more similar to the rest of medicine to promote our cause, not more different.

By using the Mental Health Act, psychiatrists are led to focus on the wrong question when they see a patient. Instead of thinking “what does this patient want, and can I help them get it?”, they’re thinking “is this patient sectionable?”. This damages potentially therapeutic relationships.

Facilitating the choices of patients who do agree to come into hospital without being ‘sectioned’, but who still lack the capacity to make big decisions about their care, may be forgotten in this atmosphere of paternalism.

Using the Mental Health Act also perpetuates the general myth that psychiatric patients are dangerous, and sometimes need to be removed from the streets at all cost.

Would change be so hard?

If psychiatrists started to use the Mental Capacity Act instead, it wouldn’t actually change the group of patients we admit to hospital against their will all that much. A 2008 study published in the BMJ reported that of 150 patients sectioned to a psychiatric hospital, 86% didn’t have the capacity to make a decision about being in hospital anyway.

In another related study published in the BJPsych in 2009, only 6% of 200 psychiatric inpatients were found to be both under a section and to have the capacity to decide whether to be in hospital or not for themselves. Most of this small group had either been too unwell to decide for themselves at the time of their admission, or had deliberately faked being unwell to get admitted – so they would’ve come into hospital anyway.

This tallies with my own experience. If we think someone needs to come into hospital but they don’t agree, they’re almost always too unwell to make that decision for themselves anyway. By helping patients make choices for themselves, and only making choices for those who can’t, instead of forcing treatment on people we think need it, we wouldn’t be treating different people – but we’d be treating them differently!

Some may say that psychiatrists have a duty to protect the public – that we should be able to remove ‘dangerous’ patients from the streets whether they can decide for themselves or not. But again, the vast majority of patients who pose a risk to the public are so unwell that they lack the capacity to decide for themselves about admission. They’d still have to come into hospital.

As for the few patients who are a risk to themselves or others, but do retain the capacity to decide for themselves about hospital care – we should allow them to make their own mistakes. We couldn’t section a mentally healthy but dangerous person, like a careless pilot. We couldn’t section a person with ‘physical health’ problems who poses a risk to themselves but can make up their own mind (e.g. a diabetic patient who sometimes drops his blood sugar through erratic insulin use and becomes aggressive). So we shouldn’t be able to do it to mental health patients.

Ability to weigh up the risks of harm befalling yourself or other people if you relapse should form part of the assessment of capacity. Just like it does in the rest of medicine.

The way forward

Adaptations to the Mental Capacity Act might have to be made to make its use in psychiatry possible. We don’t have the manpower to check the capacity of every patient who wants to leave every single day, so we’d need a law that says we only have to check it every week, for instance, to make sure the patient hasn’t regained the ability to decide rationally that discharge would be best. The provisions made for DOLS show us that this kind of legislation is entirely possible.

If we ever want to be seen as truly equal with other branches of medicine, we should start valuing and empowering the choices of our patients as highly as they do.

After publication of this blog I was alerted to a lecture given by Professor George Szmukler, which summarises these issues extremely well. A video of the lecture can be found here.

Mental shortcuts: A psychological explanation of why psychiatrists overestimate risk

In his 2011 book Thinking, Fast and Slow, Nobel Prize winning psychologist Daniel Kahneman explains how humans make decisions. He discusses how we use heuristics, otherwise known as “rules of thumb”, to cut corners in our thinking processes to help save time and effort in making judgments. Usually these heuristics work well, but sometimes, in tricky situations, they can fall seriously short of what we need. Estimating the likelihood of a rare event is one such area.

The power of heuristics

Kahneman gives the example of driving around Israel during a time when suicide bombings on buses were relatively common. He knew the hard statistics – over the previous 3 years a total of 236 people had been killed by bombs, but over 1.3 million people rode the bus every day in Israel. His chances of coming to harm were incredibly small, but he couldn’t help feeling afraid of buses. Why?

Several heuristics will have been preventing Kahneman from making a rational decision here. Firstly, the availability heuristic. The easier it is to think of a previous example of a bus bomb, the more likely you are to think it will happen again. This is how terrorism works. With so much media attention publicising the bombs, there will have been copious opportunities for people to conjure up mental images of bus bombs – which will scare them into thinking it will recur more frequently than it actually will.

At the extreme, this can grow into an availability cascade, in which a small or non-existent threat is blown out of all proportion because one person voices concern, creating a highly accessible mental image that others use to voice concern, and so on, until everyone is concerned. Mad cow disease is a classic example of this, or electricity pylons giving people cancer. More topically, Romanian immigrants “flooding into the UK” fits into this category nicely too.

Also involved is the representativeness heuristic. How much a bus looks like a typical bus used for suicide bombs will influence how likely people think it is to blow up. If Israel’s buses were a variety of different models, people would be far more scared of the models in which bombs had already been placed, and less scared of the others, even though in reality the model probably wouldn’t have had much impact on the risk.

The third heuristic to interfere is the affect heuristic. Whether we like it or not, how we feel about something plays a big part in how good we think it is for us. We tend to overestimate the benefits of things we like and the risks of things we don’t like. Obviously, vivid images of the result of suicide bombings are going to inspire some pretty serious levels of dislike in people, so they’ll view the risks of more bombings as even bigger. Incidentally, we use the technique of making people feel bad about something to increase their estimation of its risk for good purposes too – putting unpalatable pictures of tumours on cigarette packets, for instance.

This ties in with another cognitive trick we use to save mental effort sometimes, called attribute substitution. When we have trouble answering a tough question, like how likely is this bus to blow up, our minds will often substitute an easier question and answer that instead, like how awful would it be if that bus blew up? We then take our answer and match it to an answer of corresponding intensity for the first question. “It would be really awful if that bus blew up” becomes “that bus is really likely to blow up”.

As if those weren’t enough reasons to declare humans flawed at judging risk, it turns out that we’re way keener to avoid a loss than to make a gain. This is called loss aversion. Most people, says Kahneman, will turn down the offer of tossing a coin that would win them $125 or lose them $100. In fact, it’s only when people are offered $200 for winning that coin toss that they start to agree to gamble. We hate to lose, even if it costs us a chance of winning.
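Kahneman’s coin-toss example is striking precisely because the numbers favour taking the bet. A quick sketch of the expected-value arithmetic (the function name and figures here are just for illustration):

```python
# Expected value of a coin toss that wins `win_amount` dollars
# with probability 0.5 and loses $100 otherwise.
def expected_value(win_amount, lose_amount=100, p_win=0.5):
    return p_win * win_amount - (1 - p_win) * lose_amount

# The $125-vs-$100 toss is worth +$12.50 on average, yet most people refuse it.
print(expected_value(125))  # 12.5
# Only around a $200 prize, worth +$50 on average, do most people accept.
print(expected_value(200))  # 50.0
```

In other words, people typically demand roughly twice the potential loss before they’ll gamble – a rough measure of how much more losses loom than gains.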

People are so keen to avoid losing that they’ll make statistically bad decisions in order to avoid it. Take insurance, for example. People will pay way more to insure their belongings than the risk of them being damaged would suggest is reasonable. The more you stand to lose, the greater the amount over the odds you’ll pay to not lose it.

Risk in mental health

So how does all this tie in to mental health? Well, psychiatrists judge risk all the time. At least we think we do.

Let’s take judging the risk of a mental health patient killing themselves or someone else. We often section people because we think the risks of this are so high.

In fact, on paper, the risks of both those outcomes are incredibly small.

In 2009, for example, 84 patients killed themselves during hospital admission or on a period of trial leave, out of a total of approximately 120,000 admissions involving 108,000 people. That’s a suicide rate of around 1 in 1400 admissions. Even if we take the cases in which risk of suicide was the major contributing factor to admission (21%), that’s still only 1 in 285 admissions.

Likewise for homicide, in the whole of 2011 a minuscule total of 18 people who were convicted of homicide had been given a diagnosis of schizophrenia at any point in their life. That’s on a prevalence of schizophrenia of 0.3-0.7% for the entire country, adding up to a risk quoted by one study as 1 in 9090 cases. The same study found that the homicide rate during a first episode of psychosis, often the time during which a patient is most ill and is most likely to be sectioned, was 1 in 629 patients.
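The suicide figures above follow directly from the raw counts. A back-of-the-envelope check (using the numbers as quoted; the small gap between the computed subgroup rate and the quoted “1 in 285” presumably comes down to rounding in the source data):

```python
# Back-of-the-envelope rates from the figures quoted above:
# 84 inpatient suicides across roughly 120,000 admissions in 2009.
suicides = 84
admissions = 120_000
risk_fraction = 0.21  # admissions where suicide risk was the major factor

overall_rate = admissions / suicides                 # about 1 in 1,429
subgroup_rate = (admissions * risk_fraction) / suicides  # about 1 in 300

print(f"Overall: roughly 1 in {overall_rate:.0f} admissions")
print(f"High-risk subgroup: roughly 1 in {subgroup_rate:.0f} admissions")
```

Either way, the order of magnitude is the point: even in the subgroup admitted chiefly because of suicide risk, the overwhelming majority of admissions do not end in suicide.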

Mental shortcuts

So when we deem someone to be “high risk” – and I do this myself – what do we really mean?

If you take Kahneman’s point of view, it means we’ve used our heuristics to cut corners in our decision making and come up with a heavily biased assessment based on how we feel and our active memories, rather than the facts of the matter.

When we see someone in A+E who feels suicidal, or is so psychotic we feel they might hurt someone, what do we think of? Is it the statistics? No.

We use the availability heuristic to conjure up images of previous patients we’ve had, or even just heard of, that have killed themselves or hurt others. Newspaper headlines like The Sun’s “1,200 killed by mental patients” crop up in our heads.

We use the representativeness heuristic to liken the patient in front of us to what we imagine the typical patient who will kill themselves or someone else to look like – and they usually fit the bill because the image is so vague.

We use our affect heuristic to allow our feelings to judge how risky the situation is – and because there can be no worse feeling than making a wrong decision and seeing someone die because of it, we decide that the risk is high.

Finally, we use our aversion to loss to take more precautions than the statistics deem necessary to ensure what’s important – our patient’s safety.

None of these heuristics would cause too much of a problem if we were dealing with small decisions, or dealing with a moderate decision only once. But that’s not what we do; we make big decisions about bringing people into hospital because they’re “high risk” all the time. Potentially hundreds of people, to prevent just one or two bad outcomes.

Why use risk as a basis for treatment?

On this evidence, I don’t see admitting or treating someone purely for reasons of risk to be a reasonable way of practising psychiatry. I certainly don’t see it as a good enough reason for taking away someone’s liberty by sectioning them. If we were any good at forecasting risk there might be a case for it, but as it stands, we’re so poor at predicting bad outcomes that the amount of treating and coercing we do to stop one adverse event is far too large.

Fortunately, almost all the people that are brought into hospital as “high risk” also need to be there for their own health – because they are so unwell. The illness and the risk often come together.

Improving people’s health is a far more rational use for a hospital, and I’m sure that by focusing on treating illnesses, the tiny risks our patients do pose will shrink even further – not that our heuristics would be able to tell.

Being thankful for less thanks

A few weeks ago, I found myself in a discussion with a handful of other psychiatry trainees about the earliest stage of our careers – the first few years after medical school that we all spent in general hospitals, mastering the very basics of doctoring. Taking blood, listening to chests, scurrying to write in the notes during the ward round as the consultant whisked from bed to bed. Though the work was relentless and unforgiving, we remembered it fondly.

The discussion turned to the differences we’d noticed between working as psychiatrists, and working back then, as general medical doctors. One difference, we realised, is that we don’t tend to get thanked as much. Now don’t get me wrong – we certainly weren’t complaining, nor were we insinuating that our patients are thoughtless or unappreciative – they are anything but. In addition, though I find thanks as heartening as anyone, it’s never been one of my reasons for being a doctor. All we were doing was musing on an observation.

The thing is, in general medicine, doctors sometimes find themselves drowning in praise from patients. Stereotypically from little old ladies who can remember healthcare before the days of the NHS, the hefty thanks can be quite disproportionate to the amount of genuine effort the doctor has put in, or the effect they have had. We got thank you cards, chocolates, and presents at Christmas. I once knew a GP who had three cupboards rammed full of whisky from patients.

In psychiatry, this level of adulation is far less common. We seem to lack the aura of assumed benevolence, omniscience and trustworthiness that doctors from other specialities possess.

I pondered why it is that psychiatrists don’t tend to get thanked a tremendous amount in comparison to doctors in other specialities. I could think of a few reasons – and here’s the crux – each of these reasons served as a reminder to me of my real motivations for being a doctor – of things far more important to me than getting the occasional thank you.

Firstly and most obviously, many of our patients are often far too unwell to even consider showing us gratitude. Torn by sadness or engulfed by suspicion and perplexity, the last thing they want to do is thank the mental health professional who is asking them a lot of strange questions and admitting them to an often imposing, unfamiliar hospital. This serves as a constant reminder to me that what my patients go through is not easy; that the suffering of the people I treat is as profound as any I will ever see, and just as worthy of help.

Secondly, the families and friends of our patients, who in other medical specialities are so frequently the ones giving thanks in lieu of the incapacitated patient, are often nowhere to be found – long since harried away by illnesses that are hard enough to understand, let alone cope with in a loved one. After two straight weeks of psychiatric night shifts in A+E last year, I could count on the fingers of one hand the number of people I saw who had brought someone to accompany them on their trip to the hospital. Though many amazing families weather the storm, many more aren’t able to get through it. This reminds me that our patients have been shorn of the social support that most people would take for granted when they fall ill – and that helping rebuild those bridges is an important tool for recovery.

Thirdly – and for me, most powerfully – I think patients are reluctant to thank psychiatrists because some of them have suffered badly at the hands of mental health services in the past. Some of our more coercive and invasive practices feel violating even when performed by caring, thoughtful professionals who have the best interests of the patient at heart – but I’d be naïve to believe that there aren’t impersonal, malignant doctors and nurses out there who make being mentally unwell a nightmare. Having had their illnesses for decades, many of the patients I see will no doubt have been exposed at some point to practice so shameful that to even trust another doctor again would be hard – let alone to feel like thanking one. Recognising that fact, and building that trust back by listening to our patients and facilitating the choice of care they want as far as possible, is a far more worthwhile goal than appreciation.

So, although being thanked for what I do has never been a guiding focus, thinking about why it doesn’t happen so much now I’m a psychiatrist can help me appreciate far more meaningful motivations for doing my job – and hopefully, to do it better.
