Category: Psychotherapy

  • The memory bank

    You’ll have heard about the usual methods for improving memory, like using imagery, chunking and building associations with other memories. If not, Google it and you’ll find millions of websites with the same information.

    The problem with most of these methods is they involve a fair amount of mental effort.

    So here are seven easy ways to boost your memory that are backed up by psychological research. None require you to train hard, spend any money or take illegal drugs. All free, all pretty easy, all natural!

    1. Write about your problems

    To do complex tasks we rely on our ‘working memory’. This is our ability to shuttle information in and out of consciousness and manipulate it. A more efficient working memory contributes to better learning, planning, reasoning and more.

    One way to increase working memory capacity indirectly is through expressive writing. You sit down for 20 minutes a few times a month and write about something traumatic that has happened to you. Yogo and Fujihara (2008) found that it improved working memory after 5 weeks.

    Psychologists aren’t exactly sure why this works, but it does have a measurable effect.

    2. Look at a natural scene

    Nature has a magical effect on us. It’s something we’ve always known, but psychologists are only just getting around to measuring it.

    One of nature’s beneficial effects is improving memory. In one study people who walked around an arboretum did 20% better on a memory test than those who went for a walk around busy streets.

    In fact you don’t even need to leave the house: although the effect isn’t as powerful, just looking at pictures of nature is also beneficial.

    3. Say words aloud

    This is surely the easiest of all methods for improving memory: if you want to remember something in particular from a load of other things, just say it out loud. A study found memory improvements of 10% for words said out loud, or even just mouthed: a relatively small gain, but at a tiny cost.

    4. Meditate (a bit)

    Meditation has been consistently found to improve cognitive functioning, including memory. But meditation takes time, doesn’t it? Long, hard hours of practice? Well, maybe not.

    In one recent study, participants who meditated for just four 20-minute sessions, one per day, saw boosts to their working memory and other cognitive functions.

    5. Predict your performance

    Simply asking ourselves whether or not we’ll remember something has a beneficial effect on memory. This works for both recalling things that have happened in the past and trying to remember to do things in the future.

    When Meier et al. (2011) tested people’s prospective memory (remembering to do something in the future), they found that trying to predict performance was beneficial. On some tasks people’s performance increased by almost 50%.

    6. Use your body to encode memories

    We don’t just think with our minds, we also use our bodies. For example, research has shown that we understand language better if it’s accompanied by gestures.

    We can also use gestures to encode memories. Researchers trying to teach Japanese verbs to English speakers found that gesturing while learning helped encode the memory (Kelly et al., 2009). Participants who used hand gestures which suggested the word were able to recall almost twice as many Japanese words a week later.

    7. Use your body to remember

    Since our bodies are important in encoding a memory, they can also help in retrieving it. Psychologists have found that we recall past episodes better when we are in the same mood or our body is in the same position (Dijkstra et al., 2007).

    This works to a remarkably abstract degree. In one study by Casasanto and Dijkstra (2010), participants were better able to retrieve positive memories when they moved marbles upwards and negative memories when they moved marbles downwards. This seems to be because we associate up with happy and down with sad.

    More effort?

    If all these methods seem a bit lazy, then you can always put in a bit more effort.

    Probably the best way of improving your overall cognitive health is exercise. Studies regularly find that increasing aerobic fitness is particularly good for executive function and working memory.

    Conversely, stay in bed all the time and your working memory gets worse (Lipnicki et al., 2009).

    Take your memory training to the limit and an incredible study by Ericsson et al. (1980) shows what can be achieved. Our typical short-term memory span is about seven items: that’s roughly how many things we can hold in mind at the same time. These researchers, though, increased one person’s memory span to 79 digits after 230 hours of practice, mostly using mnemonic systems.

    Shows what you can do if you put in the hours. That said, I’ll be sticking to a nice walk around the park.

     

  • Meditation and mindfulness

    How Meditation Improves Attention

    The science of meditation and attention, including a beginner’s guide to meditation.


    William James wrote that controlling attention is at “the very root of judgement, character and will”. He also noted that controlling attention is much easier said than done. This is unfortunate because almost every impressive human achievement is, at heart, a feat of attention. Art, science, technology — you name it — someone, somewhere had to concentrate, and concentrate hard.

    Wouldn’t it be fantastic to be able to concentrate without effort? Not to feel the strain of directing attention, just to experience a relaxed, intense, deep focus? So naturally the million dollar question is: how can attention be improved?

    Psychologists are fascinated by the sometimes fantastical claims made for meditation, particularly in its promise of improving attention. It certainly seems intuitively right that meditation should improve attention — after all meditation is essentially concentration practice — but what does the scientific evidence tell us?

    Does meditation improve attention?

    The problem with attention is that it naturally likes to jump around from one thing to another: attention is antsy, it won’t settle — this is not in itself a bad thing, just the way it is. Attention’s fidgety nature can be clearly seen in the phenomenon of ‘binocular rivalry’. If you show one picture to one eye and a different picture to the other eye, attention shuttles between them, wondering which is more interesting.

    A simple lab version of this presents a set of vertical lines to one eye and a set of horizontal lines to the other. What people see is the brain flipping between the horizontal and the vertical lines and occasionally merging them both together, seemingly at random. People usually find it difficult to see either the horizontal or the vertical lines — or even the merged version — for an extended period because attention naturally flicks between them.

    If the binocular rivalry test is a kind of index of the antsy-ness of attention, then those with more focused attention should see fewer changes. So reasoned Carter et al. (2005), who had 76 Tibetan Buddhists in their mountain retreats meditate before taking a binocular rivalry test. They sat, wearing display goggles and staring at the lines, pressing a button each time the dominant view changed between horizontal, vertical and merged. The more button presses, the more times their attention switched.


    In one condition their meditation was ‘compassionate’, thinking about all the suffering in the world, while in the other it was ‘one-point’ meditation, focusing completely on one aspect of their experience, for example the breath going in and out. Although the ‘compassionate’ form of meditation had no effect, the ‘one-point’ meditation reduced the rate of switching in half the participants.

    The results were even more dramatic when the Buddhists carried out the one-point meditation while looking through the goggles. Some of the most experienced monks reported complete image stability: they saw just the horizontal or vertical lines for a full 5 minutes. When compared to people who do not meditate, these results are exceptional.

    Quicker results

    Of course we don’t all have 20 years to pass in a mountain retreat learning how to concentrate, so is there any hope for the rest of us? A recent study by Dr. Amishi Jha and colleagues at the University of Pennsylvania suggests there is (Jha, Krompinger & Baime, 2007). Rather than recruiting people who were already superstar concentrators, they sent people who had not practised meditation before on an 8-week training course in mindfulness-based stress reduction, a type of meditation. This consisted of a series of 3-hour classes, with at least 30 minutes of meditation practice per day.


    These 17 participants were then compared with a further 17 from a control group on a series of attentional measures. The results showed that those who had received training were better at focusing their attention than the control group. This certainly suggests that meditation was improving people’s attention.

    Dr. Jha and colleagues were also interested in how practice beyond beginner level would affect people’s powers of attention. To test this they sent participants who were already meditators on a mindfulness retreat for one month. Afterwards they were given the same series of attention measures and were found to have improved in their reactions to new stimuli. In other words they seemed to have become more receptive.

    Attentional improvements from meditation, though, have been reported after even less than 8 weeks of practice. A study carried out by Yi-Yuan Tang and colleagues gave participants just 20 minutes of instruction every day for five days (Tang et al., 2007). Participants practised a Chinese form of meditation called ‘integrative body-mind training’, which uses similar techniques to other types of meditation. They found that after only this relatively short introduction participants demonstrated improved attention compared to a control group, along with other benefits such as lower levels of stress and higher energy levels.

    There is even evidence that meditation can improve a major limitation of the brain’s attentional system. Attentional blink is the finding that our attention ‘blinks’ for about half a second right after we focus on something. Meditation, however, seems to be able to increase our minds’ attentional bandwidth. Slagter et al. (2007) gave participants 3 months of intensive meditation training and found that afterwards the attentional blink was seriously curtailed. In other words people were capable of processing information more quickly and accurately. Perhaps, then, meditation really can open the doors of perception…


    This research on meditation’s effect on attention is just the tip of the iceberg. Other studies have also suggested that meditation can benefit motivation, cognition and emotional intelligence, and may even sharpen awareness to such an extent that we can control our dreams (Walsh & Shapiro, 2006). And these are just the psychological benefits; there also appear to be considerable physical benefits.

    Beginner’s guide to meditation

    Since it is so beneficial, here is a quick primer on how to meditate. Meditation is like chess: the rules are relatively easy to explain, but the game itself is infinitely complex. And like chess, the names and techniques of meditation are many and varied, but the fundamentals are much the same:

    1. Relax the body and the mind. This can be done through body posture, mental imagery, mantras, music, progressive muscle relaxation, any old trick that works. Take your pick. This step is relatively easy as most of us have some experience of relaxing, even if we don’t get much opportunity.
    2. Be mindful. Bit cryptic this one but it means something like this: don’t pass judgement on your thoughts, let them come and go as they will (and boy will they come and go!) but try to nudge your attention back to its primary aim, whatever that is. Turns out this is quite difficult because we’re used to mentally travelling backwards and forwards while making judgements on everything (e.g. worrying, dreading, anticipating, regretting etc.). The key is to notice in a detached way what’s happening but not to get involved with it. This way of thinking often doesn’t come that naturally.
    3. Concentrate on something. Often meditators concentrate on their breath, the feel of it going in and out, but it could be anything: your feet, a potato, a stone. The breath is handy because we carry it around with us. But whatever it is try to focus all your attention onto it. When your attention wavers, and it will almost immediately, gently bring it back. Don’t chide yourself, be good to yourself, be nice. The act of concentrating on one thing is surprisingly difficult: you will feel the mental burn almost immediately. Experienced practitioners say this eases with practice.
    4. Concentrate on nothing. Most say this can’t be achieved without a lot of practice, so I’ll say no more about it here. Master the basics first.
    5. Zzzzz Zzzzz. That’s not meditating, that’s sleeping.

    This is just a quick introduction but does give you enough to get started. It’s important not to get too caught up in techniques but to remember the main goal: exercising attention by relaxing and focusing on something. Try these things out first, see what happens, then explore further.

    New ways of being

    As William James pointed out, attention is so fundamental to our daily lives that sharpening it up is bound to spill over into many different areas of everyday life. This series of articles on attention shows that when attention goes wrong people are frequently beset by unsettling experiences, but when it goes right we are capable of all sorts of incredible abilities, like multitasking, the cocktail party effect, and even curtailing the attentional blink.

    In fact attention is so fundamental to consciousness that it’s no exaggeration to say that what we pay attention to makes us who we are. Potentially, then, meditation offers a way to remake ourselves, leaving behind damaging or limiting habits and discovering new ways of being.

     

  • Mind-fields

    How to Live With an Unknowable Mind


    We know surprisingly little about our own personalities, attitudes and even self-esteem. How do we live with that?

    How do you imagine your own mind?

    I sometimes picture mine as a difficult and contrary child: the kind that throws a stone at you for no reason and can’t explain itself; that sits silent and miserable at the beach, but at a wedding is determined to scream at the top of its lungs through all the quiet bits.

    One reason minds can be frustrating is that we only have access to part of them, by definition the conscious part. The rest, the unconscious, lies there mysteriously, doing things we don’t understand and often don’t seem to have requested.

    Except we don’t know it’s doing things we haven’t asked it to, because we can’t interrogate it. It turns out that the unconscious is mostly inaccessible (Wilson & Dunn, 2004).

    This is quite a different view of the mind from the one Freud had. He thought you could rummage around and dig things up that would help you understand yourself. Modern theorists, though, see large parts of the mind as being completely closed off. You can’t peer in and see what’s going on; it’s like the proverbial black box.

    The idea that large parts of our minds can’t be accessed is fine for basic processes like movement, seeing or hearing. Generally I’m not bothered how I work out which muscles to contract to pedal my bicycle; neither do I want access to how I perceive sound.

    Other parts would be extremely interesting to know about. Why do certain memories come back to me more strongly than others? How extraverted am I really? Why do I really vote this way rather than that?

    Here are three examples of areas in which our self-knowledge is relatively low:

    1. Personality

    You’d be pretty sure that you could describe your personality to someone else, right? You know how extroverted you are, how conscientious, how optimistic?

    Don’t be so sure.

    When people’s personalities are measured implicitly, i.e. by seeing what they do rather than what they say they do, the correlations are sometimes quite low (e.g. Asendorpf et al., 2002). We seem to know something about our own personalities, but not as much as we’d like to think.

    2. Attitudes

    Just as with personality, people’s conscious and unconscious attitudes diverge.

    We sometimes lie about our attitudes to make ourselves look better, but this is more than that. The difference between our conscious and unconscious attitudes shows up even on subjects where we couldn’t possibly be trying to make ourselves look better (Wilson et al., 2000). Rather, we seem to have unconscious attitudes that consciously we know little about (I’ve written about this previously in ‘Our secret attitude changes’).

    Once again we say we think one thing, but we act in a way that suggests we believe something different.

    3. Self-esteem

    Perhaps this is the oddest one of all. Surely we know how high our own self-esteem is?

    Well, psychologists have used sneaky methods of measuring self-esteem indirectly and then compared them with what we explicitly say. They’ve found only very weak connections between the two (e.g. Spalding & Hardin, 1999). Amazingly some studies find no connection at all.

    It seems almost unbelievable that we aren’t aware of how high our own self-esteem is, but there it is. It’s another serious gap between what we think we know about ourselves and what we actually know.

    Road to self-knowledge

    So, what if we want to get more accurate information about ourselves without submitting to psychological testing?

    It’s not easy because, according to modern theories, there is no way to directly access large parts of the unconscious mind. The only way we can find out is indirectly, by trying to piece it together from the various bits of evidence we do have access to.

    As you can imagine, this is a very hit-and-miss affair, which is part of the reason we find it so difficult to understand ourselves. The result of trying to piece things together is often that we end up worse off than when we started.

    Take the emotions. Studies show that when people try to analyse the reasons for their feelings, they end up feeling less satisfied (Wilson et al., 1993). Focusing too much on negative feelings can make them worse and reduce our ability to find solutions.

    Perhaps the best way to gain self-knowledge is to carefully watch our own thoughts and behaviour. Ultimately what we do is not only how others judge us but also how we should judge ourselves.

    How to live with an unknowable mind

    Taking all this together, here are my rough-draft principles for living with an unknowable mind:

    1. The mind is a tremendous story-teller and will try to make up pleasing stories about your thoughts and behaviour. These aren’t necessarily true.
    2. Using introspection you can’t always (ever?) know what you really think or who you really are.
    3. Using introspection to work out what you are or what you think can be damaging, encouraging rumination and depressive thoughts.
    4. This isn’t depressing, it’s liberating: now you know it’s perfectly normal not to understand some/most aspects of yourself, you can relax.
    5. If you must push for greater self-knowledge, try to become a better observer of your own thoughts and behaviour. Notice what you do and when, then try to infer the why. But don’t push it; always remember points 1-4.

     

  • Taming the self

    Top 10 Self-Control Strategies


    The science of self-control: use rewards, commitments, self-affirmation, adjust values, fight the unconscious and more…

    Self-control is vital to our success.

    People who have good self-control tend to be both more popular and more successful in many areas of life. Those with low self-control, though, are at risk of overeating, addictions and underachievement.

    Unfortunately, as we all know to our cost, self-control frequently fails. Part of the problem is we overestimate our ability to resist temptation (Nordgren et al., 2009).

    Self-control can be built up, like a muscle (Baumeister et al., 2006). But you need to do the right types of mental exercises. So, here are ten techniques to boost your self-control that are based on psychological research.

    1. Respect low ego

    Research has found that self-control is a limited resource (Vohs et al., 2000). Exercising it has clear physiological effects, like lower glucose levels (Gailliot et al., 2007).

    At any one time we only have so much self-control in the tank. When you’ve been tightly controlling yourself, the tank is low and you become more likely to give in to temptation. Psychologists call this ‘ego-depletion’.

    Recognise when your levels of self-control are low and make sure you find a way to avoid temptation during those times. The first step to greater self-control is acknowledging when you’re at your weakest.

    2. Pre-commit

    Make the decision before you’re in the tempting situation. Pre-committing yourself to difficult goals can lead to increased performance. In one study by Ariely and Wertenbroch (2002), students who imposed strict deadlines on themselves performed better than those who didn’t.

    Only take a limited amount of money with you to curtail spending, or only have healthy foods at home to avoid the temptation to go astray.

    It’s difficult to pre-commit because normally we like to leave our options open. But if you’re harsh on your future self, you’re less likely to regret it.

    3. Use rewards

    Rewards can really help strengthen self-control. Trope and Fishbach (2000) found that participants were better able to make short-term sacrifices for long-term gains when they had a self-imposed reward in mind. So setting our own rewards does work.

    4. …and penalties

    Just like the carrot, the stick also works. Not only should we promise ourselves a reward for good behaviour, we should also give ourselves a penalty for bad behaviour.

    When Trope and Fishbach (2000) tested self-imposed penalties experimentally, they found the threat of punishment encouraged people to act in service of their long-term goals.

    5. Fight the unconscious

    Part of the reason we’re easily led into temptation is that our unconscious is always ready to undermine our best intentions.

    Fishbach et al. (2003) found that participants were easily tempted outside their conscious awareness by the mere suggestions of temptation. On the other hand, the same was also true of goals. When goals were unconsciously triggered, participants turned towards their higher-order goals.

    The practical upshot is simple. Try to keep away from temptations—both physically and mentally—and stay close to things that promote your goals. Each unconsciously activates the associated behaviour.

    6. Adjust expectations

    Even if it doesn’t come naturally, try to be optimistic about your ability to avoid temptations.

    Studies like Zhang and Fishbach (2010) suggest that being optimistic about avoiding temptation and reaching goals can be beneficial. Participants who were optimistic stuck at their task longer than those who had been asked to make accurate predictions about reaching a goal.

    Allow yourself to overestimate how easy it will be to reach your goal. As long as it doesn’t spill over into fantasy-land, being fuzzy on the tricky bits can motivate.

    7. Adjust values

    Just as you can try to think more optimistically, you can also change how you value both goals and temptations. Research suggests that devaluing temptations and increasing the value of goals increases performance (Fishbach et al., 2009).

    When we value our goal more we automatically orient ourselves towards it. In the same way devaluing temptations helps us automatically avoid them.

    8. Use your heart

    The heart often rules the head, so use your emotions to increase self-control.

    In one study children were able to resist eating marshmallows by thinking of them as ‘white clouds’ (Mischel & Baker, 1975). This is one way of avoiding temptations: by cooling down the emotions associated with them.

    You can increase the pull towards your goal in the same way: think about the positive emotional aspects of achieving it; say, the pride, or excitement.

    9. Self-affirmation

    Sometimes exercising self-control means avoiding a bad habit. One way of doing this is by using self-affirmations. This means reaffirming the core things you believe in. This could be family, creativity or anything really, as long as it’s a core belief of yours.

    When participants in one study did this, their self-control was replenished (the study is described in ‘Self-affirmation in self-control’). Thinking about core values can help top up your self-control when it’s been depleted.

    10. Think abstract

    Part of the reason self-affirmations work is that they make us think in the abstract. And abstract thinking has been shown to boost self-control.

    Fujita et al. (2006) found that people thinking in the abstract (versus the concrete) were more likely to avoid temptation and better able to persist at difficult tasks.

    We are more likely to think in the abstract if we think about why we’re doing something, rather than just how we’re doing it.

    Another good reason not to give in…

    There’s a comforting thought that if we give in to temptation just this once, we’ll come back stronger afterwards.

    However psychological research suggests this isn’t true. Students who had a good (versus mediocre) break from studying to ‘replenish’ themselves didn’t show increased motivation when they returned (Converse & Fishbach, 2008, described in Fishbach et al., 2010).

    If all else fails, know that giving in won’t bring you back stronger. Worse, giving in to temptation may well just increase your tendency to crumble again in the future.

     

  • Not so sleeping beauty

    6 Easy Steps to Falling Asleep Fast


    Psychological research over three decades demonstrates the power of Stimulus Control Therapy.

    Can’t get a good night’s sleep? You’re not alone. In surveys of what would improve people’s lives, a good night’s sleep frequently comes near the top of the list.

    Poor sleep results in worse cognitive performance, including degraded memory, attention and alertness. In the long term, insomnia is also associated with anxiety and depression. And people’s sleep gets worse as they get older: after the age of 65, between 12% and 40% of people have insomnia.

    All sorts of methods have been tried to combat poor sleep, from drugs through psychological remedies to more outlandish treatments.

    The problem with drugs is that they have side-effects and are often addictive. The problem with the more outlandish treatments is that although they tend not to have side-effects, we don’t know if they have any effect at all. Psychological remedies, though, combine the best of both worlds: studies show they work without side-effects.

    Stimulus Control Therapy

    Professor Richard R. Bootzin has been researching sleep disorders for many years at the University of Arizona Sleep Research Lab. Writing in the Annual Review of Clinical Psychology, he describes the different psychological approaches that have been used to treat insomnia (Bootzin & Epstein, 2011).

    Of these, the most successful single intervention is called Stimulus Control Therapy (Morin et al., 2006). You’ll be happy to hear it consists of six very straightforward steps which, if you follow them, should improve your sleep. After the list I’ll explain the thinking behind them. First, here are the six steps:

    1. Lie down to go to sleep only when you are sleepy.
    2. Do not use your bed for anything except sleep; that is, do not read, watch television, eat, or worry in bed. Sexual activity is the only exception to this rule. On such occasions, the instructions are to be followed afterwards, when you intend to go to sleep.
    3. If you find yourself unable to fall asleep, get up and go into another room. Stay up as long as you wish and then return to the bedroom to sleep. Although we do not want you to watch the clock, we want you to get out of bed if you do not fall asleep immediately. Remember the goal is to associate your bed with falling asleep quickly! If you are in bed more than about 10 minutes without falling asleep and have not gotten up, you are not following this instruction.
    4. If you still cannot fall asleep, repeat step 3. Do this as often as is necessary throughout the night.
    5. Set your alarm and get up at the same time every morning irrespective of how much sleep you got during the night. This will help your body acquire a consistent sleep rhythm.
    6. Do not nap during the day.

    Why it works

    This method is based on the idea that we are like Pavlov’s drooling dog. We attach certain stimuli in the environment to certain thoughts and behaviours. Famously Pavlov’s dogs would start drooling when a bell rang, because they associated hearing the bell with getting food. Eventually the dogs would drool at the sound of the bell even when they didn’t get any food. Replace the bell with a bed and food with sleep and conceptually you’re there.

    If we learn to do all kinds of things in bed that aren’t sleep, then when we do want to use it for sleep, it’s harder because of those other associations.

    This is just as true of thoughts as it is of actions. It’s important to avoid watching TV in bed, but it’s also important to avoid lying in bed worrying about not being able to get to sleep. Because then you learn to associate bed with worry. Worse, you suffer anticipatory anxiety: anxiety about the anxiety you’ll feel when you are trying to get to sleep.

    So, this therapy works by strengthening the association between bed and sleep and weakening the association between bed and everything else (apart from sex!).

    Other treatments supported by the research are progressive muscle relaxation, which is exactly what it sounds like, and paradoxical intention. This latter technique involves stopping people trying so hard to get to sleep. The paradox is that when people stop trying so hard, they find it easier to fall asleep.

    All this assumes you don’t live next door to a late night drummer and you’re not downing a double espresso before hitting the sack, but those sorts of things are pretty obvious. Everything else being equal, though, Stimulus Control Therapy seems the easiest for most people to implement.

     

  • Sleepless in Cape Town

    Outside the wind is howling and the rain lashes down; my eyes spring open and my mind begins to roam. At first it goes to the poor living in the informal settlement about a kilometer from my warm home; it peeks inside their leaking shacks and watches as the occupants huddle beneath their black plastic refuse bags, trying to escape the freezing rain. My mind then unpacks the previous day like an airport customs agent looking for contraband, rifling through my mental pockets for sharp objects, relational conflict, things left undone: all grist to the mind’s mill. In an effort to repair any inconsistency it finds, my mind then begins to construct a to-do list for the following day, taking mental notes, commenting inanely like an aging monarch waving at the passing crowd. Somewhere, buried deep in my consciousness, is an awareness that my mind shouldn’t be doing this at 2:37 a.m.; the recognition flickers and is then subsumed by the next pale thought.

    Why do I continually do this to myself? I went looking for answers (at 2:53 a.m.).

    Current research about insomnia falls into different categories, for example:

    Psychophysiologic Insomnia:

    In many cases, it is unclear if chronic insomnia is a symptom of some physical or psychological condition or if it is a primary disorder of its own. In most instances, a mix of psychological and physical conditions appears to cause insomnia.

    Psychophysiologic insomnia occurs when:

    Transient insomnia disrupts the person’s circadian rhythm. The poor sod then begins to associate bed not with rest and relaxation but with a struggle to sleep. A pattern of sleep failure emerges. Over time this repeats, and bedtime becomes a source of anxiety. Once in bed, the now harrowed individual broods over the inability to sleep (“but I was tired when I went to bed!”), the consequences of sleep loss (“and I have 10 clients tomorrow!”), and the lack of mental control (“OHM… AUUUMMM… BUGGER!!… OOOHHHMMM”). All attempts at sleep fail (“F*&%$%… OHM… BUGGER!”).

    Eventually excessive worry about sleep loss becomes persistent and provides an automatic nightly trigger for anxiety and arousal. Unsuccessful attempts to control thoughts, images, and emotions only worsen the situation. After such a cycle is established, insomnia becomes a self-fulfilling prophecy that can persist indefinitely.

    Medical Conditions and Their Treatments

    Among the many medical problems that can cause chronic insomnia are allergies, benign prostatic hyperplasia (BPH), arthritis, cancer, heart disease, gastroesophageal reflux disease (GERD), hypertension, asthma, emphysema, rheumatologic conditions, Alzheimer’s disease, Parkinson’s disease, hyperthyroidism, epilepsy, and fibromyalgia. Other types of sleep disorders, such as restless legs syndrome and sleep apnea, can cause insomnia. Many patients with chronic pain also sleep poorly.

    Medications. Among the many medications that can cause insomnia are antidepressants (fluoxetine, bupropion), theophylline, lamotrigine, felbamate, beta-blockers, and beta-agonists.

    Substance Abuse

    About 10 – 15% of chronic insomnia cases result from substance abuse, especially alcohol, cocaine, and sedatives. One or two drinks at dinner, for most people, pose little danger of alcoholism and may help reduce stress and initiate sleep. Excess alcohol or alcohol used to promote sleep (normally >3 glasses) tends to fragment sleep and cause wakefulness a few hours later. It also increases the risk for other sleep disorders, including sleep apnea and restless legs. Alcoholics often suffer insomnia during withdrawal and, in some cases, for several years during recovery.

    Ok, so I am currently not an alcoholic in recovery, do not have Alzheimer’s (to my knowledge), have not got a nostril full of cocaine, nor restless legs syndrome… hmmm, could it be that I am just a little anxious about an academic paper I have to write and have been avoiding at all costs (including my sleep)? Probably, so I think I’ll avoid it for a moment longer and go and make myself a warm cuppa.

  • Snake oils and sanity…

    The Epidemic of Mental Illness: Why?

    JUNE 23, 2011

    Marcia Angell

    An advertisement for Prozac, from The American Journal of Psychiatry, 1995

    It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007—from one in 184 Americans to one in seventy-six. For children, the rise is even more startling—a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.

    A large survey of randomly selected adults, sponsored by the National Institute of Mental Health (NIMH) and conducted between 2001 and 2003, found that an astonishing 46 percent met criteria established by the American Psychiatric Association (APA) for having had at least one mental illness within four broad categories at some time in their lives. The categories were “anxiety disorders,” including, among other subcategories, phobias and post-traumatic stress disorder (PTSD); “mood disorders,” including major depression and bipolar disorders; “impulse-control disorders,” including various behavioral problems and attention-deficit/hyperactivity disorder (ADHD); and “substance use disorders,” including alcohol and drug abuse. Most met criteria for more than one diagnosis. Of a subgroup affected within the previous year, a third were under treatment—up from a fifth in a similar survey ten years earlier.

    Nowadays treatment by medical doctors nearly always means psychoactive drugs, that is, drugs that affect the mental state. In fact, most psychiatrists treat only with drugs, and refer patients to psychologists or social workers if they believe psychotherapy is also warranted. The shift from “talk therapy” to drugs as the dominant mode of treatment coincides with the emergence over the past four decades of the theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs. That theory became broadly accepted, by the media and the public as well as by the medical profession, after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants. The increased use of drugs to treat psychosis is even more dramatic. The new generation of antipsychotics, such as Risperdal, Zyprexa, and Seroquel, has replaced cholesterol-lowering agents as the top-selling class of drugs in the US.

    What is going on here? Is the prevalence of mental illness really that high and still climbing? Particularly if these disorders are biologically determined and not a result of environmental influences, is it plausible to suppose that such an increase is real? Or are we learning to recognize and diagnose mental disorders that were always there? On the other hand, are we simply expanding the criteria for mental illness so that nearly everyone has one? And what about the drugs that are now the mainstay of treatment? Do they work? If they do, shouldn’t we expect the prevalence of mental illness to be declining, not rising?

    These are the questions, among others, that concern the authors of the three provocative books under review here. They come at the questions from different backgrounds—Irving Kirsch is a psychologist at the University of Hull in the UK, Robert Whitaker a journalist and previously the author of a history of the treatment of mental illness called Mad in America (2001), and Daniel Carlat a psychiatrist who practices in a Boston suburb and publishes a newsletter and blog about his profession.

    The authors emphasize different aspects of the epidemic of mental illness. Kirsch is concerned with whether antidepressants work. Whitaker, who has written an angrier book, takes on the entire spectrum of mental illness and asks whether psychoactive drugs create worse problems than they solve. Carlat, who writes more in sorrow than in anger, looks mainly at how his profession has allied itself with, and is manipulated by, the pharmaceutical industry. But despite their differences, all three are in remarkable agreement on some important matters, and they have documented their views well.

    First, they agree on the disturbing extent to which the companies that sell psychoactive drugs—through various forms of marketing, both legal and illegal, and what many people would describe as bribery—have come to determine what constitutes a mental illness and how the disorders should be diagnosed and treated. This is a subject to which I’ll return.

    Second, none of the three authors subscribes to the popular theory that mental illness is caused by a chemical imbalance in the brain. As Whitaker tells the story, that theory had its genesis shortly after psychoactive drugs were introduced in the 1950s. The first was Thorazine (chlorpromazine), which was launched in 1954 as a “major tranquilizer” and quickly found widespread use in mental hospitals to calm psychotic patients, mainly those with schizophrenia. Thorazine was followed the next year by Miltown (meprobamate), sold as a “minor tranquilizer” to treat anxiety in outpatients. And in 1957, Marsilid (iproniazid) came on the market as a “psychic energizer” to treat depression.

    In the space of three short years, then, drugs had become available to treat what at that time were regarded as the three major categories of mental illness—psychosis, anxiety, and depression—and the face of psychiatry was totally transformed. These drugs, however, had not initially been developed to treat mental illness. They had been derived from drugs meant to treat infections, and were found only serendipitously to alter the mental state. At first, no one had any idea how they worked. They simply blunted disturbing mental symptoms. But over the next decade, researchers found that these drugs, and the newer psychoactive drugs that quickly followed, affected the levels of certain chemicals in the brain.

    Some brief—and necessarily quite simplified—background: the brain contains billions of nerve cells, called neurons, arrayed in immensely complicated networks and communicating with one another constantly. The typical neuron has multiple filamentous extensions, one called an axon and the others called dendrites, through which it sends and receives signals from other neurons. For one neuron to communicate with another, however, the signal must be transmitted across the tiny space separating them, called a synapse. To accomplish that, the axon of the sending neuron releases a chemical, called a neurotransmitter, into the synapse. The neurotransmitter crosses the synapse and attaches to receptors on the second neuron, often a dendrite, thereby activating or inhibiting the receiving cell. Axons have multiple terminals, so each neuron has multiple synapses. Afterward, the neurotransmitter is either reabsorbed by the first neuron or metabolized by enzymes so that the status quo ante is restored. There are exceptions and variations to this story, but that is the usual way neurons communicate with one another.

    When it was found that psychoactive drugs affect neurotransmitter levels in the brain, as evidenced mainly by the levels of their breakdown products in the spinal fluid, the theory arose that the cause of mental illness is an abnormality in the brain’s concentration of these chemicals that is specifically countered by the appropriate drug. For example, because Thorazine was found to lower dopamine levels in the brain, it was postulated that psychoses like schizophrenia are caused by too much dopamine. Or later, because certain antidepressants increase levels of the neurotransmitter serotonin in the brain, it was postulated that depression is caused by too little serotonin. (These antidepressants, like Prozac or Celexa, are called selective serotonin reuptake inhibitors (SSRIs) because they prevent the reabsorption of serotonin by the neurons that release it, so that more remains in the synapses to activate other neurons.) Thus, instead of developing a drug to treat an abnormality, an abnormality was postulated to fit a drug.

    That was a great leap in logic, as all three authors point out. It was entirely possible that drugs that affected neurotransmitter levels could relieve symptoms even if neurotransmitters had nothing to do with the illness in the first place (and even possible that they relieved symptoms through some other mode of action entirely). As Carlat puts it, “By this same logic one could argue that the cause of all pain conditions is a deficiency of opiates, since narcotic pain medications activate opiate receptors in the brain.” Or similarly, one could argue that fevers are caused by too little aspirin.

    But the main problem with the theory is that after decades of trying to prove it, researchers have still come up empty-handed. All three authors document the failure of scientists to find good evidence in its favor. Neurotransmitter function seems to be normal in people with mental illness before treatment. In Whitaker’s words:

    Prior to treatment, patients diagnosed with schizophrenia, depression, and other psychiatric disorders do not suffer from any known “chemical imbalance.” However, once a person is put on a psychiatric medication, which, in one manner or another, throws a wrench into the usual mechanics of a neuronal pathway, his or her brain begins to function…abnormally.

    Carlat refers to the chemical imbalance theory as a “myth” (which he calls “convenient” because it destigmatizes mental illness), and Kirsch, whose book focuses on depression, sums up this way: “It now seems beyond question that the traditional account of depression as a chemical imbalance in the brain is simply wrong.” Why the theory persists despite the lack of evidence is a subject I’ll come to.

    Do the drugs work? After all, regardless of the theory, that is the practical question. In his spare, remarkably engrossing book, The Emperor’s New Drugs, Kirsch describes his fifteen-year scientific quest to answer that question about antidepressants. When he began his work in 1995, his main interest was in the effects of placebos. To study them, he and a colleague reviewed thirty-eight published clinical trials that compared various treatments for depression with placebos, or compared psychotherapy with no treatment. Most such trials last for six to eight weeks, and during that time, patients tend to improve somewhat even without any treatment. But Kirsch found that placebos were three times as effective as no treatment. That didn’t particularly surprise him. What did surprise him was the fact that antidepressants were only marginally better than placebos. As judged by scales used to measure depression, placebos were 75 percent as effective as antidepressants. Kirsch then decided to repeat his study by examining a more complete and standardized data set.

    The data he used were obtained from the US Food and Drug Administration (FDA) instead of the published literature. When drug companies seek approval from the FDA to market a new drug, they must submit to the agency all clinical trials they have sponsored. The trials are usually double-blind and placebo-controlled, that is, the participating patients are randomly assigned to either drug or placebo, and neither they nor their doctors know which they have been assigned. The patients are told only that they will receive an active drug or a placebo, and they are also told of any side effects they might experience. If two trials show that the drug is more effective than a placebo, the drug is generally approved. But companies may sponsor as many trials as they like, most of which could be negative—that is, fail to show effectiveness. All they need is two positive ones. (The results of trials of the same drug can differ for many reasons, including the way the trial is designed and conducted, its size, and the types of patients studied.)


    For obvious reasons, drug companies make very sure that their positive studies are published in medical journals and doctors know about them, while the negative ones often languish unseen within the FDA, which regards them as proprietary and therefore confidential. This practice greatly biases the medical literature, medical education, and treatment decisions.

    Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. This was a better data set than the one used in his previous study, not only because it included negative studies but because the FDA sets uniform quality standards for the trials it reviews and not all of the published research in Kirsch’s earlier study had been submitted to the FDA as part of a drug approval application.

    Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.

    Kirsch was also struck by another unexpected finding. In his earlier study and in work by others, he observed that even treatments that were not considered to be antidepressants—such as synthetic thyroid hormone, opiates, sedatives, stimulants, and some herbal remedies—were as effective as antidepressants in alleviating the symptoms of depression. Kirsch writes, “When administered as antidepressants, drugs that increase, decrease or have no effect on serotonin all relieve depression to about the same degree.” What all these “effective” drugs had in common was that they produced side effects, which participating patients had been told they might experience.

    It is important that clinical trials, particularly those dealing with subjective conditions like depression, remain double-blind, with neither patients nor doctors knowing whether or not they are getting a placebo. That prevents both patients and doctors from imagining improvements that are not there, something that is more likely if they believe the agent being administered is an active drug instead of a placebo. Faced with his findings that nearly any pill with side effects was slightly more effective in treating depression than an inert placebo, Kirsch speculated that the presence of side effects in individuals receiving drugs enabled them to guess correctly that they were getting active treatment—and this was borne out by interviews with patients and doctors—which made them more likely to report improvement. He suggests that the reason antidepressants appear to work better in relieving severe depression than in less severe cases is that patients with severe symptoms are likely to be on higher doses and therefore experience more side effects.

    To further investigate whether side effects bias responses, Kirsch looked at some trials that employed “active” placebos instead of inert ones. An active placebo is one that itself produces side effects, such as atropine—a drug that selectively blocks the action of certain types of nerve fibers. Although not an antidepressant, atropine causes, among other things, a noticeably dry mouth. In trials using atropine as the placebo, there was no difference between the antidepressant and the active placebo. Everyone had side effects of one type or another, and everyone reported the same level of improvement. Kirsch reported a number of other odd findings in clinical trials of antidepressants, including the fact that there is no dose-response curve—that is, high doses worked no better than low ones—which is extremely unlikely for truly effective drugs. “Putting all this together,” writes Kirsch,

    leads to the conclusion that the relatively small difference between drugs and placebos might not be a real drug effect at all. Instead, it might be an enhanced placebo effect, produced by the fact that some patients have broken [the] blind and have come to realize whether they were given drug or placebo. If this is the case, then there is no real antidepressant drug effect at all. Rather than comparing placebo to drug, we have been comparing “regular” placebos to “extra-strength” placebos.

    That is a startling conclusion that flies in the face of widely accepted medical opinion, but Kirsch reaches it in a careful, logical way. Psychiatrists who use antidepressants—and that’s most of them—and patients who take them might insist that they know from clinical experience that the drugs work. But anecdotes are known to be a treacherous way to evaluate medical treatments, since they are so subject to bias; they can suggest hypotheses to be studied, but they cannot prove them. That is why the development of the double-blind, randomized, placebo-controlled clinical trial in the middle of the past century was such an important advance in medical science. Anecdotes about leeches or laetrile or megadoses of vitamin C, or any number of other popular treatments, could not stand up to the scrutiny of well-designed trials. Kirsch is a faithful proponent of the scientific method, and his voice therefore brings a welcome objectivity to a subject often swayed by anecdotes, emotions, or, as we will see, self-interest.

    Whitaker’s book is broader and more polemical. He considers all mental illness, not just depression. Whereas Kirsch concludes that antidepressants are probably no more effective than placebos, Whitaker concludes that they and most of the other psychoactive drugs are not only ineffective but harmful. He begins by observing that even as drug treatment for mental illness has skyrocketed, so has the prevalence of the conditions treated:

    The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate. Thus we arrive at an obvious question, even though it is heretical in kind: Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?

    Moreover, Whitaker contends, the natural history of mental illness has changed. Whereas conditions such as schizophrenia and depression were once mainly self-limited or episodic, with each episode usually lasting no more than six months and interspersed with long periods of normalcy, the conditions are now chronic and lifelong. Whitaker believes that this might be because drugs, even those that relieve symptoms in the short term, cause long-term mental harms that continue after the underlying illness would have naturally resolved.

    The evidence he marshals for this theory varies in quality. He doesn’t sufficiently acknowledge the difficulty of studying the natural history of any illness over a fifty-some-year time span during which many circumstances have changed, in addition to drug use. It is even more difficult to compare long-term outcomes in treated versus untreated patients, since treatment may be more likely in those with more severe disease at the outset. Nevertheless, Whitaker’s evidence is suggestive, if not conclusive.

    If psychoactive drugs do cause harm, as Whitaker contends, what is the mechanism? The answer, he believes, lies in their effects on neurotransmitters. It is well understood that psychoactive drugs disturb neurotransmitter function, even if that was not the cause of the illness in the first place. Whitaker describes a chain of effects. When, for example, an SSRI antidepressant like Celexa increases serotonin levels in synapses, it stimulates compensatory changes through a process called negative feedback. In response to the high levels of serotonin, the neurons that secrete it (presynaptic neurons) release less of it, and the postsynaptic neurons become desensitized to it. In effect, the brain is trying to nullify the drug’s effects. The same is true for drugs that block neurotransmitters, except in reverse. For example, most antipsychotic drugs block dopamine, but the presynaptic neurons compensate by releasing more of it, and the postsynaptic neurons take it up more avidly. (This explanation is necessarily oversimplified, since many psychoactive drugs affect more than one of the many neurotransmitters.)

    With long-term use of psychoactive drugs, the result is, in the words of Steven Hyman, a former director of the NIMH and until recently provost of Harvard University, “substantial and long-lasting alterations in neural function.” As quoted by Whitaker, the brain, Hyman wrote, begins to function in a manner “qualitatively as well as quantitatively different from the normal state.” After several weeks on psychoactive drugs, the brain’s compensatory efforts begin to fail, and side effects emerge that reflect the mechanism of action of the drugs. For example, the SSRIs may cause episodes of mania, because of the excess of serotonin. Antipsychotics cause side effects that resemble Parkinson’s disease, because of the depletion of dopamine (which is also depleted in Parkinson’s disease). As side effects emerge, they are often treated by other drugs, and many patients end up on a cocktail of psychoactive drugs prescribed for a cocktail of diagnoses. The episodes of mania caused by antidepressants may lead to a new diagnosis of “bipolar disorder” and treatment with a “mood stabilizer,” such as Depakote (an anticonvulsant) plus one of the newer antipsychotic drugs. And so on.

    Some patients take as many as six psychoactive drugs daily. One well-respected researcher, Nancy Andreasen, and her colleagues published evidence that the use of antipsychotic drugs is associated with shrinkage of the brain, and that the effect is directly related to the dose and duration of treatment. As Andreasen explained to The New York Times, “The prefrontal cortex doesn’t get the input it needs and is being shut down by drugs. That reduces the psychotic symptoms. It also causes the prefrontal cortex to slowly atrophy.”*

    Getting off the drugs is exceedingly difficult, according to Whitaker, because when they are withdrawn the compensatory mechanisms are left unopposed. When Celexa is withdrawn, serotonin levels fall precipitously because the presynaptic neurons are not releasing normal amounts and the postsynaptic neurons no longer have enough receptors for it. Similarly, when an antipsychotic is withdrawn, dopamine levels may skyrocket. The symptoms produced by withdrawing psychoactive drugs are often confused with relapses of the original disorder, which can lead psychiatrists to resume drug treatment, perhaps at higher doses.

    Unlike the cool Kirsch, Whitaker is outraged by what he sees as an iatrogenic (i.e., inadvertent and medically introduced) epidemic of brain dysfunction, particularly that caused by the widespread use of the newer (“atypical”) antipsychotics, such as Zyprexa, which cause serious side effects. Here is what he calls his “quick thought experiment”:

    Imagine that a virus suddenly appears in our society that makes people sleep twelve, fourteen hours a day. Those infected with it move about somewhat slowly and seem emotionally disengaged. Many gain huge amounts of weight—twenty, forty, sixty, and even one hundred pounds. Often their blood sugar levels soar, and so do their cholesterol levels. A number of those struck by the mysterious illness—including young children and teenagers—become diabetic in fairly short order…. The federal government gives hundreds of millions of dollars to scientists at the best universities to decipher the inner workings of this virus, and they report that the reason it causes such global dysfunction is that it blocks a multitude of neurotransmitter receptors in the brain—dopaminergic, serotonergic, muscarinic, adrenergic, and histaminergic. All of those neuronal pathways in the brain are compromised. Meanwhile, MRI studies find that over a period of several years, the virus shrinks the cerebral cortex, and this shrinkage is tied to cognitive decline. A terrified public clamors for a cure.

    Now such an illness has in fact hit millions of American children and adults. We have just described the effects of Eli Lilly’s best-selling antipsychotic, Zyprexa.

    If psychoactive drugs are useless, as Kirsch believes about antidepressants, or worse than useless, as Whitaker believes, why are they so widely prescribed by psychiatrists and regarded by the public and the profession as something akin to wonder drugs? Why is the current against which Kirsch and Whitaker and, as we will see, Carlat are swimming so powerful? I discuss these questions in Part II of this review.

    —This is the first part of a two-part article.

    1. *See Claudia Dreifus, “Using Imaging to Look at Changes in the Brain,” The New York Times, September 15, 2008.

     

  • Smoke and mirrors…

    The Illusions of Psychiatry

    JULY 14, 2011

    Marcia Angell


    The Emperor’s New Drugs: Exploding the Antidepressant Myth
    by Irving Kirsch
    Basic Books, 226 pp., $15.99 (paper)

    Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America
    by Robert Whitaker
    Crown, 404 pp., $26.00

    Unhinged: The Trouble with Psychiatry—A Doctor’s Revelations About a Profession in Crisis
    by Daniel Carlat
    Free Press, 256 pp., $25.00

    Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR)
    by American Psychiatric Association
    American Psychiatric Publishing, 992 pp., $135.00; $115.00 (paper)

    United Artists/Photofest

    Mimi Sarkisian, Louise Fletcher, and Jack Nicholson in One Flew Over the Cuckoo’s Nest, 1975

    In my article in the last issue, I focused mainly on the recent books by psychologist Irving Kirsch and journalist Robert Whitaker, and what they tell us about the epidemic of mental illness and the drugs used to treat it.1 Here I discuss the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM)—often referred to as the bible of psychiatry, and now heading for its fifth edition—and its extraordinary influence within American society. I also examine Unhinged, the recent book by Daniel Carlat, a psychiatrist, who provides a disillusioned insider’s view of the psychiatric profession. And I discuss the widespread use of psychoactive drugs in children, and the baleful influence of the pharmaceutical industry on the practice of psychiatry.

    One of the leaders of modern psychiatry, Leon Eisenberg, a professor at Johns Hopkins and then Harvard Medical School, who was among the first to study the effects of stimulants on attention deficit disorder in children, wrote that American psychiatry in the late twentieth century moved from a state of “brainlessness” to one of “mindlessness.”2 By that he meant that before psychoactive drugs (drugs that affect the mental state) were introduced, the profession had little interest in neurotransmitters or any other aspect of the physical brain. Instead, it subscribed to the Freudian view that mental illness had its roots in unconscious conflicts, usually originating in childhood, that affected the mind as though it were separate from the brain.

    But with the introduction of psychoactive drugs in the 1950s, and sharply accelerating in the 1980s, the focus shifted to the brain. Psychiatrists began to refer to themselves as psychopharmacologists, and they had less and less interest in exploring the life stories of their patients. Their main concern was to eliminate or reduce symptoms by treating sufferers with drugs that would alter brain function. An early advocate of this biological model of mental illness, Eisenberg in his later years became an outspoken critic of what he saw as the indiscriminate use of psychoactive drugs, driven largely by the machinations of the pharmaceutical industry.

    When psychoactive drugs were first introduced, there was a brief period of optimism in the psychiatric profession, but by the 1970s, optimism gave way to a sense of threat. Serious side effects of the drugs were becoming apparent, and an antipsychiatry movement had taken root, as exemplified by the writings of Thomas Szasz and the movie One Flew Over the Cuckoo’s Nest. There was also growing competition for patients from psychologists and social workers. In addition, psychiatrists were plagued by internal divisions: some embraced the new biological model, some still clung to the Freudian model, and a few saw mental illness as an essentially sane response to an insane world. Moreover, within the larger medical profession, psychiatrists were regarded as something like poor relations; even with their new drugs, they were seen as less scientific than other specialists, and their income was generally lower.

    In the late 1970s, the psychiatric profession struck back—hard. As Robert Whitaker tells it in Anatomy of an Epidemic, the medical director of the American Psychiatric Association (APA), Melvin Sabshin, declared in 1977 that “a vigorous effort to remedicalize psychiatry should be strongly supported,” and he launched an all-out media and public relations campaign to do exactly that. Psychiatry had a powerful weapon that its competitors lacked. Since psychiatrists must qualify as MDs, they have the legal authority to write prescriptions. By fully embracing the biological model of mental illness and the use of psychoactive drugs to treat it, psychiatry was able to relegate other mental health care providers to ancillary positions and also to identify itself as a scientific discipline along with the rest of the medical profession. Most important, by emphasizing drug treatment, psychiatry became the darling of the pharmaceutical industry, which soon made its gratitude tangible.

    These efforts to enhance the status of psychiatry were undertaken deliberately. The APA was then working on the third edition of the DSM, which provides diagnostic criteria for all mental disorders. The president of the APA had appointed Robert Spitzer, a much-admired professor of psychiatry at Columbia University, to head the task force overseeing the project. The first two editions, published in 1952 and 1968, reflected the Freudian view of mental illness and were little known outside the profession. Spitzer set out to make the DSM-III something quite different. He promised that it would be “a defense of the medical model as applied to psychiatric problems,” and the president of the APA in 1977, Jack Weinberg, said it would “clarify to anyone who may be in doubt that we regard psychiatry as a specialty of medicine.”

    When Spitzer’s DSM-III was published in 1980, it contained 265 diagnoses (up from 182 in the previous edition), and it came into nearly universal use, not only by psychiatrists, but by insurance companies, hospitals, courts, prisons, schools, researchers, government agencies, and the rest of the medical profession. Its main goal was to bring consistency (usually referred to as “reliability”) to psychiatric diagnosis, that is, to ensure that psychiatrists who saw the same patient would agree on the diagnosis. To do that, each diagnosis was defined by a list of symptoms, with numerical thresholds. For example, having at least five of nine particular symptoms got you a full-fledged diagnosis of a major depressive episode within the broad category of “mood disorders.” But there was another goal—to justify the use of psychoactive drugs. The president of the APA last year, Carol Bernstein, in effect acknowledged that. “It became necessary in the 1970s,” she wrote, “to facilitate diagnostic agreement among clinicians, scientists, and regulatory authorities given the need to match patients with newly emerging pharmacologic treatments.”3

    The DSM-III was almost certainly more “reliable” than the earlier versions, but reliability is not the same thing as validity. Reliability, as I have noted, is used to mean consistency; validity refers to correctness or soundness. If nearly all physicians agreed that freckles were a sign of cancer, the diagnosis would be “reliable,” but not valid. The problem with the DSM is that in all of its editions, it has simply reflected the opinions of its writers, and in the case of the DSM-III mainly of Spitzer himself, who has been justly called one of the most influential psychiatrists of the twentieth century.4 In his words, he “picked everybody that [he] was comfortable with” to serve with him on the fifteen-member task force, and there were complaints that he called too few meetings and generally ran the process in a haphazard but high-handed manner. Spitzer said in a 1989 interview, “I could just get my way by sweet talking and whatnot.” In a 1984 article entitled “The Disadvantages of DSM-III Outweigh Its Advantages,” George Vaillant, a professor of psychiatry at Harvard Medical School, wrote that the DSM-III represented “a bold series of choices based on guess, taste, prejudice, and hope,” which seems to be a fair description.

    Not only did the DSM become the bible of psychiatry, but like the real Bible, it depended a lot on something akin to revelation. There are no citations of scientific studies to support its decisions. That is an astonishing omission, because in all medical publications, whether journal articles or textbooks, statements of fact are supposed to be supported by citations of published scientific studies. (There are four separate “sourcebooks” for the current edition of the DSM that present the rationale for some decisions, along with references, but that is not the same thing as specific references.) It may be of much interest for a group of experts to get together and offer their opinions, but unless these opinions can be buttressed by evidence, they do not warrant the extraordinary deference shown to the DSM. The DSM-III was supplanted by the DSM-III-R in 1987, the DSM-IV in 1994, and the current version, the DSM-IV-TR (text revised) in 2000, which contains 365 diagnoses. “With each subsequent edition,” writes Daniel Carlat in his absorbing book, “the number of diagnostic categories multiplied, and the books became larger and more expensive. Each became a best seller for the APA, and DSM is now one of the major sources of income for the organization.” The DSM-IV sold over a million copies.

    As psychiatry became a drug-intensive specialty, the pharmaceutical industry was quick to see the advantages of forming an alliance with the psychiatric profession. Drug companies began to lavish attention and largesse on psychiatrists, both individually and collectively, directly and indirectly. They showered gifts and free samples on practicing psychiatrists, hired them as consultants and speakers, bought them meals, helped pay for them to attend conferences, and supplied them with “educational” materials. When Minnesota and Vermont implemented “sunshine laws” that require drug companies to report all payments to doctors, psychiatrists were found to receive more money than physicians in any other specialty. The pharmaceutical industry also subsidizes meetings of the APA and other psychiatric conferences. About a fifth of APA funding now comes from drug companies.

    Drug companies are particularly eager to win over faculty psychiatrists at prestigious academic medical centers. Called “key opinion leaders” (KOLs) by the industry, these are the people who through their writing and teaching influence how mental illness will be diagnosed and treated. They also publish much of the clinical research on drugs and, most importantly, largely determine the content of the DSM. In a sense, they are the best sales force the industry could have, and are worth every cent spent on them. Of the 170 contributors to the current version of the DSM (the DSM-IV-TR), almost all of whom would be described as KOLs, ninety-five had financial ties to drug companies, including all of the contributors to the sections on mood disorders and schizophrenia.5

    The drug industry, of course, supports other specialists and professional societies, too, but Carlat asks, “Why do psychiatrists consistently lead the pack of specialties when it comes to taking money from drug companies?” His answer: “Our diagnoses are subjective and expandable, and we have few rational reasons for choosing one treatment over another.” Unlike the conditions treated in most other branches of medicine, there are no objective signs or tests for mental illness—no lab data or MRI findings—and the boundaries between normal and abnormal are often unclear. That makes it possible to expand diagnostic boundaries or even create new diagnoses, in ways that would be impossible, say, in a field like cardiology. And drug companies have every interest in inducing psychiatrists to do just that.

    In addition to the money spent on the psychiatric profession directly, drug companies heavily support many related patient advocacy groups and educational organizations. Whitaker writes that in the first quarter of 2009 alone,

    Eli Lilly gave $551,000 to NAMI [National Alliance on Mental Illness] and its local chapters, $465,000 to the National Mental Health Association, $130,000 to CHADD (an ADHD [attention deficit/hyperactivity disorder] patient-advocacy group), and $69,250 to the American Foundation for Suicide Prevention.

    And that’s just one company in three months; one can imagine what the yearly total would be from all companies that make psychoactive drugs. These groups ostensibly exist to raise public awareness of psychiatric disorders, but they also have the effect of promoting the use of psychoactive drugs and influencing insurers to cover them. Whitaker summarizes the growth of industry influence after the publication of the DSM-III as follows:

    In short, a powerful quartet of voices came together during the 1980’s eager to inform the public that mental disorders were brain diseases. Pharmaceutical companies provided the financial muscle. The APA and psychiatrists at top medical schools conferred intellectual legitimacy upon the enterprise. The NIMH [National Institute of Mental Health] put the government’s stamp of approval on the story. NAMI provided a moral authority.

    Like most other psychiatrists, Carlat treats his patients only with drugs, not talk therapy, and he is candid about the advantages of doing so. If he sees three patients an hour for psychopharmacology, he calculates, he earns about $180 per hour from insurers. In contrast, he would be able to see only one patient an hour for talk therapy, for which insurers would pay him less than $100. Carlat does not believe that psychopharmacology is particularly complicated, let alone precise, although the public is led to believe that it is:

    Patients often view psychiatrists as wizards of neurotransmitters, who can choose just the right medication for whatever chemical imbalance is at play. This exaggerated conception of our capabilities has been encouraged by drug companies, by psychiatrists ourselves, and by our patients’ understandable hopes for cures.

    His work consists of asking patients a series of questions about their symptoms to see whether they match up with any of the disorders in the DSM. This matching exercise, he writes, provides “the illusion that we understand our patients when all we are doing is assigning them labels.” Often patients meet criteria for more than one diagnosis, because there is overlap in symptoms. For example, difficulty concentrating is a criterion for more than one disorder. One of Carlat’s patients ended up with seven separate diagnoses. “We target discrete symptoms with treatments, and other drugs are piled on top to treat side effects.” A typical patient, he says, might be taking Celexa for depression, Ativan for anxiety, Ambien for insomnia, Provigil for fatigue (a side effect of Celexa), and Viagra for impotence (another side effect of Celexa).

    As for the medications themselves, Carlat writes that “there are only a handful of umbrella categories of psychotropic drugs,” within which the drugs are not very different from one another. He doesn’t believe there is much basis for choosing among them. “To a remarkable degree, our choice of medications is subjective, even random. Perhaps your psychiatrist is in a Lexapro mood this morning, because he was just visited by an attractive Lexapro drug rep.” And he sums up:

    Such is modern psychopharmacology. Guided purely by symptoms, we try different drugs, with no real conception of what we are trying to fix, or of how the drugs are working. I am perpetually astonished that we are so effective for so many patients.

    While Carlat believes that psychoactive drugs are sometimes effective, his evidence is anecdotal. What he objects to is their overuse and what he calls the “frenzy of psychiatric diagnoses.” As he puts it, “if you ask any psychiatrist in clinical practice, including me, whether antidepressants work for their patients, you will hear an unambiguous ‘yes.’ We see people getting better all the time.” But then he goes on to speculate, like Irving Kirsch in The Emperor’s New Drugs, that what they are really responding to could be an activated placebo effect. If psychoactive drugs are not all they’re cracked up to be—and the evidence is that they’re not—what about the diagnoses themselves? As they multiply with each edition of the DSM, what are we to make of them?

    In 1999, the APA began work on its fifth revision of the DSM, which is scheduled to be published in 2013. The twenty-seven-member task force is headed by David Kupfer, a professor of psychiatry at the University of Pittsburgh, assisted by Darrel Regier of the APA’s American Psychiatric Institute for Research and Education. As with the earlier editions, the task force is advised by multiple work groups, which now total some 140 members, corresponding to the major diagnostic categories. Ongoing deliberations and proposals have been extensively reported on the APA website (www.DSM5.org) and in the media, and it appears that the already very large constellation of mental disorders will grow still larger.

    In particular, diagnostic boundaries will be broadened to include precursors of disorders, such as “psychosis risk syndrome” and “mild cognitive impairment” (possible early Alzheimer’s disease). The term “spectrum” is used to widen categories, for example, “obsessive-compulsive disorder spectrum,” “schizophrenia spectrum disorder,” and “autism spectrum disorder.” And there are proposals for entirely new entries, such as “hypersexual disorder,” “restless legs syndrome,” and “binge eating.”

    Even Allen Frances, chairman of the DSM-IV task force, is highly critical of the expansion of diagnoses in the DSM-V. In the June 26, 2009, issue of Psychiatric Times, he wrote that the DSM-V will be a “bonanza for the pharmaceutical industry but at a huge cost to the new false positive patients caught in the excessively wide DSM-V net.” As if to underscore that judgment, Kupfer and Regier wrote in a recent article in the Journal of the American Medical Association (JAMA), entitled “Why All of Medicine Should Care About DSM-5,” that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”6 It looks as though it will be harder and harder to be normal.

    At the end of the article by Kupfer and Regier is a small-print “financial disclosure” that reads in part:

    Prior to being appointed as chair, DSM-5 Task Force, Dr. Kupfer reports having served on advisory boards for Eli Lilly & Co, Forest Pharmaceuticals Inc, Solvay/Wyeth Pharmaceuticals, and Johnson & Johnson; and consulting for Servier and Lundbeck.

    Regier oversees all industry-sponsored research grants for the APA. The DSM-V (used interchangeably with DSM-5) is the first edition to establish rules to limit financial conflicts of interest in members of the task force and work groups. According to these rules, once members were appointed, which occurred in 2006–2008, they could not receive more than $10,000 per year in aggregate from drug companies or own more than $50,000 in company stock. The APA website shows their company ties for the three years before their appointments, which is the period covered by Kupfer’s disclosure in the JAMA article; there, 56 percent of members of the work groups disclosed significant industry interests.

    ‘Give me the first thing that comes to hand’; lithograph by Grandville, 1832

    The pharmaceutical industry influences psychiatrists to prescribe psychoactive drugs even for categories of patients in whom the drugs have not been found safe and effective. What should be of greatest concern for Americans is the astonishing rise in the diagnosis and treatment of mental illness in children, sometimes as young as two years old. These children are often treated with drugs that were never approved by the FDA for use in this age group and have serious side effects. The apparent prevalence of “juvenile bipolar disorder” jumped forty-fold between 1993 and 2004, and that of “autism” increased from one in five hundred children to one in ninety over the same decade. Ten percent of ten-year-old boys now take daily stimulants for ADHD—“attention deficit/hyperactivity disorder”—and 500,000 children take antipsychotic drugs.

    There seem to be fashions in childhood psychiatric diagnoses, with one disorder giving way to the next. At first, ADHD, manifested by hyperactivity, inattentiveness, and impulsivity usually in school-age children, was the fastest-growing diagnosis. But in the mid-1990s, two highly influential psychiatrists at the Massachusetts General Hospital proposed that many children with ADHD really had bipolar disorder that could sometimes be diagnosed as early as infancy. They proposed that the manic episodes characteristic of bipolar disorder in adults might be manifested in children as irritability. That gave rise to a flood of diagnoses of juvenile bipolar disorder. Eventually this created something of a backlash, and the DSM-V now proposes partly to replace the diagnosis with a brand-new one, called “temper dysregulation disorder with dysphoria,” or TDD, which Allen Frances calls “a new monster.”7

    One would be hard pressed to find a two-year-old who is not sometimes irritable, a boy in fifth grade who is not sometimes inattentive, or a girl in middle school who is not anxious. (Imagine what taking a drug that causes obesity would do to such a girl.) Whether such children are labeled as having a mental disorder and treated with prescription drugs depends a lot on who they are and the pressures their parents face.8 As low-income families experience growing economic hardship, many are finding that applying for Supplemental Security Income (SSI) payments on the basis of mental disability is the only way to survive. It is more generous than welfare, and it virtually ensures that the family will also qualify for Medicaid. According to MIT economics professor David Autor, “This has become the new welfare.” Hospitals and state welfare agencies also have incentives to encourage uninsured families to apply for SSI payments, since hospitals will get paid and states will save money by shifting welfare costs to the federal government.

    Growing numbers of for-profit firms specialize in helping poor families apply for SSI benefits. But to qualify nearly always requires that applicants, including children, be taking psychoactive drugs. According to a New York Times story, a Rutgers University study found that children from low-income families are four times as likely as privately insured children to receive antipsychotic medicines.

    In December 2006 a four-year-old child named Rebecca Riley died in a small town near Boston from a combination of Clonidine and Depakote, which she had been prescribed, along with Seroquel, to treat “ADHD” and “bipolar disorder”—diagnoses she received when she was two years old. Clonidine was approved by the FDA for treating high blood pressure. Depakote was approved for treating epilepsy and acute mania in bipolar disorder. Seroquel was approved for treating schizophrenia and acute mania. None of the three was approved to treat ADHD or for long-term use in bipolar disorder, and none was approved for children Rebecca’s age. Rebecca’s two older siblings had been given the same diagnoses and were each taking three psychoactive drugs. The parents had obtained SSI benefits for the siblings and for themselves, and were applying for benefits for Rebecca when she died. The family’s total income from SSI was about $30,000 per year.9

    Whether these drugs should ever have been prescribed for Rebecca in the first place is the crucial question. The FDA approves drugs only for specified uses, and it is illegal for companies to market them for any other purpose—that is, “off-label.” Nevertheless, physicians are permitted to prescribe drugs for any reason they choose, and one of the most lucrative things drug companies can do is persuade physicians to prescribe drugs off-label, despite the law against it. In just the past four years, five firms have admitted to federal charges of illegally marketing psychoactive drugs. AstraZeneca marketed Seroquel off-label for children and the elderly (another vulnerable population, often administered antipsychotics in nursing homes); Pfizer faced similar charges for Geodon (an antipsychotic); Eli Lilly for Zyprexa (an antipsychotic); Bristol-Myers Squibb for Abilify (another antipsychotic); and Forest Labs for Celexa (an antidepressant).

    Despite having to pay hundreds of millions of dollars to settle the charges, the companies have probably come out well ahead. The original purpose of permitting doctors to prescribe drugs off-label was to enable them to treat patients on the basis of early scientific reports, without having to wait for FDA approval. But that sensible rationale has become a marketing tool. Because of the subjective nature of psychiatric diagnosis, the ease with which diagnostic boundaries can be expanded, the seriousness of the side effects of psychoactive drugs, and the pervasive influence of their manufacturers, I believe doctors should be prohibited from prescribing psychoactive drugs off-label, just as companies are prohibited from marketing them off-label.

    The books by Irving Kirsch, Robert Whitaker, and Daniel Carlat are powerful indictments of the way psychiatry is now practiced. They document the “frenzy” of diagnosis, the overuse of drugs with sometimes devastating side effects, and widespread conflicts of interest. Critics of these books might argue, as Nancy Andreasen implied in her paper on the loss of brain tissue with long-term antipsychotic treatment, that the side effects are the price that must be paid to relieve the suffering caused by mental illness. If we knew that the benefits of psychoactive drugs outweighed their harms, that would be a strong argument, since there is no doubt that many people suffer grievously from mental illness. But as Kirsch, Whitaker, and Carlat argue convincingly, that expectation may be wrong.

    At the very least, we need to stop thinking of psychoactive drugs as the best, and often the only, treatment for mental illness or emotional distress. Both psychotherapy and exercise have been shown to be as effective as drugs for depression, and their effects are longer-lasting, but unfortunately, there is no industry to push these alternatives and Americans have come to believe that pills must be more potent. More research is needed to study alternatives to psychoactive drugs, and the results should be included in medical education.

    In particular, we need to rethink the care of troubled children. Here the problem is often troubled families in troubled circumstances. Treatment directed at these environmental conditions—such as one-on-one tutoring to help parents cope or after-school centers for the children—should be studied and compared with drug treatment. In the long run, such alternatives would probably be less expensive. Our reliance on psychoactive drugs, seemingly for all of life’s discontents, tends to close off other options. In view of the risks and questionable long-term effectiveness of drugs, we need to do better. Above all, we should remember the time-honored medical dictum: first, do no harm (primum non nocere).

    This is the second part of a two-part article.

    1. See Marcia Angell, “The Epidemic of Mental Illness: Why?,” The New York Review, June 23, 2011. 
    2. Eisenberg wrote about this transition in “Mindlessness and Brainlessness,” British Journal of Psychiatry, No. 148 (1986). His last paper, completed by his stepson, was published after his death in 2009. See Eisenberg and L.B. Guttmacher, “Were We All Asleep at the Switch? A Personal Reminiscence of Psychiatry from 1940 to 2010,” Acta Psychiatrica Scandinavica, No. 122 (2010). 
    3. Carol A. Bernstein, “Meta-Structure in DSM-5 Process,” Psychiatric News, March 4, 2011, p. 7. 
    4. The history of the DSM is recounted in Christopher Lane’s informative book Shyness: How Normal Behavior Became a Sickness (Yale University Press, 2007). Lane was given access to the American Psychiatric Association’s archive of unpublished letters, transcripts, and memoranda, and he also interviewed Robert Spitzer. His book was reviewed by Frederick Crews in The New York Review, December 6, 2007, and by me, January 15, 2009. 
    5. See L. Cosgrove et al., “Financial Ties Between DSM-IV Panel Members and the Pharmaceutical Industry,” Psychotherapy and Psychosomatics, Vol. 75 (2006). 
    6. David J. Kupfer and Darrel A. Regier, “Why All of Medicine Should Care About DSM-5,” JAMA, May 19, 2010. 
    7. Greg Miller, “Anything But Child’s Play,” Science, March 5, 2010. 
    8. Duff Wilson, “Child’s Ordeal Reveals Risks of Psychiatric Drugs in Young,” The New York Times, September 2, 2010. 
    9. Patricia Wen, “A Legacy of Unintended Side-Effects: Call It the Other Welfare,” The Boston Globe, December 12, 2010. 

     

  • Wounded healers

    LIVES RESTORED

    Expert on Mental Illness Reveals Her Own Fight

    The Power of Rescuing Others: Marsha Linehan, a therapist and researcher at the University of Washington who suffered from borderline personality disorder, recalls the religious experience that transformed her as a young woman.


    Damon Winter/The New York Times

    “So many people have begged me to come forward, and I just thought — well, I have to do this. I owe it to them. I cannot die a coward,” said Marsha M. Linehan, a psychologist at the University of Washington.

    The patient wanted to know, and her therapist — Marsha M. Linehan of the University of Washington, creator of a treatment used worldwide for severely suicidal people — had a ready answer. It was the one she always used to cut the question short, whether a patient asked it hopefully, accusingly or knowingly, having glimpsed the macramé of faded burns, cuts and welts on Dr. Linehan’s arms:

    “You mean, have I suffered?”

    “No, Marsha,” the patient replied, in an encounter last spring. “I mean one of us. Like us. Because if you were, it would give all of us so much hope.”

    “That did it,” said Dr. Linehan, 68, who told her story in public for the first time last week before an audience of friends, family and doctors at the Institute of Living, the Hartford clinic where she was first treated for extreme social withdrawal at age 17. “So many people have begged me to come forward, and I just thought — well, I have to do this. I owe it to them. I cannot die a coward.”

    No one knows how many people with severe mental illness live what appear to be normal, successful lives, because such people are not in the habit of announcing themselves. They are too busy juggling responsibilities, paying the bills, studying, raising families — all while weathering gusts of dark emotions or delusions that would quickly overwhelm almost anyone else.

    Now, an increasing number of them are risking exposure of their secret, saying that the time is right. The nation’s mental health system is a shambles, they say, criminalizing many patients and warehousing some of the most severe in nursing and group homes where they receive care from workers with minimal qualifications.

    Moreover, the enduring stigma of mental illness teaches people with such a diagnosis to think of themselves as victims, snuffing out the one thing that can motivate them to find treatment: hope.

    “There’s a tremendous need to implode the myths of mental illness, to put a face on it, to show people that a diagnosis does not have to lead to a painful and oblique life,” said Elyn R. Saks, a professor at the University of Southern California School of Law who chronicles her own struggles with schizophrenia in “The Center Cannot Hold: My Journey Through Madness.” “We who struggle with these disorders can lead full, happy, productive lives, if we have the right resources.”

    These include medication (usually), therapy (often), a measure of good luck (always) — and, most of all, the inner strength to manage one’s demons, if not banish them. That strength can come from any number of places, these former patients say: love, forgiveness, faith in God, a lifelong friendship.

    But Dr. Linehan’s case shows there is no recipe. She was driven by a mission to rescue people who are chronically suicidal, often as a result of borderline personality disorder, an enigmatic condition characterized in part by self-destructive urges.

    “I honestly didn’t realize at the time that I was dealing with myself,” she said. “But I suppose it’s true that I developed a therapy that provides the things I needed for so many years and never got.”

    ‘I Was in Hell’

    She learned the central tragedy of severe mental illness the hard way, banging her head against the wall of a locked room.

    Marsha Linehan arrived at the Institute of Living on March 9, 1961, at age 17, and quickly became the sole occupant of the seclusion room on the unit known as Thompson Two, for the most severely ill patients. The staff saw no alternative: The girl attacked herself habitually, burning her wrists with cigarettes, slashing her arms, her legs, her midsection, using any sharp object she could get her hands on.

    The seclusion room, a small cell with a bed, a chair and a tiny, barred window, had no such weapon. Yet her urge to die only deepened. So she did the only thing that made any sense to her at the time: banged her head against the wall and, later, the floor. Hard.

    “My whole experience of these episodes was that someone else was doing it; it was like ‘I know this is coming, I’m out of control, somebody help me; where are you, God?’ ” she said. “I felt totally empty, like the Tin Man; I had no way to communicate what was going on, no way to understand it.”

    Her childhood, in Tulsa, Okla., provided few clues. An excellent student from early on, a natural on the piano, she was the third of six children of an oilman and his wife, an outgoing woman who juggled child care with the Junior League and Tulsa social events.

    People who knew the Linehans at that time remember that their precocious third child was often in trouble at home, and Dr. Linehan recalls feeling deeply inadequate compared with her attractive and accomplished siblings. But whatever currents of distress ran under the surface, no one took much notice until she was bedridden with headaches in her senior year of high school.

    Her younger sister, Aline Haynes, said: “This was Tulsa in the 1960s, and I don’t think my parents had any idea what to do with Marsha. No one really knew what mental illness was.”

    Soon, a local psychiatrist recommended a stay at the Institute of Living, to get to the bottom of the problem. There, doctors gave her a diagnosis of schizophrenia; dosed her with Thorazine, Librium and other powerful drugs, as well as hours of Freudian analysis; and strapped her down for electroshock treatments, 14 shocks the first time through and 16 the second, according to her medical records. Nothing changed, and soon enough the patient was back in seclusion on the locked ward.

    “Everyone was terrified of ending up in there,” said Sebern Fisher, a fellow patient who became a close friend. But whatever her surroundings, Ms. Fisher added, “Marsha was capable of caring a great deal about another person; her passion was as deep as her loneliness.”

    Damon Winter/The New York Times

    The door to the room where as a teenager Dr. Linehan was put in seclusion. The room has since been turned into a small office.

    Damon Winter/The New York Times

    “My whole experience of these episodes was that someone else was doing it; it was like ‘I know this is coming, I’m out of control, somebody help me; where are you, God?’”   -Marsha M. Linehan

    A discharge summary, dated May 31, 1963, noted that “during 26 months of hospitalization, Miss Linehan was, for a considerable part of this time, one of the most disturbed patients in the hospital.”

    A verse the troubled girl wrote at the time reads:

    They put me in a four-walled room

    But left me really out

    My soul was tossed somewhere askew

    My limbs were tossed here about

    Bang her head where she would, the tragedy remained: no one knew what was happening to her, and as a result medical care only made it worse. Any real treatment would have to be based not on some theory, she later concluded, but on facts: which precise emotion led to which thought led to the latest gruesome act. It would have to break that chain — and teach a new behavior.

    “I was in hell,” she said. “And I made a vow: when I get out, I’m going to come back and get others out of here.”

    Radical Acceptance

    She sensed the power of another principle while praying in a small chapel in Chicago.

    It was 1967, several years after she left the institute as a desperate 20-year-old whom doctors gave little chance of surviving outside the hospital. Survive she did, barely: there was at least one suicide attempt in Tulsa, when she first arrived home; and another episode after she moved to a Y.M.C.A. in Chicago to start over.

    She was hospitalized again and emerged confused, lonely and more committed than ever to her Catholic faith. She moved into another Y, found a job as a clerk in an insurance company, started taking night classes at Loyola University — and prayed, often, at a chapel in the Cenacle Retreat Center.

    “One night I was kneeling in there, looking up at the cross, and the whole place became gold — and suddenly I felt something coming toward me,” she said. “It was this shimmering experience, and I just ran back to my room and said, ‘I love myself.’ It was the first time I remember talking to myself in the first person. I felt transformed.”

    The high lasted about a year, before the feelings of devastation returned in the wake of a romance that ended. But something was different. She could now weather her emotional storms without cutting or harming herself.

    What had changed?

    It took years of study in psychology — she earned a Ph.D. at Loyola in 1971 — before she found an answer. On the surface, it seemed obvious: She had accepted herself as she was. She had tried to kill herself so many times because the gulf between the person she wanted to be and the person she was left her desperate, hopeless, deeply homesick for a life she would never know. That gulf was real, and unbridgeable.

    That basic idea — radical acceptance, she now calls it — became increasingly important as she began working with patients, first at a suicide clinic in Buffalo and later as a researcher. Yes, real change was possible. The emerging discipline of behaviorism taught that people could learn new behaviors — and that acting differently can in time alter underlying emotions from the top down.

    But deeply suicidal people have tried to change a million times and failed. The only way to get through to them was to acknowledge that their behavior made sense: Thoughts of death were sweet release given what they were suffering.

    “She was very creative with people. I saw that right away,” said Gerald C. Davison, who in 1972 admitted Dr. Linehan into a postdoctoral program in behavioral therapy at Stony Brook University. (He is now a psychologist at the University of Southern California.) “She could get people off center, challenge them with things they didn’t want to hear without making them feel put down.”

    No therapist could promise a quick transformation or even sudden “insight,” much less a shimmering religious vision. But now Dr. Linehan was closing in on two seemingly opposed principles that could form the basis of a treatment: acceptance of life as it is, not as it is supposed to be; and the need to change, despite that reality and because of it. The only way to know for sure whether she had something more than a theory was to test it scientifically in the real world — and there was never any doubt where to start.

    Getting Through the Day

    “I decided to get supersuicidal people, the very worst cases, because I figured these are the most miserable people in the world — they think they’re evil, that they’re bad, bad, bad — and I understood that they weren’t,” she said. “I understood their suffering because I’d been there, in hell, with no idea how to get out.”

    In particular she chose to treat people with a diagnosis that she would have given her young self: borderline personality disorder, a poorly understood condition characterized by neediness, outbursts and self-destructive urges, often leading to cutting or burning. In therapy, borderline patients can be terrors — manipulative, hostile, sometimes ominously mute, and notorious for storming out, threatening suicide.

    Dr. Linehan found that the tension of acceptance could at least keep people in the room: patients accept who they are, that they feel the mental squalls of rage, emptiness and anxiety far more intensely than most people do. In turn, the therapist accepts that given all this, cutting, burning and suicide attempts make some sense.

    Finally, the therapist elicits a commitment from the patient to change his or her behavior, a verbal pledge in exchange for a chance to live: “Therapy does not work for people who are dead” is one way she puts it.

    Yet even as she climbed the academic ladder, moving from the Catholic University of America to the University of Washington in 1977, she understood from her own experience that acceptance and change were hardly enough. During those first years in Seattle she sometimes felt suicidal while driving to work; even today, she can feel rushes of panic, most recently while driving through tunnels. She relied on therapists herself, off and on over the years, for support and guidance (she does not remember taking medication after leaving the institute).

    Dr. Linehan’s own emerging approach to treatment — now called dialectical behavior therapy, or D.B.T. — would also have to include day-to-day skills. A commitment means very little, after all, if people do not have the tools to carry it out. She borrowed some of these from other behavioral therapies and added elements, like opposite action, in which patients act opposite to the way they feel when an emotion is inappropriate; and mindfulness meditation, a Zen technique in which people focus on their breath and observe their emotions come and go without acting on them. (Mindfulness is now a staple of many kinds of psychotherapy.)

    In studies in the 1980s and ’90s, researchers at the University of Washington and elsewhere tracked the progress of hundreds of borderline patients at high risk of suicide who attended weekly dialectical therapy sessions. Compared with similar patients who got other experts’ treatments, those who learned Dr. Linehan’s approach made far fewer suicide attempts, landed in the hospital less often and were much more likely to stay in treatment. D.B.T. is now widely used for a variety of stubborn clients, including juvenile offenders, people with eating disorders and those with drug addictions.

    “I think the reason D.B.T. has made such a splash is that it addresses something that couldn’t be treated before; people were just at a loss when it came to borderline,” said Lisa Onken, chief of the behavioral and integrative treatment branch of the National Institutes of Health. “But I think the reason it has resonated so much with community therapists has a lot to do with Marsha Linehan’s charisma, her ability to connect with clinical people as well as a scientific audience.”

    Most remarkably, perhaps, Dr. Linehan has reached a place where she can stand up and tell her story, come what will. “I’m a very happy person now,” she said in an interview at her house near campus, where she lives with her adopted daughter, Geraldine, and Geraldine’s husband, Nate. “I still have ups and downs, of course, but I think no more than anyone else.”

    After her coming-out speech last week, she visited the seclusion room, which has since been converted to a small office. “Well, look at that, they changed the windows,” she said, holding her palms up. “There’s so much more light.”

     

     

     

  • National intelligence?

    The Intelligence of Nations

    Modern Japan has very few of the world’s natural resources—oil, forests, precious metals. Yet this archipelago has given rise to the world’s third largest economy. Nigeria, by contrast, is blessed with ample natural resources, including lots of land, yet it is one of the planet’s poorer nations. Why is that? Why is there not a simple link between natural bounty and prosperity?

    The short answer is national intelligence. A nation’s cognitive resources amplify its natural resources. That’s the view of University of Washington psychological scientist Earl Hunt, who argues that, given equal national intelligence, Nigeria would be richer than Japan. But where does national intelligence come from, and why does Nigeria lack it?

    Hunt sketched out an answer to that question in his James McKeen Cattell Award address, delivered this week at the 23rd annual convention of the Association for Psychological Science. According to his model, intelligence is not what IQ tests measure, but rather the ability to solve social problems using “cultural artifacts”—computers, books, the scientific method and rule of law, for example. All countries start off with the same genetic potential for intelligence—there is no evidence otherwise—but this raw potential is developed much more effectively in some nations than in others, because of dramatic differences in physical and social environments.

    A detrimental physical environment—malnutrition, disease and environmental pollutants—can directly affect the developing nervous system, and thus working memory and attention, and it also creates a social burden that interferes with education and learning. The social environment also shapes individual and national intelligence. This includes the sheer amount of schooling, because practicing thinking makes people better thinkers. It also includes the existence of a “cognitive elite”—people with enough advanced education to familiarize them with the cognitive artifacts needed for problem solving. And it includes family, which plays the role of motivator, encouraging children to learn things like trigonometry even when they can’t see the value. Small families are better; large families are associated with drops in both cognitive and economic well-being.

    National intelligence also requires a national “willingness to listen,” Hunt argues. No nation can come up with all of its own cognitive tools, but nations can borrow if they are open to new cognitive advances elsewhere. When Japan’s leaders decided to isolate the country from the world in the 17th century, the intelligence of its people declined. It’s not that they were unaware of modernization; they rejected it. When the nation reopened its cultural borders in the 19th century, national intelligence bloomed.

    The simple fact is, it’s good to be intelligent—for nations no less than individuals. Various studies have linked a country’s cognitive resources positively not only with economic prosperity, but also with rule of law, the quality of bureaucracy, and successful homicide prosecutions. The same studies have linked low national intelligence with HIV infection, fertility rate, homicide rate, and income inequality. What’s more, national intelligence and prosperity appear to interact and reinforce one another: In one study, national intelligence in the 1970s influenced wealth in the year 2000, and wealth in the 70s influenced intelligence in the year 2000. As Hunt concludes: “The smart got richer and the rich got smarter.”