Tilting at windmills

The Return of the King

Psychological work with men

Jamie Elkon

The philosopher Schopenhauer claimed that mankind was doomed to vacillate eternally between distress and boredom. Indeed, many men believe themselves cursed to living half-lives, wandering the periphery of their own awareness, stumbling blindly through destructive old behavioural patterns. Our attempts to nourish our stale, brittle selves with repetitive, self-defeating strategies savagely limit our growth and block a deeper connection with our core selves.

As a clinician who has the honour of working with men, I have borne witness to many pale deaths: depression, anxiety, and addiction, whether to power, sex or even servitude. Men continue to remain entangled in their Shadows. Many of us have long struggled with how to understand and make use of the emotional wounds we all bear.
As men we are particularly prone to the construction of fragile (though often quite stupendous) narcissistic defenses, which wait in ambush for unsuspecting travellers upon our life’s journey. Whether conscious or not, we maneuver others into the traps we have laid, and when they snap shut, we puff up with righteous indignation at the injustice of it all, thereby inexorably repeating and reinforcing our alienation from ourselves and, particularly, from those we claim to love.

What, you may ask, is the alternative to this aimless wandering? Is it to be a good citizen and provider? To pay your taxes, to be reassured and soothed by your congruence with those around you? To slowly become lulled by the security of the known, while your heart aches and your dreams of adventure and passion slowly fade, replaced by your numbed daily rituals of work, docile husbandry and that after-dinner drink? Maybe the automatic life was the freedom you sought, the release from the discordance of being, from colliding emotions, from the burden of growth.
Men’s work requires courage before the abyss of possibility. You will have to deal with your core issues (for the rest of your life). By not examining your core issues, you will be doomed to repeat them and you will live at the mercy of your defenses, for it is your defenses, not your wound, that arrest your growth.

Men’s work will challenge you to face your shadow: that which you do not want to be, that which you find frightening and threatening to your self-image, that which follows you wherever you go. Through an examination of your shadow defenses, and by keeping that shadow in front of you where you can see it, you can stop defending so exhaustively, so unconsciously, and begin to develop the capacity to live with integrity and in alignment with your core self.

Imagine, if you will, that just before you were incarnated on this planet, your God, your higher self, whatever you wish to call it, leant forward and whispered the varied meanings of your life into your heart. The very centre of your being absorbed these, was imbued with their power, and since then…you have been doing everything else. But now and again, whether it be in traffic, or while you are making love to your wife, or looking at the face of your sleeping child, they flutter deeply within you, calling you. If only you would just be still enough to hear them.


Steps to finding someone to love…

Online Dating: 10 Psychological Insights

 

Psychological research reveals who uses internet dating and why, which strategies work, and uncovers the truth about lying online.

Somewhere between one-third and three-quarters of single people with internet access have used it to try and meet someone new. But, over the years, we’ve heard conflicting stories about how successful it is.

Believe the internet dating companies and it’s all sweetness and light, with wedding bells ringing in the distance; believe the media scare stories and it’s all lying, cheating, perverted social misfits. The truth is somewhere in between, but where?

Fortunately, now there’s enough research to suggest what’s really going on. So, here are my 10 favourite psychological insights on internet dating.

1. Internet daters are not losers

Contrary to the stereotype, there’s little evidence that internet dating is the last resort of social misfits or weirdos.

In fact, quite the reverse. Internet daters are more likely to be sociable, have high self-esteem and be low in dating anxiety (Kim et al., 2009; Valkenburg, 2007). These studies found no evidence that people use online dating because they can’t hack it face-to-face. It’s just one more way to meet new people.

People’s motivations to start online dating are many and various, typically involving a triggering event like a break-up, but overall Barraket and Henry-Waring (2008) have found that people’s motivations are less individual and more social. People aren’t using online dating because they are shy but because they have moved to a new city, are working long hours or don’t have time to meet anyone new.

2. Online daters do lie (but only a little)

Although 94% deny their internet dating profiles contain any fibs (Gibbs et al., 2006), psychologists are a suspicious lot. Toma et al. (2008) measured the heights and weights of 80 internet daters, as well as checking their driving licences for their real age.

When this data was compared with their profiles, it showed that nine out of ten had lied on at least one of the attributes measured, but the lies were only small ones. The most frequent offender was weight, with daters either adding or shaving off an average of 5%. Daters were more truthful about their age (1.5% deviation) and height (1.1% deviation). As expected, women tended to shave off the pounds, while men gave themselves a boost in height.
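To get a feel for how small those deviations are in absolute terms, here is a minimal Python sketch. Only the average percentage deviations come from the study; the dater's "true" measurements below are made-up assumptions:

```python
# Average fib sizes reported by Toma et al. (2008), applied to a
# hypothetical dater. The "actual" values are invented for illustration.
actual = {"weight_kg": 80.0, "age_years": 35.0, "height_cm": 180.0}
average_deviation = {"weight_kg": 0.05, "age_years": 0.015, "height_cm": 0.011}

for attribute, value in actual.items():
    fib = value * average_deviation[attribute]
    print(f"{attribute}: true value {value:g}, typical fib about {fib:.1f}")
```

For this hypothetical dater the typical fib works out to roughly four kilograms, half a year and two centimetres: differences that are hard to spot across a restaurant table.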

These lies make little difference in the real world because the vast majority of fibbing would have been difficult to detect in person. Most people want to meet up eventually so they know big lies are going to be caught.

3. Photo fallacies

The saying ‘the camera never lies’ is bunk. Even without Photoshop to iron out the wrinkles, camera angles and lighting can easily change perceived attractiveness.

People instinctively understand this when choosing their profile photo, so Toma and Hancock (2010) took photographs of internet daters, then had judges compare these to the real profile photos.

Although less physically attractive people were the most likely to choose a self-enhancing photo, overall the differences were tiny. The lab photos were only a little less attractive than those chosen for online dating profiles (about 5% for women and 4% for men). Once again, internet daters weren’t lying much…

4. Your best look

Clues to which types of profile photos work come from one online dating site which has analysed 7,000 photographs in its database (oktrends, 2010):

  • Women had higher response-rates when they made eye-contact with the camera and looked flirty. Conversely, the least successful pictures for women were looking away with a flirty face.
  • Men’s best look was away from the camera, not smiling. But guys should avoid a flirty face, which was associated with a drastic reduction in messages.

They then looked at which photos were associated with the longest online conversations. The longest exchanges were linked to photos showing the dater:

  • Doing something interesting
  • With an animal
  • In an interesting location (travel photo)

The photos associated with shorter than average conversations were (in increasing order of conversational deterrent):

  • In bed (associated with slightly shorter conversations)
  • Taken outdoors
  • Having fun with friends
  • And the most likely to deter interactions: drinking! (associated with the shortest conversations)

(Remember, these are all associations so we can’t be sure about causality.)

5. Opposites (still) don’t attract

Even amongst a diverse population of online daters, people still prefer someone who is similar to themselves.

When Fiore and Donath (2005) examined data from 65,000 online daters, they found that people were choosing based on similarity to themselves.

In this respect online dating is no different from offline dating. On average people are looking for someone about the same as themselves. Indeed there are now many dating sites aimed at narrower demographics such as sports fans, Jewish people or those with particular medical conditions.

6. Internet dating encourages some diversity

To examine internet dating diversity, Dutton et al. (2009) surveyed 2,670 married couples in the UK, Australia and Spain. In this sample internet daters were more likely to have a greater disparity in age and educational background compared with those who had met in more traditional ways.

Although opposites don’t tend to attract, by its nature internet dating does encourage diverse matches. The authors argue that it is changing the face of marriage by bringing together types of people who previously never would have met.

7. Keep the first message short

Getting a response online can be a hit-and-miss affair. An online dating site has gauged the response rate by analysing more than 500,000 initial contacts sent by their members (oktrends, 2009). Recipients answered only 30% of men’s messages to women and 45% of women’s messages to men. The percentage that led to conversations is even lower (around 20% and 30% respectively).

The one-third response rate, which is backed up by academic research (Rosen et al., 2008), is partly because many internet dating accounts are dead.

oktrends also found that longer messages only yield a small improvement in response rate for men and nothing for women. So, don’t waste your time writing an essay. Say hi and let them check out your profile.
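As a rough illustration of what those rates imply, here is a back-of-envelope sketch in Python. The reply and conversation rates are the approximate oktrends figures quoted above; the number of messages sent is an arbitrary assumption:

```python
# Expected replies and conversations from a batch of first messages,
# using the approximate oktrends rates quoted in the text.
messages_sent = 50  # arbitrary assumption for illustration

rates = {
    "men messaging women": {"reply": 0.30, "conversation": 0.20},
    "women messaging men": {"reply": 0.45, "conversation": 0.30},
}

for direction, r in rates.items():
    replies = messages_sent * r["reply"]
    conversations = messages_sent * r["conversation"]
    print(f"{direction}: ~{replies:.0f} replies, ~{conversations:.0f} conversations")
```

In other words, a man sending 50 short openers might expect around 15 replies and 10 conversations, which is why brevity beats essays: volume of contacts matters more than message length.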

8. Emotionality is attractive

In a study of online dating, Rosen et al. (2008) found evidence that more intense emotionality, e.g. using words like ‘excited’ and ‘wonderful’, made a better impression on both men and women.

This study also looked at the impact of self-disclosure. While the results were more variable, overall people preferred relatively low levels of self-disclosure.

9. After screening, 51% meet face-to-face

For many, but not all, internet daters the aim is to meet someone new in the flesh. In a survey of 759 internet daters, Rosen et al. (2008) found that 51% of people had made a face-to-face date within one week to one month of receiving replies to their online overtures.

This first meeting is often treated by internet daters as the final part of the screening process (Whitty & Carr, 2006). Is this person really who they say they are? And, if so, is there any chemistry? It’s only after this stage is complete that people can get to know each other.

10. Relationshopping

Despite all the positive things the research has to say about internet dating, there’s no doubt that it can be unsatisfying and aversive. The 132 online daters surveyed by Frost et al. (2008) reported that they spent seven times as long screening other people’s profiles and sending emails as they did interacting face-to-face on real dates.

Part of the problem is that people are encouraged by online dating to think in consumerist terms (Heino et al., 2010). Users are ‘relationshopping’: looking at other people’s features, weighing them up, then choosing potential partners, as though from a catalogue; it’s human relationships reduced to check-boxes.

This is more of a criticism of the technology currently available than it is of the general idea of internet dating. Frost et al. (2008) argue that this will change as online dating services move towards more experiential methods, such as virtual dates.

How well does it work?

There’s only limited data about how well internet dating works and most of this research examined heterosexual daters. Still, Rosen et al. (2008) found that 29% of their sample had found serious relationships through internet dating. Dutton et al. (2009) found that about 6% of married couples had met online in the UK, 5% in Spain and 9% in Australia. Looking at just younger people the percentages were much higher:

  • In the US, 42% of couples between 26 and 35 first met online.
  • In the UK, 21% of married couples between 19 and 25 first met online.

If a long-term relationship is what you’re after, we can certainly say that it’s working for some people.

Many are no doubt put off internet dating by the scare stories, especially because these stick in the mind. Some will find the box-ticking, relationshopping aspects off-putting, or get caught out by the tensions between representing their actual and idealised selves online. Still others will find that low levels of response kill their enthusiasm.

The research, however, suggests that most internet daters are relatively honest and, for some at least, it can be successful.


Learning to connect…

Are You Just Shy or Do You Have a Social Phobia?

 


According to research, there’s a 50% chance that you consider yourself shy. But is this ‘just’ shyness or is it a mental disorder? Since 1980, the Diagnostic and Statistical Manual of Mental Disorders used by psychiatrists in diagnosis has included the categories of ‘social phobia’ and ‘social anxiety disorder’. This suggests that what would previously have been your particular way of being has become a ‘disorder’ with a biological cause which needs some medication…

 

No one would dispute the fact that shyness is on a continuum, but in his new book, ‘Shyness: How Normal Behavior Became a Sickness’, Christopher Lane argues that the bar has been set way too low:

The problem, Lane argues, is that DSM-defined symptoms of impairment in 1980 included fear of eating alone in restaurants, concern about hand trembling while writing checks, fear of public speaking and avoidance of public restrooms.

By 1987 the DSM had removed the key phrase “a compelling desire to avoid,” requiring instead only “marked distress,” and signs of that could include concern about saying the wrong thing. “Impairment became something largely in the eye of the beholder, and anticipated embarrassment was enough to meet the diagnostic threshold,” says Lane.

“That’s a ridiculous way to assess a serious mental disorder, with implications for the way we also view childhood traits and development,” Lane adds. “But that didn’t stop SAD from becoming what Psychology Today dubbed ‘the disorder of the 1990s.'”

 

Privately shy

Where, though, are all these shy people hiding, and what causes shyness? Bernardo Carducci, Director of the Shyness Research Institute, and Philip Zimbardo explain:

  • Many people are shy without appearing ill-at-ease. Only a small percentage (15-20%) are visibly shy to the casual observer.
  • Shyness is mostly the result of parenting and life experiences although it does have a small genetic component.
  • Levels of shyness vary across cultures with Israelis being the least shy and those from Japan and Taiwan being the most shy.
  • Levels of shyness in the US have increased by about 10% over the last three decades, to the current figure of 50%.
  • Some people are shy extroverts – US talk-show host David Letterman is a good example of someone who has learned to ‘act’ extroverted.

Costs of shyness

Shy people are at risk of losing out in many situations:

  • Shy children may self-select solitary activities which fail to boost their social skills.
  • Shy children are the easiest targets for bullies at school as they are usually highly reactive.
  • Shyness leads to loneliness. Loneliness isn’t good for anyone.
  • Shyness leads to a lack of social support. We all need someone to give us a bit of perspective. Without it we can easily hold onto unrealistic beliefs about ourselves and others.
  • Shy people find it difficult to live in the present in social situations – they will tend to hesitate while they review what are perceived as past failures.

Carducci and Zimbardo only mention one ray of hope for the shy: they make good listeners. It’s not much, though, set against this litany of disadvantages.

Overcoming shyness

John Wesley, who describes his shyness as a major weakness, has some useful suggestions about how to overcome it:

  • ‘It’s Not You It’s Them’ – Realising that the perceived slights from others shouldn’t be taken personally.
  • ‘Other People Aren’t So Different’ – Well, now you know that 50% of people consider themselves shy – that’s a lot of people who feel the same as you.
  • ‘Realizing Self-Worth’ – Get used to sharing your thoughts with others by forcing yourself to speak up.
  • ‘The Duty to Contribute’ – Shyness can limit your own growth and your ability to contribute.

These are useful suggestions and most of them involve what shyness expert Dr Carducci sees as the central issue (Carducci, 2000). For the shy, he argues, the key is to become more other-directed.

A group identified in the research as the ‘successfully shy’ recognise their own shyness and take particular steps to combat it. They plan ahead for gaps in the conversation, they arrive early to parties to get the lie of the land, they rehearse conversational opening gambits. They use any trick to move their focus of attention from themselves and their own self-consciousness and outwards to the other people.

Dr Carducci argues that what our society needs is not fewer shy people but more ‘successfully shy’ people.

Are you shy?

If you consider yourself shy, do you agree with the research findings discussed above? If not, what is your experience of shyness? What strategies do you use to combat your shyness?

Reference

Carducci, B. (2000). Shyness: The New Solution. Psychology Today, 33(1), 38-40.


The memory bank

You’ll have heard about the usual methods for improving memory, like using imagery, chunking and building associations with other memories. If not, Google them and you’ll find millions of websites with the same information.

The problem with most of these methods is they involve a fair amount of mental effort.

So here are seven easy ways to boost your memory that are backed up by psychological research. None require you to train hard, spend any money or take illegal drugs. All free, all pretty easy, all natural!

1. Write about your problems

To do complex tasks we rely on our ‘working memory’. This is our ability to shuttle information in and out of consciousness and manipulate it. A more efficient working memory contributes to better learning, planning, reasoning and more.

One way to increase working memory capacity indirectly is through expressive writing. You sit down for 20 minutes a few times a month and write about something traumatic that has happened to you. Yogo and Fujihara (2008) found that it improved working memory after 5 weeks.

Psychologists aren’t exactly sure why this works, but it does have a measurable effect.

2. Look at a natural scene

Nature has a magical effect on us. It’s something we’ve always known, but psychologists are only just getting around to measuring it.

One of nature’s beneficial effects is improving memory. In one study people who walked around an arboretum did 20% better on a memory test than those who went for a walk around busy streets.

In fact you don’t even need to leave the house. Although the effects aren’t as powerful, you can just look at pictures of nature and that also has a beneficial effect.

3. Say words aloud

This is surely the easiest of all methods for improving memory: if you want to remember something in particular from a load of other things, just say it out loud. A study found memory improvements of 10% for words said out loud, or even just mouthed: a relatively small gain, but at a tiny cost.

4. Meditate (a bit)

Meditation has been consistently found to improve cognitive functioning, including memory. But meditation takes time, doesn’t it? Long, hard hours of practice? Well, maybe not.

In one recent study, participants who meditated for 4 sessions of only 20 minutes, once a day, saw boosts to their working memory and other cognitive functions.

5. Predict your performance

Simply asking ourselves whether or not we’ll remember something has a beneficial effect on memory. This works for both recalling things that have happened in the past and trying to remember to do things in the future.

When Meier et al. (2011) tested people’s prospective memory (remembering to do something in the future), they found that trying to predict performance was beneficial. On some tasks people’s performance increased by almost 50%.

6. Use your body to encode memories

We don’t just think with our minds, we also use our bodies. For example, research has shown that we understand language better if it’s accompanied by gestures.

We can also use gestures to encode memories. Researchers trying to teach Japanese verbs to English speakers found that gesturing while learning helped encode the memory (Kelly et al., 2009). Participants who used hand gestures which suggested the word were able to recall almost twice as many Japanese words a week later.

7. Use your body to remember

Since our bodies are important in encoding a memory, they can also help in retrieving it. Psychologists have found that we recall past episodes better when we are in the same mood or our body is in the same position (Dijkstra et al., 2007).

This works to a remarkably abstract degree. In one study by Casasanto and Dijkstra (2010), participants were better able to retrieve positive memories when they moved marbles upwards and negative memories when they moved marbles downwards. This seems to be because we associate up with happy and down with sad.

More effort?

If all these methods seem a bit lazy, then you can always put in a bit more effort.

Probably the best way of improving your overall cognitive health is exercise. Studies regularly find that increasing aerobic fitness is particularly good for executive function and working memory.

Conversely, stay in bed all the time and your working memory gets worse (Lipnicki et al., 2009).

Take your memory training to the limit and an incredible study by Ericsson et al. (1980) shows what can be achieved. Our typical short-term memory span is about 7 things. In other words we can hold around seven things in mind at the same time. These researchers, though, increased one person’s memory span to 79 digits after 230 hours of practice, mostly using mnemonic systems.
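One of those mnemonic systems, chunking, is easy to demonstrate. Here is a minimal Python sketch (the digit string is arbitrary): regrouping a stream of digits into larger units drastically cuts the number of items you must hold in mind at once.

```python
def chunk(digits: str, size: int) -> list[str]:
    """Split a digit string into consecutive groups of `size` characters."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

digits = "4916150719453271"  # 16 digits: well beyond a typical 7-item span

print(chunk(digits, 1))  # 16 separate items to hold in mind
print(chunk(digits, 4))  # only 4 chunks, each mappable to something familiar
```

Ericsson's subject pushed the same principle much further, recoding digit groups as familiar quantities such as running times, so each "item" in memory stood for several digits at once.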

Shows what you can do if you put in the hours. That said, I’ll be sticking to a nice walk around the park.

 


Meditation and mindfulness

How Meditation Improves Attention

The science of meditation and attention, including a beginner’s guide to meditation.


William James wrote that controlling attention is at “the very root of judgement, character and will”. He also noted that controlling attention is much easier said than done. This is unfortunate because almost every impressive human achievement is, at heart, a feat of attention. Art, science, technology — you name it — someone, somewhere had to concentrate, and concentrate hard.

Wouldn’t it be fantastic to be able to concentrate without effort? Not to feel the strain of directing attention, just to experience a relaxed, intense, deep focus? So naturally the million dollar question is: how can attention be improved?

Psychologists are fascinated by the sometimes fantastical claims made for meditation, particularly in its promise of improving attention. It certainly seems intuitively right that meditation should improve attention — after all meditation is essentially concentration practice — but what does the scientific evidence tell us?

Does meditation improve attention?

The problem with attention is that it naturally likes to jump around from one thing to another: attention is antsy, it won’t settle — this is not in itself a bad thing, just the way it is. Attention’s fidgety nature can be clearly seen in the phenomenon of ‘binocular rivalry’. If you show one picture to one eye and a different picture to the other eye, attention shuttles between them, wondering which is more interesting.

A simple lab version of this presents a set of vertical lines to one eye and a set of horizontal lines to the other. What people see is the brain flipping between the horizontal and the vertical lines and occasionally merging them both together, seemingly at random. People usually find it difficult to see either the horizontal or the vertical lines — or even the merged version — for an extended period because attention naturally flicks between them.

If the binocular rivalry test is a kind of index of the antsy-ness of attention, then those with more focused attention should see fewer changes. So reasoned Carter et al. (2005), who had 76 Tibetan Buddhists in their mountain retreats meditate before taking a binocular rivalry test. They sat, wearing display goggles and staring at the lines, pressing a button each time the dominant view changed between horizontal, vertical and merged. The more button presses, the more times their attention switched.
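The index this task yields is simply a rate of reported switches over the viewing period. A hypothetical sketch in Python (the timestamps and trial length are made up; this is not the authors' analysis code):

```python
# Each timestamp (seconds into the trial) marks a button press, i.e. a
# reported switch between horizontal, vertical and merged percepts.
button_presses = [2.1, 5.8, 9.0, 14.3, 21.7, 33.2, 47.5]  # made-up data
trial_duration_s = 300.0  # a hypothetical 5-minute viewing period

switches_per_minute = len(button_presses) / (trial_duration_s / 60.0)
print(f"{switches_per_minute:.1f} perceptual switches per minute")

# A meditator reporting complete image stability would log zero presses
# for the entire trial.
```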


In one condition their meditation was ‘compassionate’, thinking about all the suffering in the world while in the other it was ‘one-point’ meditation focusing completely on one aspect of their experience, for example their breath going in and out. Although the ‘compassionate’ form of meditation had no effect, the ‘one-point’ meditation reduced the rate of switching in half the participants.

The results were even more dramatic when the Buddhists carried out the one-point meditation while looking through the goggles. Some of the most experienced monks reported complete image stability: they saw just the horizontal or vertical lines for a full 5 minutes. When compared to people who do not meditate, these results are exceptional.

Quicker results

Of course we don’t all have 20 years to pass in a mountain retreat learning how to concentrate, so is there any hope for the rest of us? A recent study by Dr. Amishi Jha and colleagues at the University of Pennsylvania suggests there is (Jha, Krompinger & Baime, 2007). Rather than recruiting people who were already superstar concentrators, they sent people who had not practised meditation before on an 8-week training course in mindfulness-based stress reduction, a type of meditation. This consisted of a series of 3-hour classes, with at least 30 minutes of meditation practice per day.


These 17 participants were then compared with a further 17 from a control group on a series of attentional measures. The results showed that those who had received training were better at focusing their attention than the control group. This certainly suggests that meditation was improving people’s attention.

Dr. Jha and colleagues were also interested in how practice beyond beginner level would affect people’s powers of attention. To test this they sent participants who were already meditators on a mindfulness retreat for one month. Afterwards they were given the same series of attention measures and were found to have improved in their reactions to new stimuli. In other words they seemed to have become more receptive.

Attentional improvements from meditation, though, have recently been reported even quicker than 8 weeks. A study carried out by Yi-Yuan Tang and colleagues gave participants just 20 minutes instruction every day for five days (Tang et al., 2007). Participants practised a Chinese form of meditation called ‘integrative body-mind training’, which uses similar techniques to other types of meditation. They found that after only this relatively short introduction participants demonstrated improved attention compared to a control group, along with other benefits such as lower levels of stress and higher energy levels.

There is even evidence that meditation can improve a major limitation of the brain’s attentional system. Attentional blink is the finding that our attention ‘blinks’ for about half a second right after we focus on something (follow the link for the full story). Meditation, however, seems to be able to increase our minds’ attentional bandwidth. Slagter et al. (2007) gave participants 3 months of intensive meditation training and found that afterwards the attentional blink was seriously curtailed. In other words people were capable of processing information more quickly and accurately. Perhaps, then, meditation really can open the doors of perception…


This research on meditation’s effect on attention is just the tip of the iceberg. Other studies have also suggested that meditation can benefit motivation, cognition, emotional intelligence and may even sharpen awareness to such an extent that we can control our dreams (Walsh & Shapiro, 2006). And these are just the psychological benefits; there also appear to be considerable physical benefits.

Beginner’s guide to meditation

Since it is so beneficial here is a quick primer on how to meditate. Meditation is like chess: the rules are relatively easy to explain, but the game itself is infinitely complex. And like chess the names and techniques of meditation are many and varied but the fundamentals are much the same:

  1. Relax the body and the mind. This can be done through body posture, mental imagery, mantras, music, progressive muscle relaxation, any old trick that works. Take your pick. This step is relatively easy as most of us have some experience of relaxing, even if we don’t get much opportunity.
  2. Be mindful. Bit cryptic this one but it means something like this: don’t pass judgement on your thoughts, let them come and go as they will (and boy will they come and go!) but try to nudge your attention back to its primary aim, whatever that is. Turns out this is quite difficult because we’re used to mentally travelling backwards and forwards while making judgements on everything (e.g. worrying, dreading, anticipating, regretting etc.). The key is to notice in a detached way what’s happening but not to get involved with it. This way of thinking often doesn’t come that naturally.
  3. Concentrate on something. Often meditators concentrate on their breath, the feel of it going in and out, but it could be anything: your feet, a potato, a stone. The breath is handy because we carry it around with us. But whatever it is try to focus all your attention onto it. When your attention wavers, and it will almost immediately, gently bring it back. Don’t chide yourself, be good to yourself, be nice. The act of concentrating on one thing is surprisingly difficult: you will feel the mental burn almost immediately. Experienced practitioners say this eases with practice.
  4. Concentrate on nothing. Most say this can’t be achieved without a lot of practice, so I’ll say no more about it here. Master the basics first.
  5. Zzzzz Zzzzz. That’s not meditating, that’s sleeping.

This is just a quick introduction but does give you enough to get started. It’s important not to get too caught up in techniques but to remember the main goal: exercising attention by relaxing and focusing on something. Try these things out first, see what happens, then explore further.

New ways of being

As William James pointed out, attention is so fundamental to our daily lives that sharpening it up is bound to spill over into many different areas of everyday life. This series of articles on attention shows that when attention goes wrong people are frequently beset by unsettling experiences, but when it goes right we are capable of all sorts of incredible abilities, like multitasking, the cocktail party effect, and even curtailing the attentional blink.

In fact attention is so fundamental to consciousness that it’s no exaggeration to say that what we pay attention to makes us who we are. Potentially, then, meditation offers a way to remake ourselves, leaving behind damaging or limiting habits and discovering new ways of being.

 


Mind-fields

How to Live With an Unknowable Mind


We know surprisingly little about our own personalities, attitudes and even self-esteem. How do we live with that?

How do you imagine your own mind?

I sometimes picture mine as a difficult and contrary child; the kind that throws a stone at you for no reason and can’t explain itself. Or while at the beach it sits silent, looking miserable. But at a wedding it is determined to scream at the top of its lungs through all the quiet bits.

One reason minds can be frustrating is that we only have access to part of them, by definition the conscious part. The rest, the unconscious, lies there mysteriously, doing things we don’t understand and often don’t seem to have requested.

Except we don’t know it’s doing things we haven’t asked it to, because we can’t interrogate it. It turns out that the unconscious is mostly inaccessible (Wilson & Dunn, 2004).

This is quite a different view of the mind than Freud had. He thought you could rummage around and dig things up that would help you understand yourself. Modern theorists, though, see large parts of the mind as being completely closed off. You can’t peer in and see what’s going on, it’s like the proverbial black box.

The idea that large parts of our minds can’t be accessed is fine for basic processes like movement, seeing or hearing. Generally I’m not bothered how I work out which muscles to contract to pedal my bicycle; neither do I want access to how I perceive sound.

Other parts would be extremely interesting to know about. Why do certain memories come back to me more strongly than others? How extraverted am I really? Why do I really vote this way rather than that?

Here are three examples of areas in which our self-knowledge is relatively low:

1. Personality

You’d be pretty sure that you could describe your personality to someone else, right? You know how extroverted you are, how conscientious, how optimistic?

Don’t be so sure.

When people’s personalities are measured implicitly, i.e. by seeing what they do, rather than what they say they do, the correlations are sometimes quite low (e.g. Asendorpf et al., 2002). We seem to know something about our own personalities, but not as much as we’d like to think.

2. Attitudes

Just like in personality, people’s conscious and unconscious attitudes also diverge.

We sometimes lie about our attitudes to make ourselves look better, but this is more than that. This difference between our conscious and unconscious attitudes occurs on subjects where we couldn’t possibly be trying to make ourselves look better (Wilson et al., 2000). Rather we seem to have unconscious attitudes that consciously we know little about (I’ve written about this previously in: Our secret attitude changes)

Once again we say we think one thing, but we act in a way that suggests we believe something different.

3. Self-esteem

Perhaps this is the oddest one of all. Surely we know how high our own self-esteem is?

Well, psychologists have used sneaky methods of measuring self-esteem indirectly and then compared them with what we explicitly say. They’ve found only very weak connections between the two (e.g. Spalding & Hardin, 1999). Amazingly some studies find no connection at all.

It seems almost unbelievable that we aren’t aware of how high our own self-esteem is, but there it is. It’s another serious gap between what we think we know about ourselves and what we actually know.

Road to self-knowledge

So, what if we want to get more accurate information about ourselves without submitting to psychological testing?

It’s not easy because according to modern theories, there is no way to directly access large parts of the unconscious mind. The only way we can find out is indirectly, by trying to piece it together from various bits of evidence we do have access to.

As you can imagine, this is a very hit-and-miss affair, which is part of the reason we find it so difficult to understand ourselves. The result of trying to piece things together is often that we end up worse off than when we started.

Take the emotions. Studies show that when people try to analyse the reasons for their feelings, they end up feeling less satisfied (Wilson et al., 1993). Focusing too much on negative feelings can make them worse and reduce our ability to find solutions.

Perhaps the best way to gain self-knowledge is to carefully watch our own thoughts and behaviour. Ultimately what we do is not only how others judge us but also how we should judge ourselves.

How to live with an unknowable mind

Taking all this together, here are my rough-draft principles for living with an unknowable mind:

  1. The mind is a tremendous story-teller and will try to make up pleasing stories about your thoughts and behaviour. These aren’t necessarily true.
  2. Using introspection you can’t always (ever?) know what you really think or who you really are.
  3. Using introspection to work out what you are or what you think can be damaging, encouraging rumination and depressive thoughts.
  4. This isn’t depressing, it’s liberating: now you know it’s perfectly normal not to understand some/most aspects of yourself, you can relax.
  5. If you must push for greater self-knowledge, try to become a better observer of your own thoughts and behaviour. Notice what you do and when, then try to infer the why. But don’t push it; always remember points 1-4.

 


Taming the self

Top 10 Self-Control Strategies


The science of self-control: use rewards, commitments, self-affirmation, adjust values, fight the unconscious and more…

Self-control is vital to our success.

People who have good self-control tend to be both more popular and more successful in many areas of life. Those with low self-control, though, are at risk of overeating, addictions and underachievement.

Unfortunately, as we all know to our cost, self-control frequently fails. Part of the problem is we overestimate our ability to resist temptation (Nordgren et al., 2009).

Self-control can be built up, like a muscle (Baumeister et al., 2006). But you need to do the right types of mental exercises. So, here are ten techniques to boost your self-control that are based on psychological research.

1. Respect low ego

Research has found that self-control is a limited resource (Vohs et al., 2000). Exercising it has clear physiological effects, like lower glucose levels (Gailliot et al., 2007).

At any one time we only have so much self-control in the tank. When you’ve been tightly controlling yourself, the tank is low and you become more likely to give in to temptation. Psychologists call this ‘ego-depletion’.

Recognise when your levels of self-control are low and make sure you find a way to avoid temptation during those times. The first step to greater self-control is acknowledging when you’re at your weakest.

2. Pre-commit

Make the decision before you’re in the tempting situation. Pre-committing yourself to difficult goals can lead to increased performance. In one study by Ariely and Wertenbroch (2002) students who imposed strict deadlines on themselves performed better than those who didn’t.

Only take a limited amount of money with you to curtail spending, or only have healthy foods at home to avoid the temptation to go astray.

It’s difficult to pre-commit because normally we like to leave our options open. But if you’re harsh on your future self, you’re less likely to regret it.

3. Use rewards

Rewards can really work to help strengthen self-control. Trope and Fishbach (2000) found that participants were better able to make short-term sacrifices for long-term gains when they had a self-imposed reward in mind. So setting ourselves rewards does work, even when the reward is self-imposed.

4. …and penalties

Just like the carrot, the stick also works. Not only should we promise ourselves a reward for good behaviour, we should also give ourselves a penalty for bad behaviour.

When Trope and Fishbach (2000) tested self-imposed penalties experimentally, they found the threat of punishment encouraged people to act in service of their long-term goals.

5. Fight the unconscious

Part of the reason we’re easily led into temptation is that our unconscious is always ready to undermine our best intentions.

Fishbach et al. (2003) found that participants were easily tempted outside their conscious awareness by the mere suggestions of temptation. On the other hand, the same was also true of goals. When goals were unconsciously triggered, participants turned towards their higher-order goals.

The practical upshot is simple. Try to keep away from temptations—both physically and mentally—and stay close to things that promote your goals. Each unconsciously activates the associated behaviour.

6. Adjust expectations

Even if it doesn’t come naturally, try to be optimistic about your ability to avoid temptations.

Studies like Zhang and Fishbach (2010) suggest that being optimistic about avoiding temptation and reaching goals can be beneficial. Participants who were optimistic stuck at their task longer than those who had been asked to make accurate predictions about reaching a goal.

Allow yourself to overestimate how easy it will be to reach your goal. As long as it doesn’t spill over into fantasy-land, being fuzzy on the tricky bits can motivate.

7. Adjust values

Just as you can try to think more optimistically, you can also change how you value both goals and temptations. Research suggests that devaluing temptations and increasing the value of goals increases performance (Fishbach et al., 2009).

When we value our goal more we automatically orient ourselves towards it. In the same way devaluing temptations helps us automatically avoid them.

8. Use your heart

The heart often rules the head, so use your emotions to increase self-control.

In one study children were able to resist eating marshmallows by thinking of them as ‘white clouds’ (Mischel & Baker, 1975). This is one way of avoiding temptations: by cooling down the emotions associated with them.

You can increase the pull towards your goal in the same way: think about the positive emotional aspects of achieving it; say, the pride, or excitement.

9. Self-affirmation

Sometimes exercising self-control means avoiding a bad habit. One way of doing this is by using self-affirmations. This means reaffirming the core things you believe in. This could be family, creativity or anything really, as long as it’s a core belief of yours.

When participants in one study did this, their self-control was replenished (the study is described here: self-affirmation in self-control). Thinking about core values can help top-up your self-control when it’s been depleted.

10. Think abstract

Part of the reason self-affirmations work is that they make us think in the abstract. And abstract thinking has been shown to boost self-control.

In research described here, Fujita et al. (2006) found that people thinking in the abstract (versus concrete) were more likely to avoid temptation and better able to persist at difficult tasks.

We are more likely to think in the abstract if we think about the reasons why we’re doing something, rather than just how we’re doing it.

Another good reason not to give in…

There’s a comforting thought that if we give in to temptation just this once, we’ll come back stronger afterwards.

However, psychological research has suggested this isn’t true. Students who had a good (versus mediocre) break from studying to ‘replenish’ themselves didn’t show increased motivation when they returned (Converse & Fishbach, 2008, described in Fishbach et al., 2010).

If all else fails, know that giving in won’t bring you back stronger. Worse, giving in to temptation may well just increase your tendency to crumble again in the future.

 


Not so sleeping beauty

6 Easy Steps to Falling Asleep Fast


Psychological research over three decades demonstrates the power of Stimulus Control Therapy.

Can’t get a good night’s sleep? You’re not alone. In surveys of what would improve people’s lives, a good night’s sleep frequently comes near the top of the list.

Poor sleep results in worse cognitive performance, including degraded memory, attention and alertness. In the long term, insomnia is also associated with anxiety and depression, and people’s sleep gets worse as they get older. After the age of 65, between 12% and 40% of people have insomnia.

All sorts of methods have been tried to combat poor sleep, from drugs through psychological remedies to more outlandish treatments.

The problem with drugs is that they have side-effects and are often addictive. The problem with the more outlandish treatments is that although they tend not to have side-effects, we don’t know if they have any effect at all. Psychological remedies, though, combine the best of both worlds: studies show they work without side-effects.

Stimulus Control Therapy

Professor Richard R. Bootzin has been researching sleep disorders for many years at the University of Arizona Sleep Research Lab. Writing in the Annual Review of Clinical Psychology, he describes the different psychological approaches that have been used to treat insomnia (Bootzin & Epstein, 2011).

Of these, the most successful single intervention is called Stimulus Control Therapy (Morin et al., 2006). You’ll be happy to hear it consists of six very straightforward steps. If you follow these, it should improve your sleep. After the list I’ll explain the thinking behind them. First, here are the six steps:

  1. Lie down to go to sleep only when you are sleepy.
  2. Do not use your bed for anything except sleep; that is, do not read, watch television, eat, or worry in bed. Sexual activity is the only exception to this rule. On such occasions, the instructions are to be followed afterwards, when you intend to go to sleep.
  3. If you find yourself unable to fall asleep, get up and go into another room. Stay up as long as you wish and then return to the bedroom to sleep. Although we do not want you to watch the clock, we want you to get out of bed if you do not fall asleep immediately. Remember the goal is to associate your bed with falling asleep quickly! If you are in bed more than about 10 minutes without falling asleep and have not gotten up, you are not following this instruction.
  4. If you still cannot fall asleep, repeat step 3. Do this as often as is necessary throughout the night.
  5. Set your alarm and get up at the same time every morning irrespective of how much sleep you got during the night. This will help your body acquire a consistent sleep rhythm.
  6. Do not nap during the day.

Why it works

This method is based on the idea that we are like Pavlov’s drooling dog. We attach certain stimuli in the environment to certain thoughts and behaviours. Famously Pavlov’s dogs would start drooling when a bell rang, because they associated hearing the bell with getting food. Eventually the dogs would drool at the sound of the bell even when they didn’t get any food. Replace the bell with a bed and food with sleep and conceptually you’re there.

If we learn to do all kinds of things in bed that aren’t sleep, then when we do want to use it for sleep, it’s harder because of those other associations.

This is just as true of thoughts as it is of actions. It’s important to avoid watching TV in bed, but it’s also important to avoid lying in bed worrying about not being able to get to sleep. Because then you learn to associate bed with worry. Worse, you suffer anticipatory anxiety: anxiety about the anxiety you’ll feel when you are trying to get to sleep.

So, this therapy works by strengthening the association between bed and sleep and weakening the association between bed and everything else (apart from sex!).

Other treatments supported by the research are progressive muscle relaxation, which is exactly what it sounds like, and paradoxical intention. This latter technique involves stopping people trying so hard to get to sleep. The paradox being that when people stop trying so hard, they find it easier to fall asleep.

All this assumes you don’t live next door to a late night drummer and you’re not downing a double espresso before hitting the sack, but those sorts of things are pretty obvious. Everything else being equal, though, Stimulus Control Therapy seems the easiest for most people to implement.

 


Sleepless in Cape Town

Outside the wind is howling and the rain lashes down; my eyes spring open and my mind begins to roam. At first it goes to the poor living in the informal settlement about a kilometer from my warm home; it peeks inside their leaking shacks and watches as the occupants huddle beneath their black plastic refuse bags, trying to escape the freezing rain. My mind then unpacks the previous day like an airport customs agent looking for contraband; it rifles through my mental pockets looking for sharp objects, relational conflict, things left undone: these are all grist to the mind’s mill. In an effort to repair any inconsistency it finds, my mind then begins to construct a to-do list for the following day, taking mental notes, commenting inanely like an aging monarch waving at the passing crowd. Somewhere buried deep in my consciousness is an awareness that my mind shouldn’t be doing this at 2:37 a.m.; the recognition flickers and is then subsumed by the next pale thought.

Why do I continually do this to myself? I went looking for answers (at 2:53 a.m.).

Current research about insomnia falls into different categories, for example:

Psychophysiologic Insomnia:

In many cases, it is unclear if chronic insomnia is a symptom of some physical or psychological condition or if it is a primary disorder of its own. In most instances, a mix of psychological and physical conditions appears to cause insomnia.

Psychophysiologic insomnia occurs when:

Transient insomnia disrupts the person’s circadian rhythm. The poor sod then begins to associate bed not with rest and relaxation but with a struggle to sleep. A pattern of sleep failure emerges. Over time, this repeats, and bedtime becomes a source of anxiety. Once in bed, the now harrowed individual broods over the inability to sleep (“but I was tired when I went to bed!”), the consequences of sleep loss (“and I have 10 clients tomorrow!”), and the lack of mental control (“OHM…AUUUMMM…BUGGER!!…OOOHHHMMM”). All attempts at sleep fail (“F*&%$%…OHM…BUGGER!”).

Eventually excessive worry about sleep loss becomes persistent and provides an automatic nightly trigger for anxiety and arousal. Unsuccessful attempts to control thoughts, images, and emotions only worsen the situation. After such a cycle is established, insomnia becomes a self-fulfilling prophecy that can persist indefinitely.

Medical Conditions and Their Treatments

Among the many medical problems that can cause chronic insomnia are allergies, benign prostatic hyperplasia (BPH), arthritis, cancer, heart disease, gastroesophageal reflux disease (GERD), hypertension, asthma, emphysema, rheumatologic conditions, Alzheimer’s disease, Parkinson’s disease, hyperthyroidism, epilepsy, and fibromyalgia. Other types of sleep disorders, such as restless legs syndrome and sleep apnea, can cause insomnia. Many patients with chronic pain also sleep poorly.

Medications. Among the many medications that can cause insomnia are antidepressants (fluoxetine, bupropion), theophylline, lamotrigine, felbamate, beta-blockers, and beta-agonists.

Substance Abuse

About 10 – 15% of chronic insomnia cases result from substance abuse, especially alcohol, cocaine, and sedatives. One or two drinks at dinner, for most people, pose little danger of alcoholism and may help reduce stress and initiate sleep. Excess alcohol or alcohol used to promote sleep (normally >3 glasses) tends to fragment sleep and cause wakefulness a few hours later. It also increases the risk for other sleep disorders, including sleep apnea and restless legs. Alcoholics often suffer insomnia during withdrawal and, in some cases, for several years during recovery.

OK, so I am currently not an alcoholic in recovery, do not have Alzheimer’s (to my knowledge), have not got a nostril full of cocaine, nor restless leg syndrome…hmmm, could it be that I am just a little anxious about an academic paper I have to write and have been avoiding at all costs (including my sleep)? Probably, so I think I’ll avoid it for a moment longer and go and make myself a warm cuppa.


Snake oils and sanity…

The Epidemic of Mental Illness: Why?

JUNE 23, 2011

Marcia Angell

An advertisement for Prozac, from The American Journal of Psychiatry, 1995

It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007—from one in 184 Americans to one in seventy-six. For children, the rise is even more startling—a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.
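The “nearly two and a half times” figure follows directly from the two rates quoted; a quick check of the arithmetic:

```python
# Disability rates quoted in the text: 1 in 184 Americans (1987)
# versus 1 in 76 (2007).
rate_1987 = 1 / 184
rate_2007 = 1 / 76

print(f"Relative increase: {rate_2007 / rate_1987:.2f}x")  # about 2.42x
```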

A large survey of randomly selected adults, sponsored by the National Institute of Mental Health (NIMH) and conducted between 2001 and 2003, found that an astonishing 46 percent met criteria established by the American Psychiatric Association (APA) for having had at least one mental illness within four broad categories at some time in their lives. The categories were “anxiety disorders,” including, among other subcategories, phobias and post-traumatic stress disorder (PTSD); “mood disorders,” including major depression and bipolar disorders; “impulse-control disorders,” including various behavioral problems and attention-deficit/hyperactivity disorder (ADHD); and “substance use disorders,” including alcohol and drug abuse. Most met criteria for more than one diagnosis. Of a subgroup affected within the previous year, a third were under treatment—up from a fifth in a similar survey ten years earlier.

Nowadays treatment by medical doctors nearly always means psychoactive drugs, that is, drugs that affect the mental state. In fact, most psychiatrists treat only with drugs, and refer patients to psychologists or social workers if they believe psychotherapy is also warranted. The shift from “talk therapy” to drugs as the dominant mode of treatment coincides with the emergence over the past four decades of the theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs. That theory became broadly accepted, by the media and the public as well as by the medical profession, after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants. The increased use of drugs to treat psychosis is even more dramatic. The new generation of antipsychotics, such as Risperdal, Zyprexa, and Seroquel, has replaced cholesterol-lowering agents as the top-selling class of drugs in the US.

What is going on here? Is the prevalence of mental illness really that high and still climbing? Particularly if these disorders are biologically determined and not a result of environmental influences, is it plausible to suppose that such an increase is real? Or are we learning to recognize and diagnose mental disorders that were always there? On the other hand, are we simply expanding the criteria for mental illness so that nearly everyone has one? And what about the drugs that are now the mainstay of treatment? Do they work? If they do, shouldn’t we expect the prevalence of mental illness to be declining, not rising?

These are the questions, among others, that concern the authors of the three provocative books under review here. They come at the questions from different backgrounds—Irving Kirsch is a psychologist at the University of Hull in the UK, Robert Whitaker a journalist and previously the author of a history of the treatment of mental illness called Mad in America (2001), and Daniel Carlat a psychiatrist who practices in a Boston suburb and publishes a newsletter and blog about his profession.

The authors emphasize different aspects of the epidemic of mental illness. Kirsch is concerned with whether antidepressants work. Whitaker, who has written an angrier book, takes on the entire spectrum of mental illness and asks whether psychoactive drugs create worse problems than they solve. Carlat, who writes more in sorrow than in anger, looks mainly at how his profession has allied itself with, and is manipulated by, the pharmaceutical industry. But despite their differences, all three are in remarkable agreement on some important matters, and they have documented their views well.

First, they agree on the disturbing extent to which the companies that sell psychoactive drugs—through various forms of marketing, both legal and illegal, and what many people would describe as bribery—have come to determine what constitutes a mental illness and how the disorders should be diagnosed and treated. This is a subject to which I’ll return.

Second, none of the three authors subscribes to the popular theory that mental illness is caused by a chemical imbalance in the brain. As Whitaker tells the story, that theory had its genesis shortly after psychoactive drugs were introduced in the 1950s. The first was Thorazine (chlorpromazine), which was launched in 1954 as a “major tranquilizer” and quickly found widespread use in mental hospitals to calm psychotic patients, mainly those with schizophrenia. Thorazine was followed the next year by Miltown (meprobamate), sold as a “minor tranquilizer” to treat anxiety in outpatients. And in 1957, Marsilid (iproniazid) came on the market as a “psychic energizer” to treat depression.

In the space of three short years, then, drugs had become available to treat what at that time were regarded as the three major categories of mental illness—psychosis, anxiety, and depression—and the face of psychiatry was totally transformed. These drugs, however, had not initially been developed to treat mental illness. They had been derived from drugs meant to treat infections, and were found only serendipitously to alter the mental state. At first, no one had any idea how they worked. They simply blunted disturbing mental symptoms. But over the next decade, researchers found that these drugs, and the newer psychoactive drugs that quickly followed, affected the levels of certain chemicals in the brain.

Some brief—and necessarily quite simplified—background: the brain contains billions of nerve cells, called neurons, arrayed in immensely complicated networks and communicating with one another constantly. The typical neuron has multiple filamentous extensions, one called an axon and the others called dendrites, through which it sends and receives signals from other neurons. For one neuron to communicate with another, however, the signal must be transmitted across the tiny space separating them, called a synapse. To accomplish that, the axon of the sending neuron releases a chemical, called a neurotransmitter, into the synapse. The neurotransmitter crosses the synapse and attaches to receptors on the second neuron, often a dendrite, thereby activating or inhibiting the receiving cell. Axons have multiple terminals, so each neuron has multiple synapses. Afterward, the neurotransmitter is either reabsorbed by the first neuron or metabolized by enzymes so that the status quo ante is restored. There are exceptions and variations to this story, but that is the usual way neurons communicate with one another.

When it was found that psychoactive drugs affect neurotransmitter levels in the brain, as evidenced mainly by the levels of their breakdown products in the spinal fluid, the theory arose that the cause of mental illness is an abnormality in the brain’s concentration of these chemicals that is specifically countered by the appropriate drug. For example, because Thorazine was found to lower dopamine levels in the brain, it was postulated that psychoses like schizophrenia are caused by too much dopamine. Or later, because certain antidepressants increase levels of the neurotransmitter serotonin in the brain, it was postulated that depression is caused by too little serotonin. (These antidepressants, like Prozac or Celexa, are called selective serotonin reuptake inhibitors (SSRIs) because they prevent the reabsorption of serotonin by the neurons that release it, so that more remains in the synapses to activate other neurons.) Thus, instead of developing a drug to treat an abnormality, an abnormality was postulated to fit a drug.
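
The reuptake mechanism itself is easy to picture as a simple flow problem. Below is a deliberately crude toy model, with made-up rates rather than physiological values: transmitter is released into the synapse each time step, and a fraction of what is there is reabsorbed or metabolized. Blocking reuptake, as an SSRI does, raises the steady-state level.

```python
# Toy model of synaptic transmitter levels under reuptake inhibition.
# All numbers are made up for illustration; nothing here is physiological.

def steady_level(release, reuptake_frac, metabolism_frac, steps=200):
    """Transmitter level in the synapse after `steps` release/clearance cycles."""
    level = 0.0
    for _ in range(steps):
        level += release                  # presynaptic neuron releases transmitter
        level *= (1.0 - reuptake_frac)    # a fraction is reabsorbed (reuptake)
        level *= (1.0 - metabolism_frac)  # a fraction is broken down by enzymes
    return level

baseline = steady_level(release=1.0, reuptake_frac=0.5, metabolism_frac=0.1)
on_ssri = steady_level(release=1.0, reuptake_frac=0.1, metabolism_frac=0.1)

print(f"baseline synaptic level:  {baseline:.2f}")   # ~0.8 in these toy units
print(f"with reuptake inhibited:  {on_ssri:.2f}")    # ~4.3 -- much more remains
```

Note that the sketch illustrates only the pharmacology. It says nothing about whether low serotonin causes depression, which is precisely the leap the authors criticize.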

That was a great leap in logic, as all three authors point out. It was entirely possible that drugs that affected neurotransmitter levels could relieve symptoms even if neurotransmitters had nothing to do with the illness in the first place (and even possible that they relieved symptoms through some other mode of action entirely). As Carlat puts it, “By this same logic one could argue that the cause of all pain conditions is a deficiency of opiates, since narcotic pain medications activate opiate receptors in the brain.” Or similarly, one could argue that fevers are caused by too little aspirin.

But the main problem with the theory is that after decades of trying to prove it, researchers have still come up empty-handed. All three authors document the failure of scientists to find good evidence in its favor. Neurotransmitter function seems to be normal in people with mental illness before treatment. In Whitaker’s words:

Prior to treatment, patients diagnosed with schizophrenia, depression, and other psychiatric disorders do not suffer from any known “chemical imbalance.” However, once a person is put on a psychiatric medication, which, in one manner or another, throws a wrench into the usual mechanics of a neuronal pathway, his or her brain begins to function…abnormally.

Carlat refers to the chemical imbalance theory as a “myth” (which he calls “convenient” because it destigmatizes mental illness), and Kirsch, whose book focuses on depression, sums up this way: “It now seems beyond question that the traditional account of depression as a chemical imbalance in the brain is simply wrong.” Why the theory persists despite the lack of evidence is a subject I’ll come to.

Do the drugs work? After all, regardless of the theory, that is the practical question. In his spare, remarkably engrossing book, The Emperor’s New Drugs, Kirsch describes his fifteen-year scientific quest to answer that question about antidepressants. When he began his work in 1995, his main interest was in the effects of placebos. To study them, he and a colleague reviewed thirty-eight published clinical trials that compared various treatments for depression with placebos, or compared psychotherapy with no treatment. Most such trials last for six to eight weeks, and during that time, patients tend to improve somewhat even without any treatment. But Kirsch found that placebos were three times as effective as no treatment. That didn’t particularly surprise him. What did surprise him was the fact that antidepressants were only marginally better than placebos. As judged by scales used to measure depression, placebos were 75 percent as effective as antidepressants. Kirsch then decided to repeat his study by examining a more complete and standardized data set.

The data he used were obtained from the US Food and Drug Administration (FDA) instead of the published literature. When drug companies seek approval from the FDA to market a new drug, they must submit to the agency all clinical trials they have sponsored. The trials are usually double-blind and placebo-controlled, that is, the participating patients are randomly assigned to either drug or placebo, and neither they nor their doctors know which they have been assigned. The patients are told only that they will receive an active drug or a placebo, and they are also told of any side effects they might experience. If two trials show that the drug is more effective than a placebo, the drug is generally approved. But companies may sponsor as many trials as they like, most of which could be negative—that is, fail to show effectiveness. All they need is two positive ones. (The results of trials of the same drug can differ for many reasons, including the way the trial is designed and conducted, its size, and the types of patients studied.)
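
The two-positive-trials rule invites a back-of-the-envelope calculation. If a drug were in fact no better than placebo, each trial would still have roughly a 5 percent chance of a falsely "positive" result at the conventional significance threshold. The sketch below, in which the 5 percent figure and the independence of trials are assumptions for illustration rather than claims about any particular drug, shows how the odds of clearing the two-trial bar grow as a sponsor runs more trials.

```python
# Back-of-the-envelope: if a drug were truly no better than placebo, how often
# would a sponsor running n independent trials get the two "positive" results
# needed for approval by chance alone? The 5% false-positive rate per trial is
# the conventional significance threshold, assumed here for illustration.

from math import comb

def p_at_least_two_positives(n_trials, p_false_positive=0.05):
    """P(>= 2 'significant' trials) when the drug has no real effect."""
    p_zero = (1 - p_false_positive) ** n_trials
    p_one = comb(n_trials, 1) * p_false_positive * (1 - p_false_positive) ** (n_trials - 1)
    return 1 - p_zero - p_one

for n in (2, 6, 10, 20):
    print(f"{n:>2} trials: {p_at_least_two_positives(n):.1%} chance of approval-by-luck")
```

With two trials the chance is a negligible 0.2 percent; with twenty, it exceeds 25 percent. The more negative trials a company can afford to bury, the less the two-positive rule protects against an ineffective drug.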

For obvious reasons, drug companies make very sure that their positive studies are published in medical journals and doctors know about them, while the negative ones often languish unseen within the FDA, which regards them as proprietary and therefore confidential. This practice greatly biases the medical literature, medical education, and treatment decisions.

Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. This was a better data set than the one used in his previous study, not only because it included negative studies but because the FDA sets uniform quality standards for the trials it reviews and not all of the published research in Kirsch’s earlier study had been submitted to the FDA as part of a drug approval application.

Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
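
The two figures fit together arithmetically. The review gives only the ratio (82 percent) and the gap (1.8 points), but these jointly imply the absolute improvements in each group; back-calculating them, as in the sketch below, is an illustration rather than a quotation of Kirsch's data.

```python
# The 1.8-point drug-placebo gap on the HAM-D and the finding that placebos
# were 82% as effective as drugs jointly imply the absolute improvements,
# which are back-calculated here for illustration.

ratio = 0.82   # placebo improvement / drug improvement
gap = 1.8      # drug improvement - placebo improvement, in HAM-D points

drug_improvement = gap / (1 - ratio)           # d - 0.82*d = 1.8  =>  d = 10.0
placebo_improvement = drug_improvement * ratio

print(f"implied drug improvement:    {drug_improvement:.1f} HAM-D points")
print(f"implied placebo improvement: {placebo_improvement:.1f} HAM-D points")
```

In other words, patients on the drugs improved by about ten points and patients on placebo by about 8.2, which is the sense in which a difference can be statistically significant and still clinically meaningless.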

Kirsch was also struck by another unexpected finding. In his earlier study and in work by others, he observed that even treatments that were not considered to be antidepressants—such as synthetic thyroid hormone, opiates, sedatives, stimulants, and some herbal remedies—were as effective as antidepressants in alleviating the symptoms of depression. Kirsch writes, “When administered as antidepressants, drugs that increase, decrease or have no effect on serotonin all relieve depression to about the same degree.” What all these “effective” drugs had in common was that they produced side effects, which participating patients had been told they might experience.

It is important that clinical trials, particularly those dealing with subjective conditions like depression, remain double-blind, with neither patients nor doctors knowing whether or not they are getting a placebo. That prevents both patients and doctors from imagining improvements that are not there, something that is more likely if they believe the agent being administered is an active drug instead of a placebo. Faced with his findings that nearly any pill with side effects was slightly more effective in treating depression than an inert placebo, Kirsch speculated that the presence of side effects in individuals receiving drugs enabled them to guess correctly that they were getting active treatment—and this was borne out by interviews with patients and doctors—which made them more likely to report improvement. He suggests that the reason antidepressants appear to work better in relieving severe depression than in less severe cases is that patients with severe symptoms are likely to be on higher doses and therefore experience more side effects.

To further investigate whether side effects bias responses, Kirsch looked at some trials that employed “active” placebos instead of inert ones. An active placebo is one that itself produces side effects, such as atropine—a drug that selectively blocks the action of certain types of nerve fibers. Although not an antidepressant, atropine causes, among other things, a noticeably dry mouth. In trials using atropine as the placebo, there was no difference between the antidepressant and the active placebo. Everyone had side effects of one type or another, and everyone reported the same level of improvement. Kirsch reported a number of other odd findings in clinical trials of antidepressants, including the fact that there is no dose-response curve—that is, high doses worked no better than low ones—which is extremely unlikely for truly effective drugs. “Putting all this together,” writes Kirsch,

leads to the conclusion that the relatively small difference between drugs and placebos might not be a real drug effect at all. Instead, it might be an enhanced placebo effect, produced by the fact that some patients have broken [the] blind and have come to realize whether they were given drug or placebo. If this is the case, then there is no real antidepressant drug effect at all. Rather than comparing placebo to drug, we have been comparing “regular” placebos to “extra-strength” placebos.

That is a startling conclusion that flies in the face of widely accepted medical opinion, but Kirsch reaches it in a careful, logical way. Psychiatrists who use antidepressants—and that’s most of them—and patients who take them might insist that they know from clinical experience that the drugs work. But anecdotes are known to be a treacherous way to evaluate medical treatments, since they are so subject to bias; they can suggest hypotheses to be studied, but they cannot prove them. That is why the development of the double-blind, randomized, placebo-controlled clinical trial in the middle of the past century was such an important advance in medical science. Anecdotes about leeches or laetrile or megadoses of vitamin C, or any number of other popular treatments, could not stand up to the scrutiny of well-designed trials. Kirsch is a faithful proponent of the scientific method, and his voice therefore brings a welcome objectivity to a subject often swayed by anecdotes, emotions, or, as we will see, self-interest.
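
Kirsch's "extra-strength placebo" conjecture can be made concrete with a toy simulation. Suppose the drug has no true effect at all, but side effects let most drug-arm patients correctly guess their assignment, and believing one is on the drug adds a small expectancy boost to reported improvement. Every number below is invented for illustration:

```python
# Toy simulation of the "extra-strength placebo" conjecture: the drug has
# zero true effect, but side effects break the blind in the drug arm, and
# believing "I got the drug" adds an expectancy boost to reported improvement.
# Every parameter is invented for illustration.

import random

random.seed(0)

def simulate_arm(n, p_guess_drug, expectancy_boost=1.5):
    """Mean reported improvement (in scale points) for one trial arm."""
    total = 0.0
    for _ in range(n):
        improvement = random.gauss(8.0, 2.0)   # nonspecific/placebo response
        if random.random() < p_guess_drug:     # patient believes they got the drug
            improvement += expectancy_boost
        total += improvement
    return total / n

drug_arm = simulate_arm(10_000, p_guess_drug=0.80)     # side effects break the blind
placebo_arm = simulate_arm(10_000, p_guess_drug=0.30)  # inert pill, fewer guess "drug"

print(f"drug arm mean improvement:    {drug_arm:.2f}")
print(f"placebo arm mean improvement: {placebo_arm:.2f}")
print(f"spurious 'drug effect':       {drug_arm - placebo_arm:.2f} points")
```

The simulated trial produces a reliable drug-placebo gap even though the "drug" does nothing at all, which is exactly the artifact Kirsch is describing.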

Whitaker’s book is broader and more polemical. He considers all mental illness, not just depression. Whereas Kirsch concludes that antidepressants are probably no more effective than placebos, Whitaker concludes that they and most of the other psychoactive drugs are not only ineffective but harmful. He begins by observing that even as drug treatment for mental illness has skyrocketed, so has the prevalence of the conditions treated:

The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate. Thus we arrive at an obvious question, even though it is heretical in kind: Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?

Moreover, Whitaker contends, the natural history of mental illness has changed. Whereas conditions such as schizophrenia and depression were once mainly self-limited or episodic, with each episode usually lasting no more than six months and interspersed with long periods of normalcy, the conditions are now chronic and lifelong. Whitaker believes that this might be because drugs, even those that relieve symptoms in the short term, cause long-term mental harms that continue after the underlying illness would have naturally resolved.

The evidence he marshals for this theory varies in quality. He doesn’t sufficiently acknowledge the difficulty of studying the natural history of any illness over a fifty-some-year time span during which many circumstances have changed, in addition to drug use. It is even more difficult to compare long-term outcomes in treated versus untreated patients, since treatment may be more likely in those with more severe disease at the outset. Nevertheless, Whitaker’s evidence is suggestive, if not conclusive.

If psychoactive drugs do cause harm, as Whitaker contends, what is the mechanism? The answer, he believes, lies in their effects on neurotransmitters. It is well understood that psychoactive drugs disturb neurotransmitter function, even if that was not the cause of the illness in the first place. Whitaker describes a chain of effects. When, for example, an SSRI antidepressant like Celexa increases serotonin levels in synapses, it stimulates compensatory changes through a process called negative feedback. In response to the high levels of serotonin, the neurons that secrete it (presynaptic neurons) release less of it, and the postsynaptic neurons become desensitized to it. In effect, the brain is trying to nullify the drug’s effects. The same is true for drugs that block neurotransmitters, except in reverse. For example, most antipsychotic drugs block dopamine, but the presynaptic neurons compensate by releasing more of it, and the postsynaptic neurons take it up more avidly. (This explanation is necessarily oversimplified, since many psychoactive drugs affect more than one of the many neurotransmitters.)
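
The compensatory process Whitaker describes is, in engineering terms, a negative-feedback loop, and a minimal sketch makes the dynamics vivid. In the toy model below, where all rates and units are arbitrary and real neurons are vastly more complicated, the drug pushes synaptic signaling above a set point, and the system gradually winds down its own release and receptor sensitivity until signaling returns toward normal:

```python
# Minimal negative-feedback sketch: the drug boosts synaptic signaling above
# the brain's set point, and the system downregulates its own release and
# receptor sensitivity to cancel the error. All values are arbitrary.

def adapt(steps, drug_boost, release=1.0, sensitivity=1.0,
          set_point=1.0, adaptation_rate=0.05):
    for _ in range(steps):
        signal = release * sensitivity * (1.0 + drug_boost)
        error = signal - set_point
        release -= adaptation_rate * error * release          # release less
        sensitivity -= adaptation_rate * error * sensitivity  # desensitize receptors
    return release, sensitivity

release, sensitivity = adapt(steps=500, drug_boost=1.0)  # SSRI-like doubling
print(f"adapted release:     {release:.2f}")             # below the original 1.0
print(f"adapted sensitivity: {sensitivity:.2f}")         # likewise reduced
print(f"signal on drug:      {release * sensitivity * 2.0:.2f}")  # back near set point
```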

With long-term use of psychoactive drugs, the result is, in the words of Steve Hyman, a former director of the NIMH and until recently provost of Harvard University, “substantial and long-lasting alterations in neural function.” As quoted by Whitaker, the brain, Hyman wrote, begins to function in a manner “qualitatively as well as quantitatively different from the normal state.” After several weeks on psychoactive drugs, the brain’s compensatory efforts begin to fail, and side effects emerge that reflect the mechanism of action of the drugs. For example, the SSRIs may cause episodes of mania, because of the excess of serotonin. Antipsychotics cause side effects that resemble Parkinson’s disease, because of the depletion of dopamine (which is also depleted in Parkinson’s disease). As side effects emerge, they are often treated by other drugs, and many patients end up on a cocktail of psychoactive drugs prescribed for a cocktail of diagnoses. The episodes of mania caused by antidepressants may lead to a new diagnosis of “bipolar disorder” and treatment with a “mood stabilizer,” such as Depakote (an anticonvulsant) plus one of the newer antipsychotic drugs. And so on.

Some patients take as many as six psychoactive drugs daily. One well-respected researcher, Nancy Andreasen, and her colleagues published evidence that the use of antipsychotic drugs is associated with shrinkage of the brain, and that the effect is directly related to the dose and duration of treatment. As Andreasen explained to The New York Times, “The prefrontal cortex doesn’t get the input it needs and is being shut down by drugs. That reduces the psychotic symptoms. It also causes the prefrontal cortex to slowly atrophy.”*

Getting off the drugs is exceedingly difficult, according to Whitaker, because when they are withdrawn the compensatory mechanisms are left unopposed. When Celexa is withdrawn, serotonin levels fall precipitously because the presynaptic neurons are not releasing normal amounts and the postsynaptic neurons no longer have enough receptors for it. Similarly, when an antipsychotic is withdrawn, dopamine levels may skyrocket. The symptoms produced by withdrawing psychoactive drugs are often confused with relapses of the original disorder, which can lead psychiatrists to resume drug treatment, perhaps at higher doses.
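
The same toy feedback model, repeated here in self-contained form, also illustrates the withdrawal problem: once the compensations are in place, removing the drug leaves them unopposed, and signaling falls well below normal until they slowly unwind. Again, every value is an arbitrary illustration, not a model of any real drug:

```python
# Self-contained copy of the toy feedback model, extended to show withdrawal:
# after adapting to the drug, removing it leaves the compensations unopposed
# and signaling falls well below normal. All values are arbitrary.

def adapt(steps, drug_boost, release=1.0, sensitivity=1.0,
          set_point=1.0, adaptation_rate=0.05):
    for _ in range(steps):
        error = release * sensitivity * (1.0 + drug_boost) - set_point
        release -= adaptation_rate * error * release
        sensitivity -= adaptation_rate * error * sensitivity
    return release, sensitivity

# Phase 1: adapt to the drug.
release, sensitivity = adapt(steps=500, drug_boost=1.0)
print(f"on drug, signal = {release * sensitivity * 2.0:.2f}")         # near set point

# Phase 2: abrupt withdrawal -- the boost vanishes, the compensations remain.
print(f"just after withdrawal, signal = {release * sensitivity:.2f}")  # ~half normal

# Phase 3: slow recovery as the feedback unwinds in the opposite direction.
release, sensitivity = adapt(steps=500, drug_boost=0.0,
                             release=release, sensitivity=sensitivity)
print(f"after recovery, signal = {release * sensitivity:.2f}")         # back toward 1.0
```

On this picture, the trough in phase 2 is easily mistaken for a relapse, even though it is a consequence of the adaptation itself.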

Unlike the cool Kirsch, Whitaker is outraged by what he sees as an iatrogenic (i.e., inadvertent and medically introduced) epidemic of brain dysfunction, particularly that caused by the widespread use of the newer (“atypical”) antipsychotics, such as Zyprexa, which cause serious side effects. Here is what he calls his “quick thought experiment”:

Imagine that a virus suddenly appears in our society that makes people sleep twelve, fourteen hours a day. Those infected with it move about somewhat slowly and seem emotionally disengaged. Many gain huge amounts of weight—twenty, forty, sixty, and even one hundred pounds. Often their blood sugar levels soar, and so do their cholesterol levels. A number of those struck by the mysterious illness—including young children and teenagers—become diabetic in fairly short order…. The federal government gives hundreds of millions of dollars to scientists at the best universities to decipher the inner workings of this virus, and they report that the reason it causes such global dysfunction is that it blocks a multitude of neurotransmitter receptors in the brain—dopaminergic, serotonergic, muscarinic, adrenergic, and histaminergic. All of those neuronal pathways in the brain are compromised. Meanwhile, MRI studies find that over a period of several years, the virus shrinks the cerebral cortex, and this shrinkage is tied to cognitive decline. A terrified public clamors for a cure.

Now such an illness has in fact hit millions of American children and adults. We have just described the effects of Eli Lilly’s best-selling antipsychotic, Zyprexa.

If psychoactive drugs are useless, as Kirsch believes about antidepressants, or worse than useless, as Whitaker believes, why are they so widely prescribed by psychiatrists and regarded by the public and the profession as something akin to wonder drugs? Why is the current against which Kirsch and Whitaker and, as we will see, Carlat are swimming so powerful? I discuss these questions in Part II of this review.

—This is the first part of a two-part article.

* See Claudia Dreifus, “Using Imaging to Look at Changes in the Brain,” The New York Times, September 15, 2008.

 
