Highlight: a piece of work by Robert Whitaker

Sorry to those of you who enjoy my original material, but I haven’t been up to writing lately. So I’m doing my best to keep the dialogue alive by posting interesting tidbits from around the web. Today’s piece is by Robert Whitaker, author of Mad in America, a very compelling history of psychiatry. I’ve posted an interesting excerpt from the book here. Apparently a lot of “mad” folk don’t stay “mad” in countries where neuroleptics aren’t used.


Below is additional material by Robert Whitaker:

Anatomy of an Epidemic: Psychiatric Drugs and the Astonishing Rise of Mental Illness in America

Abstract

Over the past 50 years, there has been an astonishing increase in severe mental illness in the United States. The percentage of Americans disabled by mental illness has increased five-fold since 1955, when Thorazine—remembered today as psychiatry’s first “wonder” drug—was introduced into the market. The number of Americans disabled by mental illness has nearly doubled since 1987, when Prozac—the first in a second generation of wonder drugs for mental illness—was introduced. There are now nearly 6 million Americans disabled by mental illness, and this number increases by more than 400 people each day. A review of the scientific literature reveals that it is our drug-based paradigm of care that is fueling this epidemic. The drugs increase the likelihood that a person will become chronically ill, and induce new and more severe psychiatric symptoms in a significant percentage of patients.

The modern era of psychiatry is typically said to date back to 1955, when chlorpromazine, marketed as Thorazine, was introduced into asylum medicine. In 1955, the number of patients in public mental hospitals reached a high-water mark of 558,922 and then began to gradually decline, and historians typically credit this emptying of the state hospitals to chlorpromazine. As Edward Shorter wrote in his 1997 book, A History of Psychiatry, “Chlorpromazine initiated a revolution in psychiatry, comparable to the introduction of penicillin in general medicine” (Shorter, 1997, p. 255). Haldol and other antipsychotic medications were soon brought to market, and then antidepressants and antianxiety drugs. Psychiatry now had drugs said to target specific illnesses, much like insulin for diabetes.

However, since 1955, when this modern era of psychopharmacology was born, there has been an astonishing rise in the incidence of severe mental illness in this country. Although the number of hospitalized mentally ill may have gone down, every other metric used to measure disabling mental illness in the United States has risen dramatically, so much so that E. Fuller Torrey, in his 2001 book The Invisible Plague, concluded that insanity had risen to the level of an “epidemic” (Torrey, 2001). Since this epidemic has unfolded in lockstep with the ever-increasing use of psychiatric drugs, an obvious question arises: Is our drug-based paradigm of care fueling this modern-day plague?

The Epidemic

The U.S. Department of Health and Human Services uses “patient care episodes” to estimate the number of people treated each year for mental illness. This metric tracks the number of people treated at psychiatric hospitals, residential facilities for the mentally ill, and ambulatory care facilities. In 1955, the government reported 1,675,352 patient care episodes, or 1,028 episodes per 100,000 population. In 2000, patient care episodes totaled 10,741,243, or 3,806 per 100,000 population. That is nearly a four-fold per-capita increase in 45 years. (Table 1).
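
Those per-capita rates are simple arithmetic. A minimal sketch in Python, assuming U.S. population figures of roughly 163 million in 1955 and 282 million in 2000; these are values inferred from the reported rates, not stated in the article:

```python
# Per-capita rate arithmetic behind Table 1. The population figures
# are assumptions inferred from the rates the article reports; they
# are not stated in the text.

def rate_per_100k(episodes: int, population: int) -> float:
    """Patient care episodes per 100,000 population."""
    return episodes / population * 100_000

print(round(rate_per_100k(1_675_352, 163_000_000)))   # ~1,028 in 1955
print(round(rate_per_100k(10_741_243, 282_200_000)))  # ~3,806 in 2000
```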

A second way to assess this epidemic is to look at the number of disabled mentally ill in the country. Up until the 1950s, the number of hospitalized mentally ill provided a rough estimate of this group. Today, the disabled mentally ill typically receive a disability payment from either the Social Security Disability Insurance (SSDI) program or the Supplemental Security Income (SSI) program, and many live in residential shelters or other subsidized living arrangements. Thus, the patient who would have been hospitalized 50 years ago typically receives either SSDI or SSI today, and this line of evidence reveals that the number of disabled mentally ill has increased nearly six-fold since Thorazine was introduced.

In 1955, there were 559,000 people in public mental hospitals, or 3.38 people per 1,000 population. In 2003, there were 5.726 million people who received an SSI or SSDI payment (or payments from both programs) and were either disabled by mental illness (SSDI statistics) or diagnosed as mentally ill (SSI statistics).[i] That is a disability rate of 19.69 people per 1,000 population, which is nearly six times what it was in 1955. (Table 2).

It is also noteworthy that the number of disabled mentally ill has increased dramatically since 1987, the year Prozac was introduced. Prozac was touted as the first of a second generation of psychiatric medications said to be so much better than the old ones. Prozac and the other SSRIs replaced the tricyclics, while the atypical antipsychotics (Risperdal, Zyprexa, etc.) replaced Thorazine and the other standard neuroleptics. The combined sales of antidepressants and antipsychotics jumped from around $500 million in 1986 to nearly $20 billion in 2004 (from September 2003 to August 2004), a 40-fold increase.[ii] During this period, the number of disabled mentally ill in the United States, as calculated by the SSI and SSDI figures, increased from 3.331 million people to 5.726 million.[iii] That is an increase of 149,739 people per year, or 410 people newly disabled by mental illness every day. (Table 3).
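
The per-year and per-day figures follow from simple division over the 1987 to 2003 span. A minimal sketch in Python (the rounded totals given in the text reproduce the article’s 149,739 figure only approximately):

```python
# Arithmetic behind the "410 people newly disabled every day" figure.
# Inputs are the rounded SSI/SSDI totals given in the text; the exact
# counts behind the 149,739-per-year figure are not stated.
disabled_1987 = 3_331_000
disabled_2003 = 5_726_000
years = 2003 - 1987  # 16

per_year = (disabled_2003 - disabled_1987) / years
per_day = per_year / 365

print(round(per_year))  # ~149,700 newly disabled per year
print(round(per_day))   # ~410 newly disabled per day
```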

A Biological Cause for the Epidemic

The notion that psychiatric drugs work by balancing brain chemistry was first raised in the early 1960s. Once Thorazine and the standard neuroleptics were shown to block dopamine activity in the brain, researchers hypothesized that schizophrenia was caused by too much of this neurotransmitter. Thus, the neuroleptics—by blocking the dopamine receptors—helped normalize the brain’s dopamine system. Since the tricyclics raised norepinephrine and serotonin levels in the brain, researchers reasoned that depression was caused by low levels of these brain chemicals. Merck, meanwhile, marketed its antianxiety drug Suavitil as a “mood normalizer.” These normalizing claims suggested that the drugs were indeed curative of biological ailments.

However, this hypothesis—that the drugs balanced abnormal brain chemistry—never panned out. Although the public may still be told that the drugs normalize brain chemistry, the truth is that researchers did not find that people with schizophrenia had overactive dopamine systems (prior to being medicated), or that those diagnosed with depression suffered from abnormally low levels of serotonin or norepinephrine. As U.S. Surgeon General David Satcher acknowledged in his 1999 report on mental health, the causes of mental disorders “remain unknown” (Satcher, 1999, p. 102).

Yet, scientists have come to understand how the drugs affect the human brain, at least in terms of their immediate mechanisms of action. In 1996, the director of the National Institute of Mental Health, neuroscientist Steven Hyman, set forth a paradigm for understanding how all psychiatric drugs work. Antipsychotics, antidepressants, and antianxiety drugs, he wrote, “create perturbations in neurotransmitter functions” (Hyman and Nestler, 1996, p. 153). In response, the brain goes through a series of compensatory adaptations. For instance, Prozac and other SSRI antidepressants block the reuptake of serotonin. In order to cope with this hindrance of normal function, the brain tones down its whole serotonergic system. Neurons both release less serotonin and down-regulate (or decrease) their number of serotonin receptors. The density of serotonin receptors in the brain may decrease by fifty percent or more. As part of this adaptation process, Hyman noted, there are also changes in intracellular signaling pathways and gene expression. After a few weeks, Hyman concluded, the patient’s brain is functioning in a manner that is “qualitatively as well as quantitatively different from the normal state.”

In short, psychiatric drugs induce a pathology. Princeton neuroscientist Barry Jacobs has explicitly made this point about SSRIs. These drugs, he said, “alter the level of synaptic transmission beyond the physiologic range achieved under (normal) environmental/biological conditions. Thus, any behavioral or physiologic change produced under these conditions might more appropriately be considered pathologic, rather than reflective of the normal biological role of serotonin.” (Jacobs, 1991). Once psychiatric drugs are viewed in this way, it is easy to understand why their widespread use would precipitate an epidemic of mental illness. As E. Fuller Torrey wrote in The Invisible Plague, conditions that “disrupt brain chemistry may cause delusions, hallucinations, disordered thinking, and mood swings—the symptoms of insanity” (Torrey, 2001, p. 315). He noted that infectious agents, tumors, metabolic and toxic disorders, and various diseases could all affect the brain in this manner. What Torrey failed to mention is that psychiatric medications also “disrupt brain chemistry.” As a result, their long-term use is bound to be problematic, and that is precisely what the research literature reveals: Their use increases the likelihood that a person will become chronically ill, and they cause a significant percentage of patients to become ill in new and more severe ways.

Turning Patients Chronically Ill
Neuroleptics

The study that is still cited today as proving the efficacy of neuroleptics for curbing acute episodes of schizophrenia was a nine-hospital trial of 344 patients conducted by the National Institute of Mental Health in the early 1960s. At the end of six weeks, 75% of the drug-treated patients were “much improved” or “very much improved,” compared to 23% of the placebo patients (Cole, Klerman et al., 1964).

However, three years later, the NIMH reported on one-year outcomes for the patients. Much to their surprise, they found that “patients who received placebo treatment were less likely to be rehospitalized than those who received any of the three active phenothiazines” (Schooler, Goldberg et al., 1967, p. 991). This result raised an unsettling possibility: While the drugs were effective over the short term, perhaps they made people more biologically vulnerable to psychosis over the long run, which would explain the higher rehospitalization rates at the end of one year.

In the wake of that disturbing report, the NIMH conducted two medication-withdrawal studies. In each one, relapse rates rose in correlation with neuroleptic dosage before withdrawal. In the two trials, only 7% of patients who were on placebo relapsed during the following six months. Twenty-three percent of the patients on less than 300 mg of chlorpromazine daily relapsed following drug withdrawal; this rate climbed to 54% for those receiving 300-500 mg and to 65% for patients taking more than 500 mg. The researchers concluded: “Relapse was found to be significantly related to the dose of the tranquilizing medication the patient was receiving before he was put on placebo—the higher the dose, the greater the probability of relapse” (Prien, Levine et al., 1971, p. 22).

Once again, the results suggested that neuroleptics increased the patients’ biological vulnerability to psychosis. Other reports soon deepened this suspicion. Even when patients reliably took their medications, relapse was common, and researchers reported in 1976 that it appeared that relapse during drug administration was greater in severity than when no drugs were given (Gardos and Cole, 1977). A retrospective study by Bockoven also indicated that the drugs were making patients chronically ill. He reported that 45% of patients treated at Boston Psychopathic Hospital in 1947 with a progressive model of care did not relapse in the five years following discharge, and that 76% were successfully living in the community at the end of that follow-up period. In contrast, only 31% of patients treated in 1967 with neuroleptics at a community health center remained relapse-free over the next five years, and as a group they were much more “socially dependent”—on welfare and needing other forms of support—than those in the 1947 cohort (Bockoven and Solomon, 1975).

With debate over the merits of neuroleptics rising, the NIMH revisited the question of whether newly admitted schizophrenia patients could be successfully treated without drugs. There were three NIMH-funded studies conducted during the 1970s that examined this possibility, and in each instance, the newly admitted patients treated without drugs did better than those treated in a conventional manner. In 1977, Carpenter reported that only 35% of the non-medicated patients in his study relapsed within a year after discharge, compared to 45% of those treated with neuroleptics (Carpenter, McGlashan et al., 1977). A year later, Rappaport reported that in a trial of 80 young male schizophrenics admitted to a state hospital, only 27% of patients treated without neuroleptics relapsed in the three years following discharge, compared to 62% of the medicated group (Rappaport, Hopkins et al., 1978). The final study came from Mosher, head of schizophrenia research at the NIMH. In 1979, he reported that patients who were treated without neuroleptics in an experimental home staffed by nonprofessionals had lower relapse rates over a two-year period than a control group treated with drugs in a hospital. As in the other studies, Mosher reported that the patients treated without drugs were the better functioning group as well (Bola and Mosher, 2003; Mathews, Roper et al., 2003).

The three studies all pointed to the same conclusion: Exposure to neuroleptics increased the long-term incidence of relapse. Carpenter’s group defined the conundrum:

There is no question that, once patients are placed on medication, they are less vulnerable to relapse if maintained on neuroleptics. But what if these patients had never been treated with drugs to begin with? We raise the possibility that antipsychotic medication may make some schizophrenic patients more vulnerable to future relapse than would be the case in the natural course of the illness (Carpenter and McGlashan, 1977, p. 19).

In the late 1970s, two physicians at McGill University in Montreal offered a biological explanation for why this was so (one that fits with the paradigm later outlined by Hyman). The brain responds to neuroleptics—which block 70-90% of all D2 dopamine receptors in the brain—as though they are a pathological insult. To compensate, dopaminergic brain cells increase the density of their D2 receptors by 30% or more. The brain is now “supersensitive” to dopamine, and this neurotransmitter is thought to be a mediator of psychosis. The person has become more biologically vulnerable to psychosis and is at particularly high risk of severe relapse should he or she abruptly quit taking the drugs (Chouinard, Jones et al., 1978; Chouinard and Jones, 1980). The two Canadian researchers concluded:

Neuroleptics can produce a dopamine supersensitivity that leads to both dyskinetic and psychotic symptoms. An implication is that the tendency toward psychotic relapse in a patient who had developed such a supersensitivity is determined by more than just the normal course of the illness. (Chouinard, Jones et al., 1978, p. 1410)

Together, the various studies painted a compelling picture of how neuroleptics shifted outcomes away from recovery. Bockoven’s retrospective and the other experiments all suggested that with minimal or no exposure to neuroleptics, at least 40% of people who suffered a psychotic break and were diagnosed with schizophrenia would not relapse after leaving the hospital, and perhaps as many as 65% would function fairly well over the long term. However, once first-episode patients were treated with neuroleptics, a different fate awaited them. Their brains would undergo drug-induced changes that would increase their biological vulnerability to psychosis, and this would increase the likelihood that they would become chronically ill (and thus permanently disabled).

That understanding of neuroleptics had been fleshed out by the early 1980s, and since then, other studies have provided additional confirming evidence. Most notably, the World Health Organization twice compared schizophrenia outcomes in the rich countries of the world with outcomes in poor countries, and each time the patients in the poor countries—where drug usage was much less—were doing dramatically better at two-year and five-year follow-ups. In India, Nigeria and Colombia, where only 16% of patients were maintained continuously on neuroleptics, roughly two-thirds were doing fairly well at the end of the follow-up period and only one-third had become chronically ill. In the U.S. and other rich countries, where 61% of the patients were kept on antipsychotic drugs, the ratio of good-to-bad outcomes was almost precisely the reverse. Only about one-third had good outcomes, and the remaining two-thirds became chronically ill (Jablensky, Sartorius et al., 1992; Leff, Sartorius et al., 1992).

More recently, MRI studies have shown the same link between drug usage and chronic illness. In the mid-1990s, several research teams reported that the drugs cause atrophy of the cerebral cortex and an enlargement of the basal ganglia (Chakos, Lieberman et al., 1994; Gur, Cowell et al., 1998; Madsen, Keiding et al., 1998). These were disquieting findings, as they clearly showed that the drugs were causing structural changes in the brain. Then, in 1998, researchers at the University of Pennsylvania reported that the drug-induced enlargement of the basal ganglia was “associated with greater severity of both negative and positive symptoms” (Gur, Maany et al., 1998, p. 1711). In other words, they found that over the long term the drugs cause changes in the brain associated with a worsening of the very symptoms the drugs are supposed to alleviate. The MRI research, in fact, had painted a very convincing picture of a disease process: An outside agent causes an observable change in the size of brain structures, and as this occurs, the patient deteriorates.

Antidepressants

The story of antidepressants is a bit subtler, and yet it leads to the same conclusion that these drugs increase chronic illness over time. Even their short-term efficacy, in terms of a benefit greater than placebo, is of a questionable sort.

In the early 1960s, there were two types of antidepressants, monoamine oxidase inhibitors (MAOIs) and tricyclics. However, MAOIs soon fell out of favor because of dangerous side effects and a 1965 finding by the Medical Research Council in the United Kingdom that they were no more effective than placebo (Medical Research Council, 1965). Four years later, the NIMH concluded that there was also reason to doubt the merits of tricyclics. After reviewing the medical literature, NIMH investigators determined that in “well-designed studies, the differences between the effectiveness of antidepressant drugs and placebo are not impressive.” About 61% of the drug-treated patients improved, versus 46% of the placebo patients, producing a net drug benefit of only 15 percentage points (Smith, 1969).

This finding led some investigators to wonder whether the placebo response was the mechanism that was helping people feel better. What the drugs did, several speculated, was amplify the placebo response, and they did so because they produced physical side effects, which helped convince patients that they were getting a “magic pill” for depression. To test this hypothesis, investigators conducted at least eight studies in which they compared a tricyclic to an “active” placebo, rather than an inert one. (An active placebo is a chemical that produces an unpleasant side effect of some kind, like dry mouth.) In seven of the eight, there was no difference in outcomes, leading investigators at New York Medical College to conclude “there is practical value in viewing (psychotropics) as mere amplifiers or inhibitors of the placebo effects” (Thompson, 1982).

With such confusion over the efficacy of tricyclics hanging in the air, the NIMH launched an ambitious long-term study of depression treatments in the early 1980s. Two hundred thirty-nine patients were randomized into four treatment groups—cognitive behavior therapy, interpersonal therapy, the tricyclic imipramine, and placebo. The results were startling. At the end of 16 weeks, “there were no significant differences among treatments, including placebo plus clinical management, for the less severely depressed and functionally impaired patients.” Only the severely depressed patients fared better on a tricyclic than on placebo. However, at the end of 18 months, even this minimal benefit disappeared. Stay-well rates were best for the cognitive behavior group (30%) and poorest for the imipramine group (19%) (Elkin, 1990). Moreover, two pharmacology researchers at the State University of New York, Seymour Fisher and Roger Greenberg, concluded that if study dropouts were included in the analysis, then the “results look even worse.” Patients treated with an antidepressant were the most likely group to seek treatment following termination of the initial treatment period, they had the highest incidence of relapse, and they “exhibited the fewest weeks of reduced or minimal symptoms during the follow-up period” (Greenberg and Fisher, 1997, pp. 147).

Once again, the results led to an unnerving conclusion. Antidepressants were making people chronically ill, just like the antipsychotics were. Other studies deepened this suspicion. In 1985, a U.K. group reported that in a two-year study comparing drug therapy to cognitive therapy, relapse “was significantly higher in the pharmacotherapy group” (Blackburn, 1986). In 1994, Italian researcher Giovanni Fava reviewed the outcomes literature and concluded that “long-term use of antidepressants may increase the (patient’s) biochemical vulnerability to depression,” and thus “worsen the course of affective disorders” (Fava, 1994). Fava revisited the issue in 2003. An analysis of 27 studies, he wrote, showed that “whether one treats a depressed patient for 3 months or 3 years, it does not matter when one stops the drugs. A statistical trend suggested that the longer the drug treatment, the higher the likelihood of relapse” (Fava, 2003, p. 124).

Benzodiazepines

This same basic paradox—that a psychiatric drug may curb symptoms over the short term but worsen the long-term course of the disorder—has been found to hold true for benzodiazepines, at least when used to treat panic attacks. In 1988, researchers who led the large Cross-National Collaborative Panic Study, which involved 1,700 patients in 14 countries, reported that at the end of four weeks, 82% of the patients treated with Xanax (alprazolam) were “moderately improved” or “better,” versus 42% of the placebo patients. However, by the end of eight weeks, there was no difference between the groups, at least among those who remained in the study (Ballenger, 1988). Any benefit with Xanax seemed to last for only a short period. As a follow-up to that study, researchers in Canada and the U.K. studied benzodiazepine-treated patients over a period of six months. They reported that the Xanax patients got better during the first four weeks of treatment, that they did not improve any more in weeks four to eight, and that their symptoms began to worsen after that. As patients were weaned from the drugs, a high percentage relapsed, and by the end of 23 weeks, they were worse off than patients treated without drugs on five different outcome measures (Marks, 1993). More bad news of this sort was reported by Pecknold in 1988. He found that as patients were tapered off Xanax they suffered nearly four times as many panic attacks as the non-drug patients, and that 25% of the Xanax patients suffered from rebound anxiety more severe than when they began the study. The Xanax patients were also significantly worse off than non-drug patients on a global assessment scale by the end of the study (Pecknold, 1988).

Then and Now

Research by David Healy, a prominent U.K. psychiatrist who has written several books on the history of psychopharmacology, shows how this problem of drug-induced chronicity plays out in society as a whole. Healy determined that outcomes for psychiatric patients in North Wales were much better a century ago than they are today, even though patients back then, at their moment of initial treatment, were much sicker. He concluded that today’s drug-treated patients spend much more time in hospital beds and are “far more likely to die from their mental illness than they were in 1896.” “Modern treatments,” he said, “have set up a revolving door” and appear to be a “leading cause of injury and death” (Healy, Harris et al., unpublished paper; see also Healy and Harris, 2001).

Manufacturing Mental Illness

It is well known that all of the major classes of psychiatric drugs—antipsychotics, antidepressants, benzodiazepines, and stimulants for ADHD—can trigger new and more severe psychiatric symptoms in a significant percentage of patients. This is the second factor causing a rapid rise in the number of disabled mentally ill in the United States. Moreover, it is easy to see this epidemic-creating factor at work with Prozac and the other SSRIs.

Although serotonin has been publicly touted as the brain’s mood molecule, in truth it is a very common chemical in the body, found in the walls of blood vessels, the gut, blood platelets, and the brain. The serotonin system is also, in evolutionary terms, a primitive one. Serotonergic neurons are found in the nervous systems of all vertebrates and most invertebrates, and in humans their cell bodies are localized along the midline of the brain stem. From there, their axons spread up into the brain and down into the spinal cord. The primary purpose of this neuronal network is thought to be control of respiratory, cardiac, and repetitive motor activity, as opposed to higher cognitive functions.

As one would expect, perturbing this system—and to a degree that could be considered pathologic, as Jacobs said—causes a wide range of problems. In Prozac’s first two years on the market, the FDA’s MedWatch program received more adverse-event reports about this new “wonder drug” than it had received for the leading tricyclic in the previous 20 years. Prozac quickly took up the top position as America’s most-complained-about drug, and by 1997, 39,000 adverse-event reports about it had been sent to MedWatch. These reports are thought to represent only one percent of the actual number of such events, suggesting that nearly four million people in the U.S. had suffered such problems, which included mania, psychotic depression, nervousness, anxiety, agitation, hostility, hallucinations, memory loss, tremors, impotence, convulsions, insomnia, and nausea. The other SSRIs brought to market caused a similar range of problems, and by 1994, four SSRIs were among the top 20 most-complained-about drugs on the FDA’s MedWatch list (Moore, 1997).
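
The “nearly four million” figure is a simple extrapolation. A minimal sketch in Python, assuming the one-percent reporting rate the text cites for MedWatch:

```python
# Extrapolation behind the "nearly four million" estimate. The 1%
# capture rate is the assumption the text cites for MedWatch
# adverse-event reporting; it is not a published FDA statistic here.
reports_by_1997 = 39_000
capture_rate = 0.01  # assumed fraction of actual events that get reported

print(round(reports_by_1997 / capture_rate))  # 3,900,000: "nearly four million"
```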

In terms of helping fuel a rapid rise in the number of disabled mentally ill, the propensity of Prozac and other SSRIs to trigger mania or psychosis is undoubtedly the biggest problem with these drugs. In clinical trials, slightly more than one percent of the Prozac patients developed mania, three times the rate for patients given a tricyclic (Breggin, 2003). Other studies have found much higher rates of SSRI-induced mania. In 1996, Howland reported that 6% of 184 depressed patients treated with an SSRI suffered manic episodes that were “generally quite severe.” A year later, Ebert reported that 8.5% of patients had a severe psychological reaction to Luvox (fluvoxamine) (Breggin, 2003). Robert Bourguignon, after surveying doctors in Belgium, estimated that Prozac induced psychotic episodes in 5% to 7% of patients (Bourguignon, 1997). All of this led the American Psychiatric Association to warn that manic or hypomanic episodes are “estimated to occur in 5% to 20% of patients treated with antidepressants” (Breggin, 2003).

As Italy’s Giovanni Fava has noted, “Antidepressant-induced mania is not simply a temporary and reversible phenomenon, but a complex biochemical mechanism of illness deterioration” (Fava, 2003, p. 126). The best available evidence suggests that this is now happening to well more than 500,000 Americans a year. In 2001, Preda and other Yale researchers reported that 8.1 percent of all admissions to a psychiatric hospital they studied were due to SSRI-induced mania or psychosis (Preda, MacLean et al., 2001). The federal government reported that there were 10.741 million “patient care episodes” in 2000; if 8 percent of those were SSRI-induced manic or psychotic episodes, that would mean that 860,000 people suffered this type of adverse reaction in 2000.
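
That estimate is just the product of the two figures cited. A minimal sketch of the arithmetic, using the rounded 8 percent rate from the text:

```python
# Estimate behind the "860,000 people" figure: the Yale admission
# fraction (8.1%, rounded to 8% in the article) applied to the
# government's patient-care-episode count for 2000.
episodes_2000 = 10_741_000
ssri_induced_fraction = 0.08  # rounded from the 8.1% reported by Preda et al.

print(round(episodes_2000 * ssri_induced_fraction))  # 859,280, roughly 860,000
```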

Thus, the SSRI path to a disabling mental illness can be easily seen. A depressed patient treated with an antidepressant suffers a manic or psychotic episode, at which time his or her diagnosis is changed to bipolar disorder. At that point, the person is prescribed an antipsychotic to go along with the antidepressant, and once on a drug cocktail, the person is well along on the road to permanent disability. Since Prozac was introduced in 1987, the number of disabled mentally ill in the U.S. has risen by 2.4 million people, and given the risk of mania and psychosis with the SSRIs, that increase was to be expected.

Conclusion

A century ago, fewer than two people per 1,000 were considered to be “disabled” by mental illness and in need of hospitalization. By 1955, that number had jumped to 3.38 people per 1,000, and during the past 50 years, a period when psychiatric drugs have been the cornerstone of care, the disability rate has climbed steadily, and has now reached around 20 people per 1,000. (Table 2). As with any epidemic, one would suspect that an outside agent of some type—a virus, a bacterial infection, or an environmental toxin—was causing this rise in illness. That is indeed the case here. There is an outside agent fueling this epidemic of mental illness, only it is to be found in the medicine cabinet. Psychiatric drugs perturb normal neurotransmitter function, and while that perturbation may curb symptoms over a short term, over the long run it increases the likelihood that a person will become chronically ill, or ill with new and more severe symptoms. A review of the scientific literature shows quite clearly that it is our drug-based paradigm of care that is fueling this modern-day plague.

(For references, see the link to the article.)

6 thoughts on “Highlight: a piece of work by Robert Whitaker”

  1. I love this article. I was one of the many subjected to the “wonder drug” Thorazine. Robert used some good logic in deriving the number of people disabled by mental illness today, as compared to the 1950s. His numbers really prove there is more mental illness today than years ago, despite all the medications that are supposed to work and all the people who believe that we understand how the pills work. In my experience, if you suggest to people that antidepressants are not too good they get defensive–like you are attacking their religion or their family.

    I was surprised that there was so much evidence from so long ago that antidepressants did not work very well. I remember when the study comparing different therapies came out. If the cognitive therapy produced the best long-term results, why did we not push cognitive therapy on everyone? Of course, if we did, the drug companies would not have made much money.

    In many studies antidepressants cause a quick response. However, it is an effect that does not last, and then creates various problems. Alcohol took a similar path with me. I was a bit shy, lacked confidence, and worried a lot. When I started drinking as a college freshman, I gained all sorts of confidence. My shyness disappeared. And, it all happened quickly–instantly. I became more sociable. I attended more dances and parties. I functioned quite well for a time; then I suffered from side effects: pot belly, blackouts, falling down, fights, destruction of property, loss of control of body functions. Maybe there is no chemical cure to life’s problems. Maybe we have to accept a certain amount of sadness and try to carry on as best as we can.

    Thank you so much for having this article out there where we can study it.
    Jim S

  2. Robert Whitaker tells it like it is – a great writer.

    Thanks for posting – a must-read for all family members who encourage their loved ones to stay on meds…

    Should be required reading for all psychiatric nurses and doctors – who only think they know what’s best.

    Antonin Zanetti

  3. John,
    this is not Robert’s blog. I highlighted his piece because I think it is an important piece of work.

    As far as your wife goes, there are alternatives to medication. There are ideas on this site, though I certainly can’t know what is right for your wife.

    On the “About” page of this blog, however, are links to resources for natural care. You may find something useful there.

    The “About” page is in the upper left-hand corner of the blog.
    My best to you and your wife.

  4. Hi, Robert

    I know personally that some of the antipsychotic drugs are causing my wife to slowly become disabled, to the point where she can hardly work any longer. Then on the other hand, if she wasn’t medicated her bipolar symptoms would come raging back. I feel like maybe we are just making a choice between two evils: stay medicated and risk losing your natural abilities to function, or just be “crazy.” After all, look at Van Gogh: if he had been on Thorazine his work would have suffered and he probably would have killed himself anyway.
