The Advent of Psychiatry and the Rise of Mental Illness in America

I mentioned back in Part 1 of my series on my withdrawal from psychiatric medication that I might post the research paper I wrote about the subject online. Now, three years later, having successfully come off all medications and lived med-free for a year, I’ve decided to finally post it. This will have a decidedly different tone from my usual posts, and it includes citations and endnotes because it was written as an undergraduate paper several years ago. For a more personal explanation of my own story and the important supplements I’ve discovered to make this lifestyle possible, please read Part 4 and Part 5 of my Withdrawal series.

I promise to return to my usual style of posting after this! I have a number of important faith subjects I intend to tackle this year. But for now, here is my research paper, tweaked slightly in formatting for the purposes of making it a blog post:

The Advent of Psychiatry and the Rise of Mental Illness in America

            If you were diagnosed with manic-depression (now called bipolar disorder) while living in pre-1970 America, you would have a 75-90% chance of a good long-term outcome. Today your chance would be 33%. Prior to 1950, one out of every ten thousand Americans received such a diagnosis. Today that number has jumped to one out of every forty (Whitaker 192-193). If you find that hard to swallow, consider this: in 1955, depression severely impaired about 0.02% of the total American population; by 2014, the number had risen to 12.5% (Whitaker 151; NIMH RSS). Psychiatry has made a number of major breakthroughs in the past sixty years, and there are numerous psychopharmaceutical treatments now available for doctors to prescribe, but mental illness in America has not declined; on the contrary, it has exploded (Whitaker 5). Where is psychiatry going wrong? A look into the history of the profession shines a light on information little known by the general public, and it raises the uncomfortable possibility that the psychopharmaceutical treatments themselves are doing more harm than good.

Psychiatry as we know it today was born in the mid-1900s, during the era of “magic bullet” medications. German scientist Paul Ehrlich coined the term when, in 1909, he discovered a compound that cured syphilis without harming the infected patient. In 1935, the Bayer chemical company discovered a drug that cured staphylococcal and streptococcal infections. Penicillin came to market in the early 1940s, and other antibiotics followed hot on its heels, offering cures for pneumonia, scarlet fever, diphtheria, tuberculosis, and many other diseases (Whitaker 40-41). The magic bullet revolution had begun in earnest. It was time for psychiatry to catch up with the rest of the medical field. The National Institute of Mental Health (NIMH) was founded in 1949 to oversee a much-needed reform of the mental health system (see endnote 1), and a few years later, the profession had finally developed some “magic bullets” of its own. But they did not arrive in the same way as had other such discoveries (Whitaker 46).

The grandparents of today’s psychopharmaceuticals were all stumbled across unexpectedly while scientists were looking for other things. What would become the first antipsychotic medication was discovered in 1946 by scientists trying to formulate a compound that would cure diseases such as malaria and African sleeping sickness. Though the research did not work out the way they had hoped, a compound they discovered in the process seemed to have promising potential as an anesthetic. After more research on it, they were able to develop a drug that seemed to disconnect parts of the brain that controlled motor movements and emotional responses, without inducing unconsciousness. It was considered a breakthrough in anesthesiology. It was in 1951 that the drug, called chlorpromazine, was first suggested as a possible treatment for psychiatric ailments, since it produced “a veritable medicinal lobotomy” (qtd. in Whitaker 49). This “medicinal lobotomy” was marketed to the American public in 1954 as Thorazine, the first antipsychotic medication for the treatment of schizophrenia (Whitaker 47-51).

Thorazine Advertisement

Thorazine became psychiatry’s first magic bullet medication, thus bringing the profession up to speed with the rest of the medical field (Whitaker 59). An article published in Time magazine on June 14th, 1954 claimed that the new “wonder drug” allowed patients to “sit up and talk sense with [the doctor], perhaps for the first time in months” (qtd. in Whitaker 58). Articles in the New York Times called the drug miraculous, claiming that it brought “peace of mind” and “freedom from confusion” to psychiatric patients (qtd. in Whitaker 58). The Smith Kline and French company had obtained approval from the FDA to sell this medication in America, and according to the company’s president, Thorazine had been put through the most stringent of tests and had been proven safe for human administration. But though the company had done extensive animal testing of the drug, fewer than 150 psychiatric patients had been exposed to it at the time the company submitted its application to the FDA (Whitaker 58). Furthermore, the French researchers who had initially discovered the drug had found that it worsened the conditions of one third of the schizophrenic patients they treated with it. It was not, in their opinion, a cure for the disease. Nevertheless, because studies in the United States showed that the drug worked, on average, marginally better than a placebo, it was marketed to the American public as a key breakthrough for psychiatry (Healy 88).

Given the praise lavished on Thorazine at the time of its release, one would expect it to have had a significant impact on the treatment of the mentally ill (see endnote 2). Initially, the short-term effects of the drug on patients seemed dramatic. A study conducted by the Psychopharmacology Service Center in 1961 found that 75% of patients treated with Thorazine or a similar drug were much improved over the course of six weeks, versus 20% of patients treated with a placebo (Whitaker 96). In 1977, a review of 149 similar trials concluded that in 83% of them, antipsychotic drugs were superior to placebo (Whitaker 97). However, when the Cochrane Collaboration (an international group of scientists not at that time funded by pharmaceutical companies) conducted a meta-analysis in 2007 of all the chlorpromazine-versus-placebo studies conducted up until that point, they were surprised at how weak the evidence of efficacy was. On average, seven patients had to be treated for every single case of “global improvement”; furthermore, the authors admitted that “this finding may be an overestimate of the positive and an underestimate of the negative effects of giving chlorpromazine” (qtd. in Whitaker 96-97 footnote).

Tardive Dyskinesia

This leads us to the question of negative side-effects. The test of time has shown that Thorazine provides questionable improvement at some steep costs. Over half of the patients treated with the drug in state hospitals developed tardive dyskinesia, a disfiguring, sometimes disabling movement disorder that remained even once the drug was withdrawn (Breggin 15; Whitaker 104). It has also been found that even though the drug can successfully combat psychosis over the short term, it increases a patient’s susceptibility to psychosis over the long term. For instance, in two drug-withdrawal trials, the NIMH found that 65% of the drug-treated patients relapsed when withdrawn from Thorazine, while only 7% of the placebo patients relapsed. It was also found that the higher the dose of medication pre-withdrawal, the greater the risk of relapse (Whitaker 99). Why? Thorazine and other antipsychotics have been shown to cause alterations in the brain that are often permanent after long-term use. The frontal lobes shrink, while the basal ganglia structures and the thalamus begin to swell. The latter effect results in patients becoming increasingly psychotic and more emotionally disengaged, while frontal lobe shrinkage eventually leads to frontotemporal dementia. In essence, the drug eventually increases the very symptoms it was supposed to treat (see endnote 3) (Whitaker 114; Frontotemporal Disorders).

While its fate is less than encouraging, Thorazine was only the first of many advances in the field of psychopharmacology. Other drugs launched between 1954 and 1959 included the anti-anxiety agent meprobamate, marketed as Miltown; the “psychic energizer” iproniazid; and the first tricyclic antidepressant, imipramine. Miltown had been accidentally discovered during the search for alternative antibiotics to penicillin. Iproniazid had been developed to treat tuberculosis, but was taken up as a potential treatment for depression after it showed the unexpected side-effect of causing patients to start gleefully dancing in the wards (Whitaker 52). Imipramine had been stumbled across by Swiss researchers searching for a treatment for schizophrenia (Fitzpatrick). These new discoveries were accidental, and none of them were “cures” in the sense that antibiotics were cures, because they were not treating the illness; they were simply treating the symptoms the illness caused (Whitaker 50-51). But this was not the picture painted for the American public.

At the time these new drugs were being discovered, the American Medical Association (AMA) had recently given up its role as a watchdog for the medical community. Previously, it had published a book each year detailing all of the drugs that had been proven safe and effective. But in 1951, the Durham-Humphrey Amendment was added to the 1938 Food, Drug, and Cosmetic Act. This amendment mandated that prescriptions would be required for most new drugs, as well as their refills, thus putting doctors into a much more profitable position than they had hitherto enjoyed. No longer would the public be coming to them solely because of their expertise, so it mattered less from a business perspective whether they made a point of dispensing only drugs proven to work. In 1952, the AMA ceased publishing its book of useful drugs and began to allow into its journals advertisements for drugs not approved by its Council on Pharmacy and Chemistry. A 1959 review found that 89% of these advertisements failed to provide information about the drugs’ side-effects, but the AMA received a convenient boost in advertising revenues—from $2.5 million in 1950 to $10 million in 1960. It even lobbied against a 1960 proposal by Tennessee senator Estes Kefauver that drug companies be required to prove to the FDA that their products worked (Whitaker 57). Such was the scene into which psychiatry stepped as it began to expand and improve in the public eye.

One of the next major breakthroughs in psychiatry came in 1988, when the drug company Eli Lilly released the antidepressant Prozac, the first selective serotonin reuptake inhibitor (SSRI). The drug was said to work because it caused serotonin to pile up at synapses in the brain, and since it was hypothesized that depression could be the result of low serotonin levels, the logic was that an SSRI drug would correct the chemical imbalance (Whitaker 79). Before the drug’s release, Eli Lilly employee Joachim Wernicke claimed it had “very few serious side effects,” and after its release, its efficacy was compared by some to that of antibiotics (qtd. in Whitaker 288; 291). According to the Texas Medication Algorithm Project, by 1994 Prozac and the other SSRIs that followed it had become the drugs of choice for treating depression (Healy 140). Psychiatrist Peter Kramer, in his book Listening to Prozac, announced that the drug even made some patients “better than well,” suggesting that people might be able to expect future pills to allow ordinary people to have whatever personality they wanted (qtd. in Whitaker 294). It seemed Eli Lilly had done something right.

Despite glowing reviews in the media, a look at the development of Prozac and the studies conducted with it reveals a very different side to the story. When the first human trial of the drug was conducted in 1977, Eli Lilly’s Ray Fuller admitted to his colleagues that “none of the eight patients who completed the four-week treatment showed distinct drug-induced improvement.” Furthermore, it had caused “a fairly large number of reports of adverse reactions” (qtd. in Whitaker 285). These included an incident of psychosis and a number of reports of akathisia—a state of agitated distress that increases the risk of suicide and violence. This was a problem for the company, and to solve it they decided that future studies would allow the use of benzodiazepines (anti-anxiety agents) to help suppress reports of akathisia and boost efficacy results, even though an Eli Lilly employee later admitted in court that such a decision confounded the results and “interfered with the analysis of both safety and efficacy” (qtd. in Whitaker 268). On top of that, in six out of seven studies that Eli Lilly conducted comparing Prozac to the tricyclic antidepressant imipramine, the latter proved more effective. In 1985, Germany’s licensing authority declared Prozac to be “totally unsuitable for the treatment of depression” (qtd. in Whitaker 286). In their study, it had caused an incidence rate of suicidal acts 5.6 times greater than that of imipramine. This increased risk of suicide was also found in many studies conducted in the United States, which on average showed that patients on Prozac committed twice the number of suicidal acts as patients on placebo (Healy 212).
In order to get the FDA’s approval for the drug and to gain acceptance for it in the medical community as an effective treatment, Eli Lilly chose to hide and intentionally misinterpret its own data regarding both its lack of efficacy and its potential to increase the risk of suicide (Breggin 14).

Advertisement for yourlawyer.com

Given such poor results in the studies, it should come as little surprise that the results of Prozac’s 1988 release to the public were less than positive at the grassroots level. By 1997, 39,000 adverse-event reports about the drug had flooded the FDA’s MedWatch program—far more than for any other drug in that nine-year period. These included instances of patients committing horrible crimes or suicide, as well as reports of numerous unpleasant side-effects, including psychotic depression, mania, hostility, amnesia, convulsions, and sexual dysfunction. Furthermore, according to FDA estimates, only about 1% of all adverse events end up being reported to the MedWatch program; if that estimate holds, the 39,000 reports represent only a small fraction of the actual poor responses to Prozac (Whitaker 287-288). There is also reason to believe that antidepressants such as Prozac have contributed to the skyrocketing number of patients being diagnosed with bipolar disorder. A survey of members of the Depressive and Manic-Depressive Association showed that 60% of those with bipolar disorder had been exposed to an antidepressant prior to their diagnosis (Whitaker 175-177; 181). The generally accepted belief is that antidepressants simply reveal a pre-existing condition by triggering mania that would have eventually appeared on its own (Bressert); however, compare the aforementioned number of people diagnosed with bipolar disorder before the advent of antidepressants with the number diagnosed with the same disorder today. Keep in mind, too, that the expectancy of good outcomes for bipolar patients today is far lower than it was fifty years ago.

After the advent of SSRIs, psychiatry’s next breakthrough came with the creation of a new class of antipsychotics, referred to as “atypicals,” that functioned somewhat differently, and supposedly more effectively, than typical antipsychotics like Thorazine (Atypical Antipsychotics). One such example is Eli Lilly’s Zyprexa, a drug brought to market in 1996. After its handling of Prozac, and the lawsuits that inevitably followed as a result, one would hope that the company’s approach to later medications might improve. Initial reviews after the drug’s release were encouraging. A number of psychiatrists at various academic schools declared that it was well tolerated by patients and that it caused a better global improvement of symptoms with fewer side-effects than the first atypical, Risperdal—a drug that had been brought to market by one of Eli Lilly’s competitors (Whitaker 301-302). Stanford University psychiatrist Alan Schatzberg described the new drug as “a potential breakthrough of tremendous magnitude” (qtd. in Whitaker 302). He might very well have been right; however, “tremendous magnitude” can be applied to negative events as well as positive, and the true nature of this “breakthrough” is questionable.

Adverse reactions to Zyprexa, as reported by CCHR International 

Psychiatric drug studies seem inevitably to shatter the glowing picture that drug companies paint of their products upon release. During Eli Lilly’s trials of Zyprexa, two-thirds of the patients were unable to complete the studies, 22% of those who did suffered a “serious” adverse event, and twenty patients died. Today the drug is well known to cause hypersomnia, excessive weight gain, diabetes, and a host of other troubling effects, including some of the very same problems caused by Thorazine (Whitaker 301). In 2005, a study conducted by the NIMH showed that there were “no significant differences” between atypical antipsychotics like Zyprexa and the typical antipsychotics they were supposed to replace; in fact, both classes of drugs had proven startlingly ineffective. Due to “inefficacy or intolerable effects,” 74% of the 1,432 patients had had to come off of the medications before the trial was complete (qtd. in Whitaker 303).

After seeing these results, it’s worth asking what exactly these drugs were supposedly doing in the first place. The theory widely considered common knowledge among the general public is that mental illness is due to chemical imbalances in the brain: for instance, depression is the result of a serotonin deficit, while schizophrenia is the result of an overactive dopamine system. These answers are simple, easy to understand, and easy to market medications with. But the chemical imbalance theory of mental illness has been repeatedly proven false. Numerous studies have shown that people with unmedicated depression have the very same variations in serotonin levels as those without depression, while schizophrenic patients who have never been exposed to medication have the very same dopamine levels and receptor numbers as people without the disorder (Whitaker 72-79). As editor-in-chief emeritus of the Psychiatric Times Ronald Pies wrote on July 11, 2011, “the ‘chemical imbalance’ notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists” (qtd. in Whitaker 365).

Rather than correct chemical imbalances in the brain, psychopharmaceuticals actually create them. As neuroscientist Steve Hyman explained, “[psychotropic drugs] create perturbations in neurotransmitter functions” (qtd. in Whitaker 83). In essence, these medications work by distorting the mechanisms of an ordinary brain in order to have an effect on the symptoms of the mental illness. The truth that is openly acknowledged within the medical community, but that the general public remains surprisingly ignorant of, is that there is still no known cause for any of the mental illnesses we see today. Thus, we have no way to treat the illnesses themselves. We are treating the symptoms, not the disease (Whitaker 84-85).

Schizophrenia Medication study
Source: Harrow, M. “Factors involved in outcome and recovery in schizophrenia patients not on antipsychotic medications.” The Journal of Nervous and Mental Disease, 195 (2007): 406-14

Perhaps one of the most telling examples of the long-term effects of psychiatric drugs can be seen in the study funded by the NIMH and conducted by psychologist Martin Harrow on sixty-four young schizophrenic patients. They were divided into two groups: those on antipsychotics, and those off antipsychotics. In 2007, Harrow announced that at the end of fifteen years, 40% of the group that was off antipsychotics were in recovery and 28% still suffered from psychotic symptoms. In the group that remained on antipsychotics, 5% were in recovery while 64% still suffered from psychotic symptoms (Whitaker 115-116). This may seem shocking, but it is far from the only evidence of schizophrenic patients faring better when not kept on antipsychotics long-term.

In 1978, the World Health Organization (WHO) launched a ten-country study, primarily enrolling patients suffering from a first episode of schizophrenia. All of those involved had been diagnosed using Western criteria. At the end of two years it was found that in “developed” countries, including the United States, just over one-third of the patients had had good outcomes, while nearly two-thirds had become chronically ill. In contrast, just over one-third of the patients in “developing” countries had become chronically ill, and nearly two-thirds had had good outcomes. What was the difference? WHO investigators found that 61% of patients in “developed” countries had remained on antipsychotics, while only 16% of patients in “developing” countries had done the same. In places where patients had fared the best, such as Agra, India, only around 3% of patients had remained on antipsychotics. Contrast this with Moscow, the site with the highest medication usage and the highest percentage of chronically ill patients (see endnote 4) (Whitaker 110-111).

What can we take away from all of this? I think that Robert Whitaker hit the nail on the head in his book Anatomy of an Epidemic when he stated that “[t]he psychopharmacology revolution was born from one part science and two parts wishful thinking” (47). Are psychopharmaceuticals behind the rise of mental illness over the past half-century? I think it’s safe to say that their indiscriminate use has, at the very least, been a significant contributing factor. Many doctors place far too much trust in the information they receive from drug companies. In 1992, the FDA’s Division of Neuropharmacological Drug Products warned that the testing done to acquire the FDA’s approval of a drug “may generate a misleadingly reassuring picture of a drug’s safety in use” (qtd. in Breggin 14). The drugs are by no means a cure, and while it isn’t true for every case, repeated studies have shown that many cases of depression, schizophrenia, and bipolar disorder can be handled more successfully when medication is either not used, or is limited to very short-term usage. This flies in the face of psychiatric convention, and one might very well ask if it’s truly possible for an entire profession to be so mistaken about its practice for so many decades. My response is a confident ‘yes.’ Case in point: bloodletting was once considered highly beneficial and was one of the most common medical practices for a span of nearly two thousand years (Bloodletting). In fact, I believe an argument could be made that one of the things the medical profession has been most successful at since the dawn of time is coming up with treatments that cause more harm than good, even when they are thought up with the best of intentions. This certainly seems to have been the case in psychiatry.

Notes

  1. The first half of the twentieth century was not one of psychiatry’s high points. The popular “cures” that the profession made use of included treatments such as convulsive therapies and frontal lobotomies. It wasn’t until 1948 that the deplorable treatment of the mentally ill in American asylums was brought to the attention of the public. That year, journalist Albert Deutsch published his book The Shame of the States, giving the nation a photographic tour of such facilities. The photos showed naked patients left in rooms with nothing but their own feces, overcrowded sleeping wards filled with threadbare cots, and facilities riddled with mold, rotted floors, and roofs that leaked. The public was horrified (Whitaker 43-45).
  2. Some credited Thorazine for emptying out America’s asylums, but this was incorrect. In 1955, there were 267,000 schizophrenic patients in state hospitals, and in 1963 there were 253,000—a modest reduction, at best. It wasn’t until the 1965 enactment of Medicare and Medicaid that the numbers of patients in asylums began to noticeably decline, since states began shipping their chronically ill patients out of state mental hospitals and into federally subsidized nursing homes in order to save money (Whitaker 93-94).
  3. In 1985, the publication of Dr. Peter Breggin’s book Psychiatric Drugs: Hazards to the Brain laid these results out for the public and pushed the FDA to upgrade its warnings about Thorazine. While it is still prescribed to patients today, it has fallen from favour in the wake of new drug developments (Breggin 15-16).
  4. Despite how alarming these results may first appear, it does not mean patients currently taking psychopharmaceuticals should abruptly stop them. In fact, doing so would be disastrous. The brain adapts itself to being on such medications for any length of time, and once it does so, any immediate withdrawal of them will almost certainly result in a relapse—likely more severe than previous ones. As Dr. Peter Breggin explains in his book Psychiatric Drug Withdrawal, “the brain can be slow to recover from its own biochemical adjustments or compensatory effects.” Coming off of psychiatric medications requires a carefully managed, often slow weaning process. Unfortunately, the fact that coming off of medication too quickly results in a relapse has reinforced the belief that the pills are helping to keep an otherwise out-of-control disease at bay (Breggin xxiii).

Works Cited

“Atypical Antipsychotics.” Drugs.com. Drugs.com. n.d. Web. 20 Mar. 2016.

“The Basics of Frontotemporal Disorders.” National Institute on Aging. U.S. Department of Health & Human Services, June 2014. Web. 20 Mar. 2016.

“Bloodletting.” Science Museum Brought to Life: Exploring the History of Medicine. Science Museum. n.d. Web. 20 Mar. 2016.

Breggin, Peter. Psychiatric Drug Withdrawal: A Guide for Prescribers, Therapists, Patients, and Their Families. New York: Springer Publishing Company, 2013. Print.

Bressert, Steve. “The Causes of Bipolar Disorder (Manic Depression).” Psych Central. Psych Central. Web. 20 Mar. 2016.

Fitzpatrick, Laura. “A Brief History of Antidepressants.” Time. Time Inc., 07 Jan. 2010. Web. 20 Mar. 2016.

Healy, David. Pharmageddon. Berkeley: U of California P, 2012. Print.

“Major Depression with Severe Impairment Among Adolescents.” NIMH RSS. National Institutes of Health. n. d. Web. 20 Mar. 2016.

“Major Depression with Severe Impairment Among Adults.” NIMH RSS. National Institutes of Health. n. d. Web. 20 Mar. 2016.

Whitaker, Robert. Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. New York: Broadway Books, 2015. Print.

~~~

That concludes my paper. Keep in mind that due to restrictions on length, this was a very cursory treatment of the subject. I strongly encourage you to do your own research. Check out my sources, especially Whitaker’s book. Visit Dr. Breggin’s website and see what he has to say about psych meds and withdrawal from them. And look into the very effective alternatives to psychiatric medications which I detail at length in my post on Med-Free Bipolar. This information is for anyone, with any mental illness, on any psychiatric medication. If you or a loved one has been diagnosed with an illness and prescribed psych meds, please, please look into this further. You owe it to yourself and your loved ones to be armed with knowledge so you can take the best care of yourselves that you can.

If you have any questions, please leave a comment and I will do my best to find you an answer!

Until next time, take care and God bless,

Kasani


Withdrawal – Part 3: Joyful People Suffer

It’s been a long time since I posted anything. Or at least, it feels like a long time. Realistically it’s only been a few months, but that might as well have been a lifetime ago. A lot has happened since then.

I’d like to start with the good news: I successfully came off of my last medication (Lamictal/Lamotrigine) mid-December last year. It was, in a way, the most freeing experience of my life. It precipitated a manic episode that ended with me in the hospital, but that’s all right. I learned a lot from it. Christmas 2017 was beautiful for me. So many blessings. I had a strong re-conversion experience in which I gave my life to Jesus again to do with me what he willed. Admittedly, if I’d known doing that would end with me in a hospital, I probably would have hesitated. But God knows our weakness. He hid from me how things were going to turn out. He wanted my complete and unconditional trust, and he was there for me every step of the way. He and His mother, Mary.

I plan to write a blog series explaining what happened. For now, though, I’m still processing everything and picking up the pieces (i.e. catching up on everything I’m behind on after two weeks out-of-commission, and praying to discern God’s will moving forward). I just wanted to send a shout out to my few followers that yes, I am still alive! And I’m doing great. Just decidedly worn out after everything. I look forward to writing more in the future.

Until then, take care and God bless!

Kasani

(Click here for Part 4)


Withdrawal – Part 1: Have Blind Faith in God, not Doctors

We know that all things work for good for those who love God, who are called according to his purpose ~ Romans 8:28

Sometimes God’s purpose isn’t at all what we have in mind.

I arrived home last December after an hour-long drive in the dark, having just completed a grueling four-and-a-half-hour final exam for my history course. I was tired but content. Finals were over. I was very ready to eat supper, say the rosary with my parents, and pass out for the night.

That wasn’t quite how the evening went.

I walked through the door, kicked off my boots and shuffled into the kitchen where I found my mother waiting.

“Finally done.” I offered her a tired grin. “I think it went well.”

“You have to come off all of your medications.”

She was clearly agitated, hovering by the island in the center of the kitchen with her Kindle in her hand.

I stared at her blankly.

“Huh?”

I’d been relatively stable for the past year, with only two or three mild episodes. I’d finally stopped rapid-cycling two years previously. As far as I was concerned, bipolar disorder was no longer something I had to worry much about. I had found a medication combination that worked, and it didn’t give me side-effects. I had a very effective anti-psychotic medication on hand to prevent me from ever having to go through another psycho-manic episode. I barely gave my disorder a second thought anymore. The tendonitis in both of my elbows that I’d spent the past two years combating was a much bigger problem in my mind, and more than enough of a cross to bear.

My mother is a pharmacist—not exactly a profession that encourages an anti-pill mentality. And yet, much to my alarm, she proceeded to explain to me that my medications were all doing terrible things to my brain, and that if I didn’t come off of them I would wind up in a really bad condition years down the road.

This wasn’t something I wanted to hear. My medications were my safety blanket. They kept me in control. I was not going to stop them. No way. She was nuts.


After a lot of anxious wheedling, my mother succeeded in talking me into at least reading up on what she’d discovered. Upon talking to my awesome history prof at the beginning of the winter semester, I received permission to write my research paper on the topic in order to kill two birds with one stone. So I purchased the audiobook of Robert Whitaker’s Anatomy of an Epidemic and began making my way through it during the drive home from school twice per week.

I can honestly say it made for the most upsetting, discouraging, enraging research project I have ever conducted. Upon completing that book I ordered in David Healy’s book Pharmageddon and used the index to find all of the parts related to psychiatric medication. It only confirmed what I’d already come to accept after Whitaker’s book— namely, that drug companies are among the most corrupt things on the face of the planet, that psychiatry has, in some ways, done much more harm than good to society, and that, to my great dismay, my mother was right.

So to make a long story short, I’m coming off my medications this summer. I wanted to wait until after the semester was over before I started, or I would have probably started back in February. I made my first cutback on my antidepressant, bupropion (better known by its brand name Wellbutrin) on April 23rd, from 150mg to 125mg. It resulted in a week of discomfort, rather distinct discomfort on a few occasions, but I seem to have bounced back to normal. I’ll be cutting back again this coming Saturday, assuming I remain stable between now and then.

Looked at from a stance of blissful ignorance, what I’m doing is utterly absurd. The daughter of a good friend of mine has flat-out stated she thinks I’m crazy. I am rather tempted to point out, in good humour, that I fall under that category by default, seeing as I have a mental illness. But the choice to come off of my medications was far from arbitrary. In fact, after much prayer and deliberation I’m quite certain that this is God’s plan for me right now. I’ll probably be writing posts about this now and then throughout the summer. I don’t expect this process to be a smooth one, and pain is an excellent fertilizer for growth. Not that I go out of my way to experience it, but Jesus himself pointed out that his Father prunes us to make us bear more fruit (John 15:1-8). Pruning is rarely pleasant.

Yes, I received permission from my psychiatrist to do this. In fact, I went into the appointment expecting to have to argue with her to let me do so; instead, she barely batted an eyelash, made no protest at all, and asked me why I hadn’t already come off of my medications since I no longer wanted to take them.

*cough*

So after I picked my jaw up off the floor, I was told that I could stop the bupropion I’ve been on for three and a half years cold-turkey without any negative side-effects— even though that flies in the face of everything I’ve read about coming off antidepressants. Needless to say, I disagreed. And after how I felt last week, I’m glad I decided to taper off slowly.

What exactly did my mother and I discover to make us want to do this? For an answer to that, I strongly encourage you to check out the books I mentioned above, particularly Whitaker’s. If you or a loved one are on any psychiatric medications, this is information you need to know. I’m not at all suggesting everyone should drop their meds. That isn’t feasible for everyone, especially if a person has been on their pills for many years. But people need to know this stuff. It’s serious. And it’s not a conspiracy theory. As one of the professors at my university bluntly stated when I gave a presentation on this topic, “I always tell my students to never believe in conspiracy theories, unless they involve drug companies.” The evidence of corruption, fraud, and downright criminal activity is freely available to people who choose to look for it. A look at the various lawsuits that have been filed against companies like Eli Lilly is telling. The numerous studies that have been swept under the carpet because their results were inconvenient are even more telling.

I have now posted my research essay online, so feel free to read it if you’re interested in more details. That said, I’d rather people check out my sources and do their own research. You are very unlikely to hear about any of it from your psychiatrist. I certainly never heard it from mine. Misinformation among the general public is rampant, drug companies encourage it, and many doctors buy into it as well.

Just one example:

Were you aware that the chemical-imbalance theory of mental illness is completely false? The medical community no longer accepts it because it has been proven wrong so many times over the past 40 years. In fact, there’s never been any solid evidence to support it. But the general public has been repeatedly informed that depression is the result of a serotonin deficiency (or some other chemical imbalance) and schizophrenia is chalked up to an overactive dopamine system. The reality is that studies have repeatedly shown unmedicated schizophrenics have the very same dopamine systems as healthy individuals, and unmedicated patients suffering from depression have the very same variations in serotonin levels as healthy individuals. Check out Whitaker’s book if you don’t believe me.

A quote from Ronald Pies, editor-in-chief emeritus of the Psychiatric Times on July 11, 2011, says it all:

“I am not one who easily loses his temper, but I confess to experiencing markedly increased limbic activity whenever I hear someone proclaim, “Psychiatrists think all mental disorders are due to a chemical imbalance!” In the past 30 years, I don’t believe I have ever heard a knowledgeable, well-trained psychiatrist make such a preposterous claim, except perhaps to mock it. On the other hand, the “chemical imbalance” trope has been tossed around a great deal by opponents of psychiatry, who mendaciously attribute the phrase to psychiatrists themselves. And, yes—the “chemical imbalance” image has been vigorously promoted by some pharmaceutical companies, often to the detriment of our patients’ understanding. In truth, the “chemical imbalance” notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists.”

I offer in rebuttal two sources that (incorrectly) support the “preposterous” chemical imbalance theory. The first is The Bipolar Disorder Survival Guide by David J. Miklowitz, PhD, published in 2011. First off, in a list of the various things that influence bipolar disorder he includes:

“biological agents–abnormal functioning of brain circuits involving neurotransmitters such as dopamine” (pg 75).

On the very same page he adds:

“Your brain may be over- or underproducing certain neurotransmitters, such as dopamine, serotonin, norepinephrine, or GABA.”

Farther into the book he explains:

“we suspect that people with bipolar disorder have disturbances in intracellular signalling cascades, which regulate the neurotransmitter, neuropeptide, and hormonal systems that are central to the limbic system” (pg 88). (Emphasis in the original, not added by me.)

A short while later he adds:

“bipolar disorder is believed to be related to diminished functioning of the serotonin system…. bipolar disorder has been related to increased sensitivity of the dopamine receptors and changes in the regulation of dopamine ‘reward pathways'” (pg 90).

In Miklowitz’s defense, he nowhere claims that the chemical imbalance theory has been proven 100% true, or is the entire cause of the disorder. But he certainly doesn’t shoot it down as incorrect either.

My second source is the textbook from my Psych 101 university course, Psychology: Themes and Variations by Wayne Weiten and Doug McCann. They claim:

“Recent evidence suggests that a link may exist between anxiety disorders and neurochemical activity in the brain…. Abnormalities in neural circuits using serotonin have recently been implicated in panic and obsessive-compulsive disorders. Thus, scientists are beginning to unravel the neurochemical basis for anxiety disorders” (pg 651).

Later on they claim:

“Correlations have been found between mood disorders and abnormal levels of two neurotransmitters in the brain: norepinephrine and serotonin, although other neurotransmitter disturbances may also contribute. The details remain elusive, but it seems clear that a neurochemical basis exists for at least some mood disorders” (pgs 661-662).

These quotes are taken from the Third Canadian Edition which was published in 2013.

I am not quoting these to defend the chemical imbalance theory. In fact, from what I’ve read elsewhere, I am as convinced as Dr. Pies that the theory is bogus. However, his snide derision of psychiatry’s opponents is as absurd as the theory itself. This theory has been propounded for years by psychiatrists and by people teaching psychiatric students in universities. Is he making the claim that none of these people were “knowledgeable” or “well-trained”? Am I “mendaciously” making up these quotes to smear psychiatrists? I’ve got both books sitting open beside me on the desk. Check them out for yourself if you don’t believe me.

So to make a long-winded story short, you can’t just blindly trust professionals. Do some research of your own. But don’t panic and go off your meds cold-turkey if what you find freaks you out. That’s dangerous. Do more research and taper off slowly, preferably under the supervision of a doctor who is willing to help you.

That turned into a much longer rant than I intended. That always happens when I get on this topic. Anyway, please check this stuff out yourself. And I’ll keep you posted on how things go on my end.

Take care and God bless,

Kasani

March 2019 Edit: I’ve now been successfully off of all my medications for over a year, and am doing much better now than I ever was on my medications. I encourage you to please check out Part 4 and Part 5 of this series for an explanation of why this is so and how you might be able to achieve the same result in your own life. God bless you!

(Click here for Part 2)