The Advent of Psychiatry and the Rise of Mental Illness in America

I mentioned back in Part 1 of my series on my withdrawal from psychiatric medication that I might post the research paper I wrote about the subject online. Now, three years later, having successfully come off all medications and lived med-free for a year, I’ve decided to finally post it. It will have a decidedly different tone from my usual posts, and it includes citations and endnotes because it was written as an undergraduate paper several years ago. For a more personal explanation of my own story and the important supplements I’ve discovered to make this lifestyle possible, please read Part 4 and Part 5 of my Withdrawal series.

I promise to return to my usual style of posting after this! I have a number of important faith subjects I intend to tackle this year. But for now, here is my research paper, tweaked slightly in formatting for the purposes of making it a blog post:

The Advent of Psychiatry and the Rise of Mental Illness in America

If you were diagnosed with manic-depression (now called bipolar disorder) while living in pre-1970 America, you would have had a 75-90% chance of a good long-term outcome. Today your chance would be 33%. Prior to 1950, one out of every ten thousand Americans received such a diagnosis. Today that number has jumped to one out of every forty (Whitaker 192-193). If you find that hard to swallow, consider this: in 1955, depression severely impaired about 0.02% of the total American population; by 2014, the number had risen to 12.5% (Whitaker 151; NIMH RSS). Psychiatry has made a number of major breakthroughs in the past sixty years, and there are numerous psychopharmaceutical treatments now available for doctors to prescribe, but mental illness in America has not declined; on the contrary, it has exploded (Whitaker 5). Where is psychiatry going wrong? A look into the history of the profession shines a light on information little known by the general public, and it raises the uncomfortable possibility that the psychopharmaceutical treatments themselves are doing more harm than good.

Psychiatry as we know it today was born in the mid-1900s, during the era of “magic bullet” medications. German scientist Paul Ehrlich coined the term when, in 1909, he discovered a compound that cured syphilis without harming the infected patient. In 1935, the Bayer chemical company discovered a drug that cured staphylococcal and streptococcal infections. Penicillin came to market in the early 1940s, and other antibiotics followed hot on its heels, offering cures for pneumonia, scarlet fever, diphtheria, tuberculosis, and many other diseases (Whitaker 40-41). The magic bullet revolution had begun in earnest, and it was time for psychiatry to catch up with the rest of the medical field. The National Institute of Mental Health (NIMH) was founded in 1949 to oversee a much-needed reform of the mental health system (see endnote 1), and a few years later the profession had finally developed some “magic bullets” of its own. But they did not arrive in the same way other such discoveries had (Whitaker 46).

The grandparents of today’s psychopharmaceuticals were all stumbled across unexpectedly while scientists were looking for other things. What would become the first antipsychotic medication was discovered in 1946 by scientists trying to formulate a compound that would cure diseases such as malaria and African sleeping sickness. Though the research did not work out the way they had hoped, a compound they discovered in the process seemed to have promising potential as an anesthetic. After further research, they were able to develop a drug that seemed to disconnect the parts of the brain controlling motor movements and emotional responses, without inducing unconsciousness. It was considered a breakthrough in anesthesiology. It was in 1951 that the drug, called chlorpromazine, was first suggested as a possible treatment for psychiatric ailments, since it produced “a veritable medicinal lobotomy” (qtd. in Whitaker 49). This “medicinal lobotomy” was marketed to the American public in 1954 as Thorazine, the first antipsychotic medication for the treatment of schizophrenia (Whitaker 47-51).

Thorazine Advertisement

Thorazine became psychiatry’s first magic bullet medication, bringing the profession up to speed with the rest of the medical field (Whitaker 59). An article published in Time magazine on June 14, 1954, claimed that the new “wonder drug” allowed patients to “sit up and talk sense with [the doctor], perhaps for the first time in months” (qtd. in Whitaker 58). Articles in the New York Times called the drug miraculous, claiming that it brought “peace of mind” and “freedom from confusion” to psychiatric patients (qtd. in Whitaker 58). The Smith, Kline & French company had obtained approval from the FDA to sell the medication in America, and according to the company’s president, Thorazine had been put through the most stringent of tests and proven safe for human administration. Yet although the company had done extensive animal testing of the drug, fewer than 150 psychiatric patients had been exposed to it at the time the company submitted its application to the FDA (Whitaker 58). Furthermore, the French researchers who had initially discovered the drug had found that it worsened the conditions of one-third of the schizophrenic patients they treated with it. It was not, in their opinion, a cure for the disease. Nevertheless, because studies in the United States showed that the drug worked, on average, marginally better than a placebo, it was marketed to the American public as a key breakthrough for psychiatry (Healy 88).

Given the praise lavished on Thorazine at the time of its release, one would expect it to have had a significant impact on the treatment of the mentally ill (see endnote 2). Initially, the short-term effects of the drug on patients seemed dramatic. A study conducted by the Psychopharmacology Service Center in 1961 found that 75% of patients treated with Thorazine, or a similar drug, were much improved over the course of six weeks, versus 20% of patients treated with a placebo (Whitaker 96). In 1977, a review of 149 similar trials concluded that in 83% of them, antipsychotic drugs were superior to placebo (Whitaker 97). However, when the Cochrane Collaboration (an international group of scientists not at that time funded by pharmaceutical companies) conducted a meta-analysis in 2007 of all the chlorpromazine-versus-placebo studies conducted up to that point, its members were surprised at how weak the evidence of efficacy for the drug was. On average, seven patients had to be treated to produce a single case of “global improvement”; furthermore, the reviewers admitted that “this finding may be an overestimate of the positive and an underestimate of the negative effects of giving chlorpromazine” (qtd. in Whitaker 96-97 footnote).

Tardive Dyskinesia

This leads us to the question of negative side effects. The test of time has shown that Thorazine provides questionable improvement at a steep cost. Over half of the patients treated with the drug in state hospitals developed tardive dyskinesia, a disfiguring, sometimes disabling movement disorder that remained even once the drug was withdrawn (Breggin 15; Whitaker 104). It has also been found that even though the drug can successfully combat psychosis over the short term, it increases a patient’s susceptibility to psychosis over the long term. For instance, in two drug-withdrawal trials, the NIMH found that 65% of the drug-treated patients relapsed when withdrawn from Thorazine, while only 7% of the placebo patients relapsed. It was also found that the higher the dose of medication pre-withdrawal, the greater the risk of relapse (Whitaker 99). Why? Thorazine and other antipsychotics have been shown to cause alterations in the brain that are often permanent after long-term use. The frontal lobes shrink, while the basal ganglia structures and the thalamus begin to swell. The latter effect results in patients becoming increasingly psychotic and more emotionally disengaged, while frontal lobe shrinkage can eventually lead to frontotemporal dementia. In essence, the drug eventually increases the very symptoms it was supposed to treat (see endnote 3) (Whitaker 114; Frontotemporal Disorders).

While its legacy is less than encouraging, Thorazine was only the first of many advances made in the field of psychopharmacology. Other drugs launched between 1954 and 1959 included the anti-anxiety agent meprobamate, marketed as Miltown; the “psychic energizer” iproniazid; and the first tricyclic antidepressant, imipramine. Miltown had been accidentally discovered during the search for alternative antibiotics to penicillin. Iproniazid had been developed for the treatment of tuberculosis, but it was turned to as a potential treatment for depression because it had the unexpected side effect of causing patients to start gleefully dancing in the wards (Whitaker 52). Imipramine had been stumbled across by Swiss researchers while they were searching for a treatment for schizophrenia (Fitzpatrick). These new discoveries were accidental, and none of them were “cures” in the sense that antibiotics were cures, because they did not treat the illness itself; they simply treated the symptoms it caused (Whitaker 50-51). But this was not the picture that was painted for the American public.

At the time these new drugs were being discovered, the American Medical Association (AMA) had recently given up its role as a watchdog for the medical community. Previously, it had published a book each year detailing all of the drugs that had been proven safe and effective. But in 1951, the Durham-Humphrey Amendment was added to the 1938 Food, Drug, and Cosmetic Act. The amendment mandated that prescriptions would be required for most new drugs, as well as their refills, thus putting doctors into a much more profitable position than they had hitherto enjoyed. Since the public no longer came to them solely for their expertise, it mattered less from a business perspective whether they made a point of dispensing only drugs proven to work. In 1952, the AMA ceased publishing its book of useful drugs and began to allow its journals to carry advertisements for drugs not approved by its Council on Pharmacy and Chemistry. A 1959 review found that 89% of these advertisements failed to provide information about the drugs’ side effects, but the AMA received a convenient boost in advertising revenues—from $2.5 million in 1950 to $10 million in 1960. It even lobbied against a proposal put forward by Tennessee senator Estes Kefauver in 1960 that drug companies be required to prove to the FDA that their products worked (Whitaker 57). Such was the scene into which psychiatry stepped as it began to expand and improve in the public eye.

One of psychiatry’s next major breakthroughs came in 1988, when the drug company Eli Lilly released the antidepressant Prozac, the first selective serotonin reuptake inhibitor (SSRI). The drug was said to work because it caused serotonin to pile up at synapses in the brain, and since it was hypothesized that depression could be the result of low serotonin levels, the logic was that an SSRI would correct the chemical imbalance (Whitaker 79). Before the drug’s release, Eli Lilly employee Joachim Wernicke claimed it had “very few serious side effects,” and after its release, some compared its efficacy to that of antibiotics (qtd. in Whitaker 288; 291). According to the Texas Medication Algorithm Project in 1994, Prozac and the other SSRIs that followed it had become the drugs of choice for treating depression (Healy 140). Psychiatrist Peter Kramer, in his book Listening to Prozac, announced that the drug even made some patients “better than well,” suggesting that people might one day expect pills that would allow ordinary people to have whatever personality they wanted (qtd. in Whitaker 294). It seemed Eli Lilly had done something right.

Despite glowing reviews in the media, a look at the development of Prozac and the studies conducted with it reveals a very different side to the story. When the first human trial of the drug was conducted in 1977, Eli Lilly’s Ray Fuller admitted to his colleagues that “none of the eight patients who completed the four-week treatment showed distinct drug-induced improvement.” Furthermore, it had caused “a fairly large number of reports of adverse reactions” (qtd. in Whitaker 285). These included an incident of psychosis and a number of reports of akathisia—a state of agitated distress that increases the risk of suicide and violence. This was a problem for the company, and to solve it the company decided that future studies would allow the use of benzodiazepines (anti-anxiety agents) to help suppress reports of akathisia and boost efficacy results, even though an Eli Lilly employee later admitted in court that this decision confounded the results and “interfered with the analysis of both safety and efficacy” (qtd. in Whitaker 268). On top of that, in six out of seven studies that Eli Lilly conducted comparing Prozac to the tricyclic antidepressant imipramine, the older drug proved more effective. In 1985, Germany’s licensing authority declared Prozac to be “totally unsuitable for the treatment of depression” (qtd. in Whitaker 286). In its review, the drug had caused an incidence of suicidal acts 5.6 times greater than that of imipramine. This increased risk of suicide was also found in many studies conducted in the United States, which on average showed that patients on Prozac committed twice as many suicidal acts as patients on placebo (Healy 212). In order to get the FDA’s approval for the drug and to gain acceptance for it in the medical community as an effective treatment, Eli Lilly chose to hide and intentionally misinterpret its own data regarding both its lack of efficacy and its potential to increase the risk of suicide (Breggin 14).

Advertisement for yourlawyer.com

Given such poor results in the studies, it should come as little surprise that the public’s experience after Prozac’s 1988 release was less than positive at the grassroots level. By 1997, 39,000 adverse-event reports about the drug had flooded the FDA’s MedWatch program—far more than for any other drug in that nine-year period. These included reports of patients committing horrible crimes or suicide, along with numerous unpleasant side effects, including psychotic depression, mania, hostility, amnesia, convulsions, and sexual dysfunction. Furthermore, according to FDA estimates, only about 1% of all adverse events ever get reported to the MedWatch program, so it can safely be assumed that the 39,000 reports represented only about 1% of the actual poor responses to Prozac (Whitaker 287-288). There is also reason to believe that antidepressants such as Prozac have contributed to the skyrocketing number of patients being diagnosed with bipolar disorder. A recent survey of members of the Depressive and Manic-Depressive Association showed that 60% of those with bipolar disorder had been exposed to an antidepressant prior to their diagnosis (Whitaker 175-177; 181). The generally accepted belief is that antidepressants simply reveal a pre-existing condition by triggering mania that would eventually have appeared on its own (Bressert); however, a comparison of the number of people diagnosed with bipolar disorder before the advent of antidepressants with the number diagnosed today is telling. Keep in mind, too, that the expectancy of good outcomes for bipolar patients today is far lower than it was fifty years ago.

After the advent of SSRIs, psychiatry’s next breakthrough came with the creation of a new class of antipsychotics, referred to as “atypicals,” that functioned somewhat differently, and supposedly more effectively, than typical antipsychotics like Thorazine (Atypical Antipsychotics). One example is Eli Lilly’s Zyprexa, a drug brought to market in 1996. After the company’s handling of Prozac, and the lawsuits that inevitably followed, one would hope that its approach to later medications might improve. Initial reviews after the drug’s release were encouraging. A number of psychiatrists at various academic institutions declared that it was well tolerated by patients and that it produced better global improvement of symptoms with fewer side effects than the first atypical, Risperdal—a drug that had been brought to market by one of Eli Lilly’s competitors (Whitaker 301-302). Stanford University psychiatrist Alan Schatzberg described the new drug as “a potential breakthrough of tremendous magnitude” (qtd. in Whitaker 302). He might very well have been right; however, “tremendous magnitude” can apply to negative events as well as positive ones, and the true nature of this “breakthrough” is questionable.

Adverse reactions to Zyprexa, as reported by CCHR International 

Psychiatric drug studies seem to inevitably shatter the glowing picture that drug companies paint of their products upon release. During Eli Lilly’s trials of Zyprexa, two-thirds of the patients were unable to complete the studies, 22% of those who did suffered a “serious” adverse event, and twenty patients died. Today the drug is well known to cause hypersomnia, excessive weight gain, diabetes, and a host of other troubling effects, including some of the very same problems caused by Thorazine (Whitaker 301). In 2005, a study conducted by the NIMH showed that there were “no significant differences” between atypical antipsychotics like Zyprexa and the typical antipsychotics they were supposed to replace; in fact, both classes of drugs proved startlingly ineffective. Due to “inefficacy or intolerable effects,” 74% of the 1,432 patients had to come off the medications before the trial was complete (qtd. in Whitaker 303).

After seeing these results, it’s worth asking what exactly these drugs were supposed to be doing in the first place. The theory widely considered common knowledge among the general public is that mental illness is caused by chemical imbalances in the brain: depression, for instance, is the result of a serotonin deficit, while schizophrenia is the result of an overactive dopamine system. These answers are simple, easy to understand, and easy to market medications with. But the chemical imbalance theory of mental illness has been repeatedly proven false. Numerous studies have shown that people with unmedicated depression have the very same variations in serotonin levels as those without depression, while schizophrenic patients who have never been exposed to medication have the very same dopamine levels and receptor numbers as people without the disorder (Whitaker 72-79). As Ronald Pies, editor-in-chief emeritus of the Psychiatric Times, wrote on July 11, 2011, “the ‘chemical imbalance’ notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists” (qtd. in Whitaker 365).

Rather than correct chemical imbalances in the brain, psychopharmaceuticals actually create them. As neuroscientist Steve Hyman explained, “[psychotropic drugs] create perturbations in neurotransmitter functions” (qtd. in Whitaker 83). In essence, these medications work by distorting the mechanisms of an ordinary brain in order to have an effect on the symptoms of the mental illness. The truth that is openly acknowledged within the medical community, but that the general public remains surprisingly ignorant of, is that there is still no known cause for any of the mental illnesses we see today. Thus, we have no way to treat the illnesses themselves. We are treating the symptoms, not the disease (Whitaker 84-85).

Schizophrenia Medication study
Source: Harrow, M. “Factors involved in outcome and recovery in schizophrenia patients not on antipsychotic medications.” The Journal of Nervous and Mental Disease, 195 (2007): 406-14

Perhaps one of the most telling illustrations of the long-term effects of psychiatric drugs can be seen in the NIMH-funded study conducted by psychologist Martin Harrow on sixty-four young schizophrenic patients. They were divided into two groups: those on antipsychotics and those off antipsychotics. In 2007, Harrow announced that at the end of fifteen years, 40% of the group off antipsychotics were in recovery and 28% still suffered from psychotic symptoms. In the group that remained on antipsychotics, 5% were in recovery, while 64% still suffered from psychotic symptoms (Whitaker 115-116). This may seem shocking, but it is far from the only evidence that schizophrenic patients fare better when not kept on antipsychotics long-term.

In 1978, the World Health Organization (WHO) launched a ten-country study, primarily enrolling patients suffering from a first episode of schizophrenia. All of those involved had been diagnosed using Western criteria. At the end of two years it was found that in “developed” countries, including the United States, just over one-third of the patients had had good outcomes, while nearly two-thirds had become chronically ill. In contrast, just over one-third of the patients in “developing” countries had become chronically ill, and nearly two-thirds had had good outcomes. What was the difference? WHO investigators found that 61% of patients in “developed” countries had remained on antipsychotics, while only 16% of patients in “developing” countries had done the same. In the places where patients fared best, such as Agra, India, only around 3% of patients had remained on antipsychotics. Contrast this with Moscow, which had the highest medication usage and the highest percentage of chronically ill patients (see endnote 4) (Whitaker 110-111).

What can we take away from all of this? I think Robert Whitaker hit the nail on the head in his book Anatomy of an Epidemic when he stated that “[t]he psychopharmacology revolution was born from one part science and two parts wishful thinking” (47). Are psychopharmaceuticals behind the rise of mental illness over the past half-century? I think it’s safe to say that their indiscriminate use has, at the very least, been a significant contributing factor. Many doctors place far too much trust in the information they receive from drug companies. In 1992, the FDA’s Division of Neuropharmacological Drug Products warned that the testing done to acquire FDA approval of a drug “may generate a misleadingly reassuring picture of a drug’s safety in use” (qtd. in Breggin 14). The drugs are by no means a cure, and while it isn’t true in every case, repeated studies have shown that many cases of depression, schizophrenia, and bipolar disorder can be handled more successfully when medication is either not used at all or is limited to very short-term use. This flies in the face of psychiatric convention, and one might well ask whether it’s truly possible for an entire profession to be so mistaken about its practice for so many decades. My response is a confident ‘yes.’ Case in point: bloodletting was once considered highly beneficial and remained one of the most common medical practices for a span of nearly two thousand years (Bloodletting). In fact, I believe an argument could be made that one of the things the medical profession has been most successful at since the dawn of time is coming up with treatments that cause more harm than good, even when they are devised with the best of intentions. This certainly seems to have been the case in psychiatry.

Notes

  1. The first half of the twentieth century was not one of psychiatry’s high points. The popular “cures” the profession made use of included treatments such as convulsive therapies and frontal lobotomies. It wasn’t until 1948 that the deplorable treatment of the mentally ill in American asylums was brought to the attention of the public. That year, journalist Albert Deutsch published his book The Shame of the States, giving the nation a photographic tour of such facilities. The photos showed naked patients left in rooms with nothing but their own feces, overcrowded sleeping wards filled with threadbare cots, and facilities riddled with mold, rotted floors, and leaking roofs. The public was horrified (Whitaker 43-45).
  2. Some credited Thorazine with emptying out America’s asylums, but this was incorrect. In 1955, there were 267,000 schizophrenic patients in state hospitals, and in 1963 there were 253,000—a modest reduction at best. It wasn’t until the 1965 enactment of Medicare and Medicaid that the number of patients in asylums began to noticeably decline, as states began shipping their chronically ill patients out of state mental hospitals and into federally subsidized nursing homes in order to save money (Whitaker 93-94).
  3. In 1985, the publication of Dr. Peter Breggin’s book Psychiatric Drugs: Hazards to the Brain laid these results out for the public and pushed the FDA to upgrade its warnings about Thorazine. While the drug is still prescribed today, it has fallen from favor in the wake of newer drug developments (Breggin 15-16).
  4. Despite how alarming these results may at first appear, they do not mean that patients currently taking psychopharmaceuticals should abruptly stop them. In fact, doing so would be disastrous. The brain adapts itself to such medications when they are taken for any length of time, and once it does so, any immediate withdrawal will almost certainly result in a relapse—likely more severe than previous ones. As Dr. Peter Breggin explains in his book Psychiatric Drug Withdrawal, “the brain can be slow to recover from its own biochemical adjustments or compensatory effects.” Coming off of psychiatric medications requires a carefully managed, often slow weaning process. Unfortunately, the fact that coming off medication too quickly results in a relapse has reinforced the belief that the pills are helping to keep an otherwise out-of-control disease at bay (Breggin xxiii).

Works Cited

“Atypical Antipsychotics.” Drugs.com. Drugs.com. n.d. Web. 20 Mar. 2016.

“The Basics of Frontotemporal Disorders.” National Institute on Aging. U.S. Department of Health & Human Services, June 2014. Web. 20 Mar. 2016.

“Bloodletting.”  Science Museum Brought to Life: Exploring the History of Medicine. Science Museum. n.d. Web. 20 Mar. 2016.

Breggin, Peter. Psychiatric Drug Withdrawal: A Guide for Prescribers, Therapists, Patients, and Their Families. New York: Springer Publishing Company, 2013. Print.

Bressert, Steve. “The Causes of Bipolar Disorder (Manic Depression).” Psych Central. Psych Central. Web. 20 Mar. 2016.

Fitzpatrick, Laura. “A Brief History of Antidepressants.” Time. Time Inc., 07 Jan. 2010. Web. 20 Mar. 2016.

Healy, David. Pharmageddon. Berkeley: U of California P, 2012. Print.

“Major Depression with Severe Impairment Among Adolescents.” NIMH RSS. National Institutes of Health. n. d. Web. 20 Mar. 2016.

“Major Depression with Severe Impairment Among Adults.” NIMH RSS. National Institutes of Health. n. d. Web. 20 Mar. 2016.

Whitaker, Robert. Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. New York: Broadway Books, 2015. Print.

~~~

That concludes my paper. Keep in mind that due to restrictions on length, this was a very cursory treatment of the subject. I strongly encourage you to do your own research. Check out my sources, especially Whitaker’s book. Visit Dr. Breggin’s website and see what he has to say about psych meds and withdrawal from them. And look into the very effective alternatives to psychiatric medications, which I detail at length in my post on Med-Free Bipolar. This information is for anyone, with any mental illness, on any psychiatric medication. If you or a loved one has been diagnosed with an illness and prescribed psych meds, please, please look into this further. You owe it to yourself and your loved ones to be armed with knowledge so you can take the best care of yourselves that you can.

If you have any questions, please leave a comment and I will do my best to find you an answer!

Until next time, take care and God bless,

Kasani

 

The Advantage of Suffering – Part 1: Offering it Up

“Brothers and sisters, I am now rejoicing in my sufferings for your sake, and in my flesh I am completing what was lacking in Christ’s afflictions for the sake of his body, that is, the church.” ~ Colossians 1:24

Suffering is an unfortunate fact of life, and people with mental illnesses experience their fair share of it. The suffering is compounded for those with comorbidity (when a person has two or more illnesses at the same time; fibromyalgia, for example, often occurs in patients with mood disorders) or when personal tragedy strikes. There are no easy answers to the problem of suffering, although a number of excellent books have been written on the subject (Making Sense out of Suffering by Peter Kreeft and The Problem of Pain by C.S. Lewis are two examples). There’s nothing I can tell you that hasn’t been said more eloquently and with better insight by someone else, but I’m hoping to offer you a way of looking at your suffering that allows you to make use of it to achieve something positive.

First off, allow me to chuck a few assumptions out the window. I’m not going to elaborate on the idea that “what doesn’t kill you makes you stronger.” My friend and I have a joke that according to that rule we should both be able to bench-press semitrailers by now. The saying has some credence: pain changes you, often for the better. But not always. Then there’s the saying that “pain is just weakness leaving the body.” To be blunt, I think that’s one of the stupidest sayings in existence, and anyone who tosses it at me receives a withering glare. Pain creates weakness, not the other way around. I’m not talking about athletes and soldiers who have to physically push themselves to the breaking point to achieve a goal. That kind of pain does make you stronger, in a very literal sense. You become physically tougher, with better endurance and better abilities.

Mental illness doesn’t do that.

Depression leaves you curled in a ball of self-loathing pain on the floor, unable to even decide which clothes to wear and lacking the energy to put them on anyway. Hypomania takes your thoughts, shakes them up like a bottle of pop, and makes it impossible to remain seated long enough to read one page of a textbook (which wouldn’t have worked anyway thanks to your racing thoughts), and if it progresses to full-blown mania you might get to spend some time in a psych ward. Anxiety gives you panic attacks that leave you paralyzed, unable to breathe, unable to act, so terrified and miserable that you’re afraid you’re dying. ADHD does the same thing to your thoughts as hypomania, except it’s 24/7, 365 days a year, and people blame you and make fun of you for struggling with a disorder that many of them don’t even think is real. People with schizophrenia suffer through hallucinations and delusions that very few people can even begin to comprehend. People with borderline personality disorder struggle with the lonely misery of alienating the people they love because of their behavior, which the disorder makes very difficult to control.

The list goes on and on, and outside of a Christian context, it can be difficult to find positive things within that mire of unpleasantness. There are some: You might develop coping mechanisms that give you strength. You might get used to your disorder and become more resilient to its effects. You might become more compassionate towards the suffering of others. Or not. Ultimately, mental illness makes life a lot harder than it would be otherwise, and to what purpose? How can there be an advantage to suffering? How can you possibly turn abject misery into something good? Unless you’re coming at it from a Christian perspective, I don’t think you can.

Now, when it comes to Christianity and suffering, one of the first objections to God that atheists and agnostics toss out is that very thing: why would an all-powerful, all-good and loving God allow suffering in the first place? I don’t claim to have the answer to that, but this post by Tianna Williams does a lovely job of tackling the subject. For now, I want to offer some concrete suggestions to believers about how suffering can be put to good use. These will not take away your suffering. They will simply give it a purpose, and that can make it easier to bear.

There are two concepts in particular I want to discuss. One of them is Purgatory, and I’ll be attempting to tackle that in Part 2 of this post. As far as I know, Protestants don’t believe in it, so if you’re Protestant then that might not be of much use to you. But there’s a lot of confusion and misunderstanding revolving around the concept of Purgatory and I might be able to clear some of that up for you, so I encourage you to check it out anyway. The other concept can apply to Christians of any denomination, without question, although I’m not sure if it’s something that is discussed much outside of the Catholic church. I’ll tackle that concept first.

If you’re Catholic, you’ve probably heard of the idea of “offering up” your suffering to God for a purpose. Or you might not have. A few years ago, I had heard about it, but for a long time I had no understanding of its value. I wasn’t close enough to God to feel inclined to try it, especially when I was in the midst of intense suffering. It was an airy-fairy sort of subject that sounded to me like a half-hearted consolation prize handed out by people who didn’t know what else to say to someone in pain. I’ve since revised that opinion. Part of my confusion came from not knowing how to offer my suffering up. It wasn’t as if I could grab it off a shelf and give it to God. I also couldn’t understand how offering God my suffering could have any value. Suffering was forced on me against my will. It wasn’t as if I was making any special effort to do something for God by experiencing it. And then there was the question: “If I offer my suffering up, does that mean I can’t ask God to take it away?”

All of this conspired to keep me from exploring the subject. I also, deep down, still resented God a little for having to deal with the suffering in the first place. If you resent God for your suffering then it’s pretty hard to make any use of it at all. It took me a long time to accept the grace that allowed me to pull that deeply rooted weed out of my heart. But once it was gone, I received a whole new dimension to my world-view. Christ’s suffering and death redeemed the entire world. He died once, for all. But that doesn’t make all of the suffering in the world that’s come since his death obsolete and useless. Suffering has merit.

“Dear in the eyes of the Lord is the death of his devoted” ~Psalm 116:15

Other versions of the Bible read: “Precious in the eyes of God is the death of his saints.” It means the same thing. God values our suffering. He understands deeply just how much we hurt. It moved him to send his only begotten Son to earth to die for us on the cross. It gave our suffering a purpose. Because Jesus opened up the gates of heaven for us, we can join our suffering to his on the cross and do something with it. I didn’t understand this idea at first. How can I join my suffering to Christ on the cross? For some reason the idea didn’t ‘click’ with me. Then I was given another way of looking at it: because Christ used his suffering and death to pay the price for our sins, we can now go to God with our suffering and say, “You used your Son’s suffering to redeem me and the world. Please use my suffering too.”

God can make use of suffering. Don’t ask me how. I don’t know. But he does. When you’re praying for something, maybe for a loved one, or for the resolution of a problem of some sort, you can take whatever suffering comes your way and embrace it for the sake of that intention. You essentially put your money where your mouth is: “God, instead of resenting this bout of depression, I accept it willingly for the sake of my loved one who has turned away from you. Please make use of it to guide her home.” Now, this doesn’t mean you can’t pray for God to take the suffering away. You can. But by accepting it with patience for as long as you’re forced to endure it (or at least making an effort to do so; it isn’t easy) you gain great merit for yourself and for the intention you’re offering it up for. (You can also offer it up as a penance or mortification, but I’ll discuss that in a later post.)

This is one of those things that’s easier said than done. In theory, it’s an exciting possibility: God used his Son’s suffering to redeem me, so he must be able to use my suffering to accomplish something too! In the same breath, we have to keep in mind that we aren’t Jesus. He was a perfect, innocent human being without blemish (not to mention, he was also God). He didn’t deserve any of the suffering he endured on this earth, but he embraced it anyway for our sake. No amount of suffering on our part will ever come close to being worth that kind of merit. Despite being redeemed by his death, we are still sinful creatures. But our suffering can still have great worth when we attempt to imitate Christ by picking up our cross and following him.

This idea also plays into my discussion of Purgatory in Part 2 of this post.

Until then, take care and God bless!

Kasani