
6 steps to making wise decisions about psychotropic medications

September 15, 2014

Let’s begin this discussion by placing the question in the correct category – whether an individual chooses to use psychotropic medication in their struggle with mental illness is a wisdom decision, not a moral decision. If someone is thinking, “Would it be bad for me to consider medication? Is it a sign of weak faith? Am I taking a short-cut in my walk with God?” then they are asking important questions (the potential use of medication) but they are placing them in the wrong category (morality instead of wisdom).[1]

Better questions would be:

In order to answer these kinds of questions, I would recommend a six-step process. This process will, in most cases, take six months or more to complete. But it often takes many months for doctors and patients to arrive at the most effective medication option, so this process does not extend the time it normally takes to find satisfactory medical treatment.

Having an intentional process is much more effective than making reactionary choices when emotional pain (pushing you to start medication) or unpleasant side effects (pushing you to stop medication) make you “just want to do something different.” With a process in place, it is much more likely that what you do will provide the information needed to make important decisions about continuing or stopping medication.

Preface: This six-step process assumes that the individual considering medication is not a threat to themselves or others and is capable of fulfilling basic life responsibilities related to their personal care, family, school, and work. If this is not the case, then more prompt medical intervention or residential care would be warranted.

If you are unsure how well you or a friend is functioning, then begin with a medical consultation or a counseling relationship. If you would like more time with your doctor than a standard diagnostic-and-prescription visit allows, ask the receptionist to schedule an extended appointment with your physician to discuss your symptoms and options.

STEP ONE – ASSESS LIFE AND STRUGGLE

Most struggles known as mental illness do not have a body-fluid test (i.e., blood, saliva, or urine) to verify their presence. We do not know a “normal range” for neurotransmitters the way we do for cholesterol. The activity of the brain is too dynamic for that kind of simple numeric test to be easy to obtain. Collecting samples of neurological fluid would be highly intrusive and more traumatic than the information would be beneficial. Brain scans are not currently cost-effective for this kind of medical screening and cannot yet give us the neurotransmitter differentiation we would need.

For these reasons, the diagnosis for whether a mental illness has a biological cause is currently a diagnosis-by-elimination in most cases. However, an important part of this initial assessment should be a visit to your primary care physician. In this visit you should:

As you prepare for this medical visit, it would be important to also consider:

STEP TWO – MAKE NEEDED NON-MEDICAL CHANGES

Medication will never make us healthier than our current choices allow. Our lifestyle is the “ceiling” for our mental health; we will never be sustainably happier than our beliefs and choices allow. Medication can correct some biological causes of our struggles and diminish the impact of environmental causes. But medication cannot raise our “mental health potential” above what our lifestyle allows.

Too often we want medication to make over our unhealthy life choices in the same way we expect a multivitamin to transform our unhealthy diet. We assume that the first step toward feeling better is receiving a diagnosis and a prescription. This may be the case, and there is no shame if it is, but it need not be our guiding assumption.

Look at the lifestyle, belief, and relational changes that your assessment in step one would require. If there are choices you could make to reduce the intensity of your struggle, are you willing to make them? Undoubtedly these changes will be hard, or you would have already made them. But they are essential if you want to use medication wisely.

As you identify these changes, assess the areas of sleep, diet, and exercise. Sleep is vital to the replenishing of the brain. Diet is the beginning of brain chemistry – our body can only create neurotransmitters from the nutrition we provide it. Exercise, particularly cardiovascular, has many benefits for countering the biological stress response (a primary contributor to poor mental health). Your first “prescription” should be eight hours of sleep, a balanced diet high in antioxidants, and cardiovascular exercise for at least thirty minutes three days a week.[2]

A key indicator of whether we are using psychotropic medication wisely is whether we are (a) using medication as a tool to assist us in making needed lifestyle and relational changes, or (b) using medication as an alternative to having to make these changes. “Option A” is wise. “Option B” results in over-medication or feeling like “medication didn’t work either” as we continually try to compensate medically for our volitional neglect of our mental health.

STEP THREE – DETERMINE THE NON-MEDICATED BASELINE FOR YOUR MOOD AND LIFE FUNCTIONING

This is an important, and often neglected, step. Any medication is going to have side effects. The most frequent reason people stop taking psychotropic medications, other than cost, is because of their side effects.

If we are not careful, we will merely want to feel better than we do “now.” Initially, “now” will be how we feel without medication. Later, “now” will be how we feel with medication’s side effects. In order to avoid this unending cycle, we need a baseline of how we feel when we live optimally without medication.

One of the reasons postulated for why placebos often have as beneficial an effect as psychotropic medication is the absence of side effects. Those who take a placebo get all the benefits of hope (doing something they expect to improve their life) without any unpleasant side effects. Getting the baseline measurement of how life goes when you simply practice “good mental hygiene” is an important way to account for this effect.

“As I practice medicine these days, my first question when a patient comes with a new problem is not what new disease he has. Now I wonder what side effects he is having and which drug is causing it” (p. 191). – Charles Hodges, M.D., in Good Mood Bad Mood

There is another often-overlooked benefit of step three. Frequently, people get serious about living more healthily at the same time that life has gotten hard enough to begin taking medication. This introduces two interventions (medication and new life practices), maybe three or four (often people also begin counseling or become more open with friends who offer care and support), at the same time. It becomes very difficult to discern which intervention accounts for their improvements.

Writing out your answers to these questions will help you discern if you need to move on to step four and make the needed assessment in step five.

STEP FOUR – BEGIN A MEDICATION TRIAL

If your struggles persist to a degree that impairs your day-to-day functioning, then you should seek out a physician or psychiatrist for advice about medical options. As you have this conversation, consider asking your physician the following questions:

These questions should help you work with your doctor to determine which medication would be best for you. Remember, you have a voice in this process and should seek to be an informed consumer of your medical treatment, just as you would be with any other product or service you purchase.

In this consultation you also want to decide upon the initial period of time for which you will remain on the medication (unless you experience a significant side effect). In determining this length of time, you would want to consider:

Once you determine this set period of time, your goal is to continue implementing the changes you began in step three while monitoring (a) the level of progress in your area of struggle and (b) any side effects from the medication.

STEP FIVE – ASSESS LEVEL OF PROGRESS AGAINST THE MEDICATION SIDE EFFECTS

Near the end of the trial period, you want to return to the life assessment questions you answered at the end of step three. Compare how you are able to enjoy and engage life at this point with your answers then. The questions you want to ask are:

The more specific you were in your answers at the end of step three, the easier it will be to evaluate your experience at the end of step five. At this point, try to be neither pro-medication nor anti-medication. Your goal is to live as full and enjoyable a life as possible. It is neither better nor worse if medication is or is not part of that optimal life.

STEP SIX – DETERMINE WHETHER TO REMAIN ON MEDICATION

At this point in the process there are several options available to you; this is more than a yes-no decision. But any option should be decided in consultation with your prescribing physician or psychiatrist. You can decide to:

Regardless of what you choose, by following this process you can have the assurance that you are making an informed decision about the best choice for you.

[1] For more on understanding the choice about psychotropic medications as a wisdom issue, I would recommend the lecture “Understanding Psychiatric Treatments” by Michael Emlet, MD at the 2011 CCEF conference on “Psychiatric Disorders.”

[2] Additional guidance on this kind of “life hygiene” can be found at bradhambrick.com/burnout.

This article was originally published here.
