How euthanasia came to Europe (part 2)

January 5, 2017

(Note: This is the second article in a two-part series. Part one can be found here.)

Over the past few decades the Dutch have expanded the scope of protected physician killing to include children. With their parents’ permission, a child between the ages of 12 and 16 may request and receive assisted suicide. (Initially, minors could obtain an assisted death even if their parents objected, but after domestic and international criticism the law was changed to require parental consent.) Even before Parliament made it legal to euthanize minors, doctors in the Netherlands took it upon themselves to end the lives of infants and others who lacked the capacity to consent to their own deaths but whose existence doctors or parents deemed “unfit.”

In October 2004, the Groningen Academic Hospital officially proposed a government policy—dubbed the Groningen Protocol—which would allow doctors to legally euthanize children under the age of twelve for conditions in which suffering was “so severe that the newborn has no hope of a future.” The hospital even admitted to administering a lethal dose of sedatives to four newborns in 2003. In the previous three-year period, fourteen other cases had also been reported by various hospitals to the Justice Ministry. No legal proceedings were ever taken against either the hospitals that condoned the practice or the doctors who carried out the killings.

The lack of prosecutions is hardly surprising considering the Dutch people’s attitude toward killing those deemed unworthy of life. A 1998 survey by the NIPO Institute found that 77 percent of the populace favored non-voluntary euthanasia, while 76 percent favored voluntary euthanasia. Although the one-point difference falls within the margin of error, it may also be attributable to the false belief that non-voluntary killing is used only as a last resort, while voluntary euthanasia can be administered for almost any reason. As reported in one Dutch documentary, a young woman in remission from anorexia was concerned that her eating disorder would return. To prevent a relapse, she asked her doctor to kill her. He willingly complied with her request.

Death for those ‘suffering from life’

The anorexia example is horrifying, but at least in that instance an actual physical illness was involved. As the most recent legislative proposal shows, some advocates of the practice consider even the presence of a debilitating illness or physical suffering too stringent a prerequisite for permitting euthanasia.

The Dutch Voluntary Euthanasia Society (DVES), for example, was generally pleased with the relaxation of euthanasia laws, but it was disappointed that the law continued to forbid the killing of people who are simply tired of living. “We think that if you are old, you have no family near, and you are really suffering from life,” said DVES spokesperson Walburg de Jong, “then [euthanasia] should be possible.”

Days after the change in the law, Dutch health minister Els Borst admitted in an interview that she had no problem with providing “suicide pills” for elderly citizens who were simply “bored sick” with living. (The public now seems to agree: a 2013 study found that more than one in five Dutch people believe euthanasia should be allowed for elderly people who are “tired of living.”) And a 2015 poll found that some Dutch doctors would be willing to euthanize a patient who was “tired of living, with medical grounds for suffering but in the absence of a severe physical or psychiatric disease.”

Perhaps the most significant shift in the public acceptability of voluntary euthanasia occurred in the summer of 1991, crystallizing around another important legal case. Psychiatrist Boudewijn Chabot treated a woman whom he gave the fictional name of “Netty Boomsma.” The woman was suffering from grief over the loss of her youngest son to cancer at the age of twenty. Her eldest son was also dead, having killed himself two years earlier after being rejected by his girlfriend. Boomsma, who had a long history of depression, approached Chabot with the understanding that he would assist her suicide if she did not change her mind about wanting to die.

Although the crushing grief over losing a child can last for years, Chabot treated Boomsma for only two months before fulfilling his promise. Four months after her son’s death, Chabot gave Boomsma the lethal agent she needed to kill herself. While listening to the same Bach flute sonata that had been played at her son’s funeral, the grieving mother took the medication and asked the psychiatrist, “Why do young kids want suicide?” Thirty minutes later she was dead.

With the aid of the psychiatrist, the mother was able to end her life and fulfill her desire to be buried between the graves of her two sons. In his defense, Chabot insisted that Boomsma was not depressed, nor even a real patient. She was, he claimed, simply a grieving woman who wanted to die. Many Dutch therapists insist that there is an obligation to assist in the suicide of a patient with suicidal ideation if treatment has not succeeded.

But Chabot provided only minimal treatment: The despairing patient became her own diagnostician, and the doctor simply acted as the deadly pharmacist. After Chabot reported the case to the coroner, he was prosecuted for violating Dutch law. The case was eventually appealed to the country’s supreme court, which upheld the precedent, set by the Leeuwarden criminal court in 1973, that pain relief which runs the risk of shortening life is acceptable when helping a patient suffering from a terminal condition.

The court found that Chabot was guilty of not having provided an adequate psychiatric review of the patient’s case before assisting with the suicide. However, the court imposed no penalty on Chabot, and the legal ruling established the precedent that physical illness was not a requirement for providing “pain relief” that ends a life when the request is voluntary, well-considered, and reviewed by a second physician. Suicidal depression became a terminal disease; psychic distress became a legitimate ground for doctor-assisted death.

While the Supreme Court’s decision was hailed as a victory by euthanasia supporters, it took more than ten years before the medical community openly agreed that neither a terminal illness nor physical suffering should be necessary for ending a patient’s life. After a three-year investigation, the Royal Dutch Medical Association (KNMG) concluded in January 2005 that doctors should be able to kill patients who are not ill but who are judged to be “suffering through living.”

Jos Dijkhuis, the emeritus professor of clinical psychology who led the inquiry, said that it was “evident to us that Dutch doctors would not consider [a request for] euthanasia from a patient who is simply ‘tired of, or through with, life.’” Instead, the committee agreed on the term “suffering through living,” because a patient may present a variety of physical and mental complaints that can lead them to conclude that life is unbearable. “In more than half of cases we considered, doctors were not confronted with a classifiable disease,” said Dijkhuis. “In practice the medical domain of doctors is far broader . . . . We believe a doctor’s task is to reduce suffering, therefore we can’t exclude these cases in advance. We must now look further to see if we can draw a line and if so where.”

No boundaries on dealing out death

Over a period of forty years, the Dutch have continued the search for where to draw the line with euthanasia, shifting from acceptance of voluntary euthanasia for the terminally ill, to voluntary euthanasia for the chronically ill, to non-voluntary euthanasia for the sick and disabled, to euthanasia for those who are not sick at all but are merely alcoholics or “suffering through living.”

While the initial impetus may have been a desire to expand the rights of those facing extreme suffering or imminent death, the effect has been to concentrate power in the hands of state-sponsored medical professionals. And while the justification for assisted death is usually the supposed well-being of the suffering patient, the Dutch have redefined natural dependency as an unacceptable or unwanted social burden.

The growing acceptance of euthanasia in the Netherlands has closely tracked the decline of Christianity in the country. In the mid-1960s, about 65 percent of the nation was Christian. Today, a nearly identical percentage (67 percent) claims no religious affiliation. Slightly more than 25 percent of the Dutch people are atheists, while only 17 percent believe in the existence of God.

The Dutch sought autonomy from God, which led to a radical embrace of autonomy for the individual. Not surprisingly, the rejection of the Author of Life has led to a Culture of Death in the Netherlands. Faced with the many pains, heartaches, and disabilities that eventually afflict most of us in one form or another, and having no ultimate Redeemer to turn to, the Dutch are resorting to euthanasia to quell their distress.

Euthanasia came to Europe through the agnosticism and atheism of the Netherlands. The experience on that continent should serve as a warning: when a nation ceases to believe in God, it embraces collective suicide carried out one person at a time.

Joe Carter

Joe Carter is the author of The Life and Faith Field Guide for Parents, the editor of the NIV Lifehacks Bible, and the co-author of How to Argue Like Jesus: Learning Persuasion from History’s Greatest Communicator. He also serves as an executive pastor at the McLean Bible Church Arlington location in Arlington, Virginia.
