
The Death of Capital Punishment?

February 24, 2016

On Sunday, Pope Francis made a stunning appeal to the world’s governing class to abolish the death penalty. In our corner of the world, the machinery is already in place to do just that. Just last year, on the Supreme Court’s final day in session, Justice Stephen Breyer issued a forty-page dissent challenging the constitutionality of the death penalty.

In addition to showcasing judicial cleverness — choosing the Court’s final day to attack that most final of punishments — Breyer’s opinion telegraphed what will be the Court’s next offensive against what it deems a relic of outmoded morality. Indeed, there is reason to believe the death penalty’s days are numbered. Consider the following: Breyer’s dissent was joined by Justice Ruth Bader Ginsburg, which means only three more votes are needed to secure a victory. Though they didn’t sign their names to Breyer’s dissent, is there any doubt as to how Justices Elena Kagan and Sonia Sotomayor will vote? If those votes are indeed secured, only one additional vote is needed to abolish the death penalty. Chief Justice John Roberts and Justices Samuel Alito and Clarence Thomas won’t support the measure, making it likely that Justice Anthony Kennedy is again left to decide matters. Justice Antonin Scalia’s death throws some uncertainty into the question, since it is not yet known who will replace him. But it cannot be denied that abolitionism now has a plausible path to victory. Breyer’s dissent should therefore not be seen as a purely academic exercise, but rather as the opening salvo of the Court’s coming examination of capital punishment.

Breyer’s opinion thus coincides with a positive political moment for abolitionism: in addition to the Pope’s recent comments, Republican support for the practice is declining. On top of that, a report from last year suggested President Obama could soon try to use his political capital to help overturn the death penalty, a practice he finds “deeply troubling.” Although Democratic frontrunner Hillary Clinton affirmed the death penalty in a recent debate, her support was so qualified it hardly registered as an endorsement. Now is therefore the perfect time to ask: Should conservatives support the death penalty?

Throughout history, capital punishment has been justified on a number of grounds. Some, such as the philosopher Immanuel Kant, adduce philosophical arguments to show that it is the punishment that justice demands. Others, such as the economist Isaac Ehrlich, support capital punishment on empirical grounds, arguing that it is better than alternative punishments in securing socially desirable outcomes, such as the safety of the population. Conservatives have challenged both claims.

In an article for Bloomberg View, Ramesh Ponnuru makes his case for why we shouldn’t be executioners.

The state has the legitimate authority to execute criminals, but it should refrain if it has other means of protecting people from them…. We shouldn’t execute people. But not because we might hurt people in the process, and not even because we might on some very rare occasion kill innocent people. We shouldn’t execute people who are unquestionably guilty because we don’t have to do it.

Because Ponnuru was writing in the wake of Oklahoma’s infamous botched execution of 2014, readers might have expected his argument to rely on the gruesome reports of the pain experienced by the murderer. But Ponnuru’s preeminent concern is the safety of society at large, a concern reflected in his view of punishment as essentially a means of optimizing that safety. Since we can achieve this outcome without recourse to the death penalty, he reasons, we should abolish the practice.

Unfortunately, Ponnuru misunderstands the nature of punishment. Asking “Should we allow capital punishment?” is a good question, but it follows the conceptually prior question, “How should we view punishment?” If we want clarity concerning particular forms of punishment, we should first clarify our stance on punishment more generally.

Let’s make a distinction between backward-looking and forward-looking conceptions of punishment. These are not value judgments; “backward” here is not intended pejoratively, and “forward” is not being used as a term of praise. Rather, a backward-looking conception is one that focuses on the act — which has already occurred, and is thus in the past — in order to determine which punishment is needed to redress the balance of justice in society. A forward-looking view, for its part, asks how we can, through the instrumentality of punishment, harness the state’s monopoly on legitimate power to generate socially beneficial outcomes. This is not a new debate: for a long time Immanuel Kant’s retributive theory of punishment has gone up against John Stuart Mill’s utilitarian conception.

Ponnuru’s conception is entirely forward-looking. Though he is not a utilitarian, Ponnuru makes the safety of society his main criterion, and that criterion overrides any consideration of the nature of the act committed. Of course, looking back at the act itself does not force one to support the death penalty. Ponnuru could look back and find that the act in question only merits a sentence of life without parole. The problem is that Ponnuru does not build into his calculus the importance of looking back. The result is a view that sees punishment primarily as a vehicle for social improvement, and only secondarily as a justice-restoring mechanism.

But of course punishment need not be viewed unidirectionally. Ponnuru is wrong not because he adopts a forward-looking conception of punishment, but because he ignores the most salient aspect of punishment, which is that it serves as a response to an act which deserves or merits it. Again, his mistake is not that he sees in the mechanism of punishment the capacity to achieve socially beneficial outcomes. His error is to fail to appreciate punishment’s primary role as a justice-restoring apparatus. As the philosopher Igor Primoratz wrote, “the offense is the sole ground of the state’s right and duty to punish.”

The deficiencies in Ponnuru’s view are teased out by a simple thought experiment that has its origins in Kant. Under Ponnuru’s understanding, if suddenly everyone on earth disappeared save for one person who had committed a murder and was awaiting execution, the justification for the murderer’s death sentence would have disappeared along with the people. Since there is no longer a society to keep safe, there is no longer a reason to punish the wrongdoer. But there is something off about this result. Many of us would conclude that the murderer has done something that merits punishment, regardless of whether there is a society left to deliberate about it or not.

Over and above his neglect of a retributive rationale for punishment, Ponnuru also fails to give any support for his empirical claim that the death penalty is not needed in order to maximize safety. On the contrary, a regime which bans capital punishment is one in which the surpassing value of life is insufficiently respected. To keep the death penalty option off the table is to tell society that wanton destruction of human life is not taken seriously enough to warrant the forfeiture of the offender’s life. This can’t help but lead to a depreciation of life in other respects. By contrast, when the state holds up life as so valuable, so precious, that to wrongly take it is to forfeit one’s own, it broadcasts to all its members the significance of life. Thus it’s the death penalty and not life imprisonment which best reflects life’s weightiness and resists its devaluation, and a state which includes it joins that class of governments for whom murder is not just murder, but desecration.

Jay Sekulow, of the American Center for Law and Justice, is another conservative who opposes the death penalty. He explains:

I’m opposed to the death penalty…because…the taking of life is not the way to handle even the most significant of crimes…Who amongst anyone is not above redemption? I think we have to be careful in executing final judgment. The one thing my faith teaches me—I don’t get to play God. I think you are short-cutting the whole process of redemption…I don’t want to be the person that stops that process from taking place.

Sekulow’s reasoning is in one sense commendable but in another quite baffling. He introduces theological considerations, a move that is most welcome in a world aggressively hostile to their application in the public square. Yet since redemptive concerns are not, under any theological framework I’m aware of, relevant to the justification of punishment in a legal sense, Sekulow’s challenge to the death penalty cannot be seen as a serious one. What do the redemptive prospects of the wrongdoer have to do with a punishment’s justification? The death penalty is not a ministry. Sekulow’s bizarre spiritual concerns aside, his argument is ultimately a moral one. He sees a moral problem with “the taking of life” in response to even the worst of crimes.

The basis for the death penalty, under a retributive theory of punishment, is the lex talionis, or eye-for-an-eye principle. Part of the reason for retributivism’s association with Judeo-Christian ethics comes from formulations such as Numbers 35:31, which says “You shall accept no ransom for the life of a murderer who is guilty of death; but he shall be put to death.” This suggests what makes a punishment right is its retributive function; in this and all cases, the offense requires a proportionate loss to be inflicted on the wrongdoer. Since human life is invaluable, no amount of money can be given as “ransom.” As the philosopher G. W. F. Hegel put it: “Since life is the full compass of a man’s existence, the punishment [for murder] cannot simply consist in a ‘value’, for none is great enough, but can consist only in taking away a second life.”

Contrary to Sekulow’s suggestion, what is most secure about capital punishment is its moral justification. What continues to be the strongest case against it stems from procedural or practical concerns, not from theoretical ones. Our biggest problem is that human error is ineradicable. We can put in place a blinding number of safeguards, we can implement all the accountability measures our imaginations can dream up, yet the possibility that new injustices will nevertheless occur cannot be ruled out. Justice Breyer’s recent dissent is awash with such instances. In 1972 the death penalty was ruled unconstitutional based on these very concerns. Since then, legislatures have rehabilitated it by implementing reforms intended to avoid the worries that led to its judicial disrepute all those decades ago. If Justice Breyer is right that today’s version violates the Eighth Amendment’s ban on cruel and unusual punishment (see Scalia’s rebuttal to Breyer’s application of the Eighth Amendment), it is unclear whether capital punishment will be able to enjoy another revival, since such a conclusion would involve pessimism about the death penalty’s inequitable and arbitrary implementation ever being eliminated.

Justice Breyer’s argument that it is unacceptable for racial, geographical, and economic factors to be more decisive in capital cases than the offender’s culpability should resonate with all of us. Still, we need to recognize that this is not an argument against the death penalty itself, but rather against our non-ideal implementation of it. As long as the death penalty’s greatest challenge remains fundamentally procedural rather than philosophical, the Court’s arguments won’t go any distance toward impugning the rightness of capital punishment. We’re all too aware by now that what the Justices declare is not necessarily what justice declares.

*The view expressed in this commentary belongs solely to the author and is not necessarily the view of the ERLC.

Berny Belvedere

Berny Belvedere has studied philosophy (Florida International University and University of Florida) and theology (Trinity International University and Knox Theological Seminary), and is a professor of philosophy at Florida International University, Miami Dade College, and St. Thomas University. He has written on ethics, politics, economics, pop culture, and more at …
