
Understanding Ethics: Consequentialism

August 15, 2014

As Christians called to be salt and light within our culture, we must be able to analyze the ethical theories of our society in order to bring Scripture to bear upon them. Many of the decisions happening daily in our culture fall within the category of consequentialist ethics. While consequentialism is nothing new and much more extensive work has been offered on it than can be found in this article, my goal is to explain how a broad understanding of consequentialism is helpful for the Christian when parsing ethical decisions. Adding competency in consequentialism to the Christian’s tool belt will supply a ready filter useful in deconstructing an ethical decision.

Consequentialism focuses decision making upon the potential outcomes of an action; the outcome, coupled to some extent with intent, becomes the standard for morality. Situation ethics, utilitarianism, and pragmatism are examples of the larger school of ethical thought known as consequentialism. A crude, but often effective, way of characterizing consequentialism is to claim that the ends justify the means. In other words, if deemed necessary, seemingly unethical actions can be employed ethically so long as the outcome itself is ethical.

Initially, consequentialism seems intuitive, even natural. Don’t we always choose what we think is best? Shouldn’t we choose what we think is best? Biblical ethics, however, seeks those actions that God deems best. Instead of seeking what we think to be the best outcome, our duty is to seek the will of God in humble obedience. God’s will may happen to coincide with the outcome that we think is best, but it will be coincidental to the reason for the ethical decision. With this contrast between biblical ethics and consequentialism in hand, we can offer some general critiques of consequentialism.

The primary difficulty with consequentialism arises in deciding who determines the best action in any given situation. If the end determines the means, who determines what end ought to be sought? Various themes are offered, such as Jeremy Bentham’s utility principle or Joseph Fletcher’s love principle, but no theme can ever be considered anything but subjective. What objective feature of the universe demands that we love someone? Which universal aspect of reality points to utility as a good? Unless some objective, universal standard can be offered, any consequentialist ethic yields subjective ethics which are necessarily not binding upon others.

Secondly, if no objective standard exists, then how can one truly know which action is best? Consequentialism lacks a sufficient knowledge base from which to categorize good or bad. Unless one can see into the future, many actions must be recognized as presently ambiguous. Only a being with the attributes of God can be sure that he/she is making the proper decision.

Another way of stating this idea is that any perceived outcome is primarily dependent on one’s own experience and the best available evidence, facts, and information. Without much effort we can imagine wrong conclusions drawn from good evidence, good facts, and good information grounded in our previous experiences. Just consider any scenario in which an individual who is actually innocent, perhaps framed by some devious nemesis, is judged guilty by a jury of peers in a court of law based on evidence that does, in fact, point toward that individual’s guilt. Just as the jury in our thought exercise was technically incorrect in its decision to ascribe guilt, we too run just such a risk if our primary impetus for action is based on potential outcomes.

To show how to utilize consequentialism as a filter and to combat it biblically, consider the following scenario. Imagine a young man Joe seeking a pastor’s counsel. Joe has recently graduated from college with an economics degree and has been offered a great position in a large financial firm. Joe worked hard for his degree, his parents gave much to see him graduate, and his professors put their reputations on the line by recommending Joe for his newly acquired position. Joe tells his pastor that he has felt called as a missionary to a country hostile to the Gospel and evangelism. He worries that it would be unjust to “throw away” his parents’ sacrifices and stain his professors’ reputations. Nevertheless, he maintains that he is truly convicted to pursue this missionary opportunity. Which action is the ethical action for the Christian?

The consequentialist can give a variety of answers. If the guiding theme is self-preservation, then Joe should take the job with the financial firm because he will probably be killed in the foreign country, possibly without ever winning anyone to the gospel. Another answer could be based on the theme of utility; Joe could be of more benefit by earning a great living and donating large sums of money to organizations that contribute to struggling parts of the world than he could ever do by actually living there himself. He could even fund the sending of multitudes of missionaries to the very country in question which is surely better than his going himself. Then again, Joe could be murdered in any U.S. city just as easily as he could be murdered in a foreign country, so either decision could be the correct decision; Joe should simply do what makes him happy.

Hopefully you can see that the consequentialist has no firm basis for any of this advice. The proper biblical response would be to seek God’s guidance through prayer, petition, and fellowship with other believers, and then to follow the conviction of the Spirit. Since Joe feels convicted concerning a specific location and the Bible teaches us to make disciples of all nations, Joe should pursue his missionary calling.

John 11 offers two examples of consequentialist thinking. Mary says to Jesus in John 11:32, “Lord, if You had been here, my brother would not have died.” While that statement does not entail an ethical decision, it does exhibit a consequentialist mindset. We should not fault Mary for her sadness, but it is obvious that she assumes a longer life is better than a shorter life (leaving aside any sociopolitical concerns Mary may have had concerning income, etc.). Why is a short life bad? We can imagine dozens of morbid, painful scenarios that Lazarus might have had to endure had he lived, scenarios that would make death enviable. God’s will was to resurrect Lazarus for the glory of God, which is surely a good.

Next, the high priest Caiaphas says in John 11:50, “It is expedient for you that one man die for the people, and that the whole nation not perish.” While it is true that the Spirit intended this comment as a prophecy of Jesus’ crucifixion, Caiaphas certainly had no such intentions. Instead, he attempted to play a numbers game, saying that an innocent man should die so that the potential for further death would not arise. Consequentialism allows for the death of an innocent if it prevents more deaths, so Caiaphas would actually be justified in making the decision to seek Jesus’ death. It should be apparent that the numbers game always leaves one in an ethical fog. How does Caiaphas know that killing Jesus won’t incite Jesus’ followers to murder every Jew they can find? How does he know that the emperor wouldn’t convert if Jesus continued teaching, which would presumably be good for the nation? The subjectivity of consequentialism and ignorance of the future are clearly seen in Caiaphas’ thinking.

Paul Wilkinson

Paul was ordained at Millbrook Baptist Church in Aiken, SC on May 31, 2015. He graduated from Southern Baptist Theological Seminary in Louisville, KY with a PhD in Philosophy of Religion in that same month. After joining Brentwood Baptist Church in 2012, Paul taught a series of apologetics and theology …

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, to punish those who do evil, to uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity are reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24