
Understanding ethical systems: Biblical ethics (Part 2)

January 26, 2021

Editor’s note: This is the second article in a series on what Christians should know about ethical theories. The first article and future articles can be found here.

In this series, we’re looking at several of the most common ethical systems within normative ethics (such as deontology, consequentialism, and virtue ethics), considering their strengths and weaknesses, and comparing them to a baseline standard, which we’ll call “biblical ethics.” The first article explained what biblical ethics is and how we know an action is moral. In this article we’ll examine moral decision-making, including how we know which biblical rules or principles apply to a given situation and what we do when moral acts conflict.

How do we know which rules or principles apply to a given situation?

For the purposes of moral decision-making, the Bible should be understood as a revelation of God’s commands, principles, and virtues. God’s moral instruction comes to us in the form of commands and principles and is also revealed in Christian virtues and examples (particularly in the example of Jesus). We should therefore prioritize commands because they help us to most clearly understand God’s standards for our conduct. They also help us determine how principles and virtues are to be applied.

Within Scripture we find two basic categories of commands: broad (or general) commands and narrow (or specific) commands. Broad/general commands typically apply to many situations, such as the command to love our neighbor, and always apply in some way to all cultures and all contexts.

Narrow or specific commands relate to a particular circumstance, often in a culture that differs from our own. An example is Deuteronomy 22:8, which says, “When you build a new house, make a parapet around your roof so that you may not bring the guilt of bloodshed on your house if someone falls from the roof.” An application in our day might be to build a fence around your backyard pool so that a neighbor’s child doesn’t fall in and drown.

Narrow commands might not always apply to all cultures and all contexts. In some cases (as with the example above) there might be a parallel application. Narrow commands are similar to “case law” (i.e., law as established by the outcome of former cases) in that they give us paradigmatic examples for situations we might encounter.

In determining how a command applies we must consider the reason for the command. If the reason for the command is a theological principle that is always true, then the rule will almost always apply today. As a general rule, if the Old Testament gives a moral command it is still in effect unless later canceled, either explicitly or implicitly, in the New Testament.

Sometimes it is rather obvious how a command in Scripture can be applied. But oftentimes, to determine whether an action or circumstance is similar to an action judged to be wrong in Scripture, we must use analogical reasoning. In his essay “The Place of Scripture in Christian Ethics,” James Gustafson states the commonly accepted method of scriptural analogy:

Those actions of persons and groups are to be judged morally wrong which are similar to actions that are judged to be wrong or against God’s will under similar circumstances in Scripture, or are discordant with actions judged to be right or in accord with God’s will in Scripture.

An example of how to use analogical reasoning would be to consider the question of whether abortion is immoral. Our first step would be to ask, “What ethical rules or principles apply in this situation?” For this question, the answer is rather straightforward since the Bible has a clear command that prohibits the taking of innocent life.

The command was given by God to Moses as one of the Ten Commandments on two separate occasions (Exodus 20:13 and Deuteronomy 5:17). In the New Testament, we also find the commandment reconfirmed by Jesus (Matthew 5:21), and reiterated by his apostle, Paul (Romans 13:9). As pastor-theologian Kevin DeYoung notes, the sixth commandment prohibits much more than just cold-blooded, premeditated murder. It prohibits killing or causing to be killed, by direct action or inaction, any legally innocent human.

An elective abortion (as opposed to a spontaneous abortion, or miscarriage) is the killing of an innocent human being. Based on scientific knowledge of human development, we know a human embryo/fetus is an actual human being and that human life begins at fertilization/conception. Several passages in the Bible also strongly suggest that human life begins at conception (cf. Job 31:13-15; Psalms 51:5; 139:13-16; Matthew 1:20). Because elective abortion unjustly takes the life of a defenseless human being, abortion is prohibited by God under the sixth commandment.

What do we do when moral acts conflict?

There may be times when two or more moral commands or principles appear to be in conflict. An oft-used example is the “Nazi at the door” problem:

Imagine that you are living in World War II Germany and are hiding a family of Jews in your basement. A Nazi SS soldier comes to your door and asks if there are any Jews in your home. On the one hand, you know it is morally wrong to lie. On the other hand, you also know it would be morally wrong to answer in a way that would get the family killed. What should you do?

There are three ways to resolve this issue. The first is to determine whether there is an actual moral conflict. The second is to conclude that true ethical conflicts cannot exist. The third is to determine the hierarchy of commands.

Many Christians—including me—would say that in this particular situation there is no moral conflict because there is no lie being told. A lie is an intentional falsehood that violates someone’s right to know the truth. I’m convinced there are cases in which people forfeit their right to know the truth. The Nazi at the door has forfeited his right to know the truth about whether you have Jews in your home because he has a nefarious intent. It would be similar to the Hebrew midwives who deceived Pharaoh when he wanted to kill all newborn male babies (Exodus 1:17–21).

If there were an actual moral conflict (because you believe that failing to acknowledge the Jews hidden in your home would be lying to the Nazi), then we would have to apply the second or third approach. The second approach is called “non-conflicting absolutism.” It denies that a true ethical conflict can even exist and claims that any perceived conflict is a result of human misinterpretation. Under this view, if we had a perfect view of right and wrong, any illusion of conflict would be dispelled. The problem, of course, is that we have no perfect view, so it is not clear how we would know what decision to make under this perspective. This is why non-conflicting absolutism is rarely held by Christian ethicists.

The third approach is called “hierarchicalism” (or graded absolutism). This view holds that moral conflict can exist and that when ethical laws are in conflict a “right” choice is available through a hierarchy of principles found in Scripture. Under hierarchicalism, if one duty clearly has priority, we must choose that duty. Even if we believed that we would be lying to the Nazi and that it would be morally wrong, the duty to protect the lives of the Jewish family would take priority. According to hierarchicalism, as long as we follow the higher moral law, we are not held responsible for failing to keep the lower-level command. Also, under hierarchicalism, if both duties appear to have equal priority, we are free to obey either duty (though we need to be certain they are indeed of equal priority).

Hierarchicalism has solid biblical support, as even Jesus prioritized some rules and commands when they appeared to come into conflict (see, for example, Matthew 12:9-13). It’s important to remember that hierarchicalism is about selecting the better of two goods, not choosing the “lesser of two evils.” We are not called to choose any evil—even a lesser one—nor to do evil that good may come (cf. Romans 3:8).

What is the process for moral decision-making?

We can put all of this together to devise a seven-step process for making moral decisions:

This may initially seem like a labor-intensive process, too burdensome for use in real life. But once we develop a solid grasp of God’s commands and the relevant fact patterns, the process often becomes rather straightforward.

In the next article in this series, we’ll wrap up our focus on biblical ethics by considering the role of conscience. The remaining articles in this series will then compare and contrast other ethical systems—deontology, consequentialism, and virtue ethics—to the biblical standard.

Joe Carter

Joe Carter is the author of The Life and Faith Field Guide for Parents, the editor of the NIV Lifehacks Bible, and the co-author of How to Argue Like Jesus: Learning Persuasion from History’s Greatest Communicator. He also serves as an executive pastor at the McLean Bible Church Arlington location in Arlington, Virginia.
