The worst philosophical defense of abortion you’ll ever hear

August 9, 2017

In a recent interview, Elizabeth Harman, a professor of philosophy at Princeton University, presents what is likely to be the worst defense of abortion ever made by a reputable philosopher.

Although I’ll be quoting Harman verbatim throughout this article, I recommend spending the five minutes to watch the video. It is truly one of the most jaw-droppingly incoherent cases for abortion you’ll ever hear.

To recap, Harman says she is defending the “liberal position about early abortion”: that there’s “nothing morally bad” about early-stage abortions. Harman’s position is that among “early fetuses” there are two “very different kinds of beings.” She claims that she and her interviewers “already had moral status then”—that is, as early embryos—“in virtue of our futures.” Harman’s claim is that they were all “the beginning stages of persons.”

Ironically, Harman’s view is based in part on a famous, reputable argument against abortion, one that claims what makes killing inherently wrong is that it deprives a victim of their future experiences. She also concedes that the early embryo does indeed have moral status because it is the beginning stage of a person—just as infancy, adolescence, and adulthood are later stages of a person.

But Harman then adds a strange qualifier: the early embryo only has moral status if it lives. “[S]ome early fetuses will die in early pregnancy,” says Harman, “either due to abortion or miscarriage. And in my view that’s a very different kind of entity. That’s something that doesn’t have a future as a person and it doesn’t have moral status.”

Before we continue, let’s consider the implications if we applied her “different kinds of beings” principle to one of the other “stages of persons.”

Imagine two children, Jack and Jill, who are in a children’s hospital being treated for a serious illness. The doctors tell Jack he has been cured and can go home tomorrow, but they tell Jill her disease has progressed and she is expected to die tomorrow. Jack has a future, while Jill does not. According to Harman, Jack is a being with moral status (because he will continue to have a future), while Jill is not only a being without moral status right now (because she does not have a future) but a being who never had moral status at all.

That’s a strange conclusion, but it gets even more bizarre. In the example above, Jill’s condition is similar to miscarriage in that her moral status is changed by a natural death. But Harman argues that moral status also changes because of the decision to have an abortion. So in Jill’s case, she would cease to have moral status—and indeed would never have had moral status—if her doctor decided to murder her.

But that can’t truly be what Harman is claiming, can it? Could she really be claiming that our moral status depends on whether or not someone is planning to kill us? Sort of. As we’ll see, she adds a qualification that she believes distinguishes the early embryo from other stages of human development.

Her interviewers then ask if we can only determine if the being had moral status after it ceased to have a future. “Right, so there’s the real question of ‘How can we know?’” says Harman, “Well, often we do know. If we know that a woman is planning to get an abortion, and we know abortion is available to her, then we know that fetus is going to die. It’s not the kind of being like the fetuses that became us. It’s not something with moral status.”

To clarify, Harman thinks she is not claiming that the action of the pregnant woman determines whether the embryo has moral status. She is merely saying that if the child is going to die, then the child no longer has moral status—and never did. We’ll consider her reasoning in a moment. But let’s finish hearing her claims.

“Often we have reason to believe that the fetus is the beginning stage of a person,” says Harman. “If we know that a woman is planning to continue her pregnancy, then we have good reason to think that her fetus is something with moral status. Something with this future as a person.”

At this point one of the interviewers, James Franco, points out that this sounds like a circular argument: the permissibility of abortion depends on the moral status of the embryo, and the moral status of the embryo depends on whether the woman chooses to have an abortion.

Harman says that’s not the argument she’s making. She says, “So you [James] have moral status and, in my view, back when you were an early fetus you had moral status. But it’s not that aborting you would have been wrong. Because if your mother had chosen to abort her pregnancy, then it wouldn’t have been the case that you would have had moral status because you would have died as an early fetus. So she would have been aborting something that didn’t have moral status.”

Let’s outline what Harman is claiming:

1. James now has moral status and had it as an early embryo. Let’s say that James has moral status now, at time Z, and also had it then, at time X.

2. However, if James’s mother had chosen to kill him in between time X and Z—let’s say that she aborted him at time Y—then James would have never existed at time Z (and so could not have moral status) but would also not have had moral status at time Y.

3. James had moral status if and only if he did not die, which is dependent on whether his mother decided not to abort him. Once she decided at time Y to abort him, he no longer had moral status at time X.

Harman clarifies that a child only has moral status if he does not die. “It’s a contingent matter if you have moral status,” says Harman, “you actually have moral status but you might not have counted morally at all if you had been aborted. You would have existed but you would have had this very short existence in which you would not have mattered morally.”

What Harman is saying is that events in the future affect the moral status of persons in the past. If the mother decides to have an abortion, then the child will die and thus he never had moral status at time X. If the mother decides not to have an abortion, then the child will live and thus always had moral status, including at time X.

How is this possible? Why is the moral status of the child contingent on whether the child dies because the mother decided to have an abortion?

As Harman explains, “Just given the current state of the fetus, it’s not having any experiences, there’s nothing about its current state that would make it a member of the moral community. It’s derivative of its future whether it gets to have moral status. So it’s really the future that endows moral status on it. And if we allow it to have this future, then we’re allowing it to be the kind of thing that now would have moral status. So in aborting it, I don’t think you’re depriving it of something it independently has.”

Harman here falls back on the tired old functionalist arguments for abortion. The embryo doesn’t yet have certain faculties necessary for moral status (consciousness, experiences, etc.) and thus can only acquire these faculties if the child lives. If you kill the child, though, it can never acquire these faculties and thus never had moral status.

The argument Harman makes in the video is based on a paper she published in the journal Philosophy & Public Affairs. In that paper she says we should deny the claim that, “For any two early fetuses at the same stage of development and in the same health, either both have some moral status or neither does.” Her reasoning is based on what she deems the Actual Future Principle: “An early fetus that will become a person has some moral status. An early fetus that will die while it is still an early fetus has no moral status.” She says the Actual Future Principle leads to the following conclusion: “The very liberal view on the ethics of abortion: Early abortion requires no moral justification whatsoever.”

It may seem that Harman’s argument for abortion still relies on the circular reasoning we mentioned above (i.e., the permissibility of abortion depends on the moral status of the embryo, and the moral status of the embryo depends on whether the woman chooses to have an abortion). But her argument is even less coherent than that. She preemptively responds to this objection by saying:

First, the objector is right that "you just can't lose" if you have an abortion. As I have argued, the Actual Future Principle implies the very liberal view on abortion. Therefore, according to the Actual Future Principle, no moral justification is required for an early abortion.

In other words, an early fetus that will become a person has some moral status but an early fetus that will die while it is still an early fetus has no moral status.

So there is no moral justification necessary for killing the early fetus since a fetus that dies has no moral status.

(Most professors wouldn’t allow a freshman taking Philosophy 101 to attempt to pass off this circular reasoning as a reasonable argument. Yet somehow it made it into a peer-reviewed philosophy journal.)

Harman’s entire argument is rooted in the idea that the current moral status of certain beings is dependent on what other people do to them in the future. If the child is killed, then it never had moral status since you can’t have moral status as an embryo if you do not have a future.

This argument utterly fails as a defense of abortion. But that’s not really Harman’s point. Her argument is not meant to justify abortion (which it cannot do because it’s based on circular reasoning) but to give a woman who wants to have an abortion a justification to ignore her conscience:

There is something upsetting and saddening about having an abortion, for many women, which is independent of uncertainty about the choice itself. It has seemed that the only way to explain these experiences is by saying that these women are recognizing their moral responsibility for a morally significant bad event, the death of the fetus. The very liberal view blocks this explanation.

The reason many women regret their abortions is that their conscience bears witness that killing one’s child is morally wrong, for “the work of the law is written on their hearts” (Rom. 2:15). Harman is attempting to give them a way to sear their conscience (1 Tim. 4:2) so that they will not have to recognize the natural guilt we feel in killing our own children. All that Harman has done, though, is torture logic and reasoning to justify the killing of children’s futures.

Addendum: The best rebuttal to Harman was offered a decade before her paper was published. In 1989, philosopher Donald Marquis provided an intriguing argument for why abortion is wrong, one that relies on many of the same premises Harman herself accepts. Marquis circumvents the discussion of fetal personhood and examines the question of what makes killing wrong. According to Marquis, this is the question that needs to be addressed from the start:

After all, if we merely believe, but do not understand, why killing adult human beings such as ourselves is wrong, how could we conceivably show that abortion is either immoral or permissible.

Marquis concludes that what makes killing inherently wrong is that it deprives a victim of all the “experiences, activities, projects, and enjoyments that would otherwise have constituted one’s future.” It is not the change in the biological state that makes killing wrong, says Marquis, but the loss of all experiences, activities, projects, and enjoyments that would otherwise have constituted one’s future (hereafter we will refer to these as EAPE).

These EAPE are either valuable for their own sake or lead to something else that is valuable for its own sake. When a victim is killed, they are deprived not only of all that they value but of all that they would have valued in the future. Therefore, what makes the killing of any adult human being prima facie wrong is this loss of future EAPE.

This has obvious implications for abortion. Marquis concludes that:

The future of a standard fetus includes a set of experiences, projects, activities, and such which are identical with the futures of adult human beings and are identical with the futures of young children. Since the reason that is sufficient to explain why it is wrong to kill human beings after the time of birth is a reason that also applies to fetuses, it follows that abortion is prima facie morally wrong.

Because Marquis defends the argument in detail, I won’t rehash the points he makes in response to objections. I recommend that anyone who finds fault with the conclusion read the paper in its entirety.

Joe Carter

Joe Carter is the author of The Life and Faith Field Guide for Parents, the editor of the NIV Lifehacks Bible, and the co-author of How to Argue Like Jesus: Learning Persuasion from History’s Greatest Communicator. He also serves as an executive pastor at the McLean Bible Church Arlington location in Arlington, Virginia.

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity are reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, often mimicking or exceeding human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and those of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision-making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24