The Ethics of Prisoner of War Exchanges

June 3, 2014

This past weekend the lone American prisoner of war from the war in Afghanistan, captured by insurgents nearly five years ago, was released to American forces in exchange for five Taliban detainees held at Guantánamo Bay, Cuba. The five detainees included two senior militant commanders said to be linked to operations that killed American and allied troops, and implicated in the murder of thousands of Shiites in Afghanistan.

As American Christians we should pray for the released Taliban leaders and be thankful this young soldier will be reunited with his family. But we can and should also use this incident to reflect on the ethics of prisoner of war exchanges.

The Purpose of Capturing Prisoners

In the Marine Corps Doctrinal Publication Warfighting, there is one sentence that succinctly explains the goal of warfare: “The object in war is to impose our will on our enemy.”

The role of military forces in warfare is therefore simply to provide “organized application or threat of violence” to make the enemy conform to a nation’s will. When militaries clash on the battlefield, the overall objective on each side is to incapacitate this threat in order to prevent the enemy from being able to impose their will upon your side. Generally speaking, the primary means of carrying out this task of incapacitating the enemy is by killing, maiming, or capturing their human forces.

If you kill an individual enemy, you’ve removed one troop from the fight. But if you maim an individual, other troops will likely come to their aid, removing multiple threats (at least temporarily). The third, and most humane, approach is to capture as many of their forces as possible without having to kill or maim.

Even though the capture of large numbers of prisoners can produce logistical problems, it is still one of the most direct ways to impose one’s will on the enemy. During the Gulf War (1991), for example, thousands of Iraqi soldiers surrendered to American forces, a move that prevented greater loss of life and considerably shortened the conflict.

Why Prisoners are Exchanged

Prisoners of war are usually held until the end of hostilities to prevent them from returning to the fight. If they are unable to fight or lead troops in combat, there is no reason to keep them prisoners. Under the Geneva Conventions, any prisoner who is seriously wounded or seriously sick, and who therefore cannot contribute to the war effort, is entitled to repatriation to their home country or internment in a neutral country.

In some wars during the eighteenth and nineteenth centuries (e.g., the American Revolutionary War, the Napoleonic Wars, the War of 1812), able-bodied combatants of similar rank would be exchanged even while the war was ongoing. In later wars, however, most releases and exchanges occurred only after the fighting had ended. The reason is that there is no obvious benefit to swapping out soldiers: if combatants on both sides are reentering the ranks and returning to the battlefield, both sides are only delaying the primary objective of warfare.

While this is a prime strategic reason for not engaging in prisoner swaps, it can also serve as moral justification.

The Moral Factor We Should Consider: Average Lethality

Most individual combatants have a negligible effect on the outcome of a war. In fact, the average individual soldier is capable of causing only a limited amount of injury or death on the battlefield. To quantify this effect, let’s call this ability to maim and kill the Average Lethality Factor (ALF). In modern wars, the ALF is less than 1, since few individual soldiers actually kill or maim in combat. But let’s take a high estimate and say that it can range from 0 to 10, which would make the median ALF a factor of 5 (that is, the average soldier could possibly kill five enemy combatants during the course of a war). This means that if returned to combat, the average individual prisoner of war would be able to maim or kill an average of five enemy combatants. When exchanging prisoners, a nation would therefore want to swap those with similar ALFs: an ALF-1 would be traded for other ALF-1s, ALF-10s for other ALF-10s, and so on. A nation would not want to intentionally release prisoners who would upset the overall balance of the war effort in its enemy’s favor.
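The balancing rule described above can be expressed as a short sketch. This is purely illustrative: the function names and the idea of simply summing ALFs are assumptions for the sake of the example, not part of any actual exchange calculus.

```python
# Illustrative sketch of the ALF balancing rule: a swap is acceptable
# to a nation when the total lethality it recovers is at least the
# total lethality it releases back to the enemy.

def total_alf(prisoners):
    """Sum the Average Lethality Factors of a group of prisoners."""
    return sum(prisoners)

def swap_favors_us(released_to_enemy, returned_to_us):
    """True when the ALF we get back matches or exceeds the ALF we give up."""
    return total_alf(returned_to_us) >= total_alf(released_to_enemy)

# Trading one ALF-5 prisoner for another ALF-5 prisoner is balanced:
print(swap_favors_us([5], [5]))              # True
# Releasing five ALF-5 prisoners for one who will never fight again is not:
print(swap_favors_us([5, 5, 5, 5, 5], [0]))  # False
```

The asymmetric case in the last line is the situation the rest of the article goes on to examine.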

But what if you had a prisoner of war who, because of experience or leadership ability, warranted an ALF multiplier? Say, for instance, that during World War II the Nazis had somehow been able in 1944 to capture Gen. Dwight D. Eisenhower, the Supreme Commander of the Allied Forces in Europe. If the Nazis were unaware of Eisenhower’s reputation, they might grade him as an ALF-1. But if they knew he was the architect of early war efforts, they might grade him as an ALF-50,000. His experience and knowledge make him, even as an individual, an exceptionally dangerous prisoner. The Nazis would need to get 50,000 of their own POWs back in exchange for this one man—though even then, it might not be worth the trade if Eisenhower caused them to lose the war.

The Moral Calculus

From an objective moral standpoint, our support of a prisoner swap should depend primarily on which side is gaining a strategic advantage. Even if we oppose the war in general, we should want the side with the most moral justification for imposing their will on their enemy to gain the advantage. If Nation A has the most just cause (relatively speaking), we should favor them making prisoner swaps when the ALF imbalance is in their favor. If the imbalance is in favor of Nation B, we should oppose the trade, even if we support the emotional reasons for the exchange.

In the latest exchange the U.S. swapped five high-risk, high-ranking Taliban officials for Army Sergeant Bowe Bergdahl. From an emotional and individual perspective some Americans could say it was a good trade, since the life of one American is worth more than a hundred Taliban. But we have to consider the future lives lost. Sgt. Bergdahl has an ALF of zero, since he will not be returning to combat. The five Taliban, however, should each be rated—at a minimum—at the median ALF of 5. So the swap was not just for the life of one American, but also for the potential loss of at least 25 American or allied Afghan lives.
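The arithmetic behind that figure can be spelled out in a few lines. The ratings here are the hypothetical median-ALF assumption introduced earlier, not official assessments of any individual.

```python
# Illustration of the exchange arithmetic: five released detainees,
# each assumed to carry the median ALF of 5, against one returned
# soldier who will not see combat again (ALF 0).
released_alfs = [5, 5, 5, 5, 5]  # assumed ratings, not official figures
returned_alf = 0                 # the returned soldier's ALF after release

potential_losses = sum(released_alfs) - returned_alf
print(potential_losses)  # 25
```

On these assumptions the swap carries a potential cost of 25 future casualties, which is the imbalance the moral calculus above asks us to weigh.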

Does this exchange have the potential for a moral imbalance and greater loss of American and allied lives? Presumably it does, though it’s likely the Obama Administration made a similar calculation, was aware of the danger, and believed such an outcome could be avoided. Hopefully, the administration knows more than we do about the true threat these Taliban leaders pose and has taken steps to neutralize that danger.

When Americans are held as prisoners of war, many factors, ranging from the emotional to the political, can influence the decision to exchange them for enemy combatants. While we can create criteria for making moral judgments in such situations, we should also be sympathetic to those who have to make the final decisions. Instead of being too hasty in our criticisms, let’s pray our leaders were guided by prudence, wisdom, and God’s leading.

Joe Carter

Joe Carter is the author of The Life and Faith Field Guide for Parents, the editor of the NIV Lifehacks Bible, and the co-author of How to Argue Like Jesus: Learning Persuasion from History’s Greatest Communicator. He also serves as an executive pastor at the McLean Bible Church Arlington location in Arlington, Virginia.

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity are reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure, even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24