The ISIL Strategy and the Just War Tradition

October 3, 2014

In his ISIL speech on September 10, the President outlined a four-part strategy to combat the militant insurgency of the Islamic State in northern Iraq: to “degrade, and ultimately destroy ISIL through a comprehensive and sustained counter-terrorism strategy,” to increase support to Iraqi forces already on the ground, to continue prevention of additional attacks on U.S. interests, and lastly, to provide ample humanitarian aid to communities decimated by the ruthless ISIL advance. On a cursory reading the strategy seems feasible. Perhaps it will even rebuff the threat as promised. The question, however, is whether the strategy follows the rules set down by a longstanding tradition of just war theory. And in pursuing this line of inquiry we must bear in mind a key distinction between the ends sought militarily and the rationale offered for achieving those ends. Military intervention may be morally necessary and justifiable when the rationale offered for intervention is not.

Historically, the rules of war are grouped into three moments: jus ad bellum, jus in bello, and jus post bellum. The first, jus ad bellum, concerns the decisive questions relevant to deliberations over whether war is to be waged at all. The second, jus in bello, concerns how war is to be waged once combat has begun. And the third, jus post bellum, focuses on rules of conduct following the formal completion of war; that is, justice after the war has been “won.” Love cannot be divorced from justice, and in a world where war remains a regrettable necessity, the motivating reasons for conflict, and indeed its guiding rationale, must be justice and charity. In the Just War tradition intervention is always on behalf of a nation, a neighbor, that cannot defend itself against an unjust assailant.

Again, at issue here is not whether intervention is justified in Iraq and Syria—as will be seen, I believe just cause conditions were met long ago—but whether the rationale offered publicly for intervention corresponds to the rules of Just War theory. Let’s identify a few of these dissonances and then conclude with some cautionary remarks.

What will it look like to succeed in Iraq? That has been a recurring question since the President’s September 10 speech: does the U.S. have a reasonable chance of success? The answer depends, of course, on what it might mean to be “victorious” over fanatical terrorism. The first objective in the President’s new strategy remains too vague and does not help us (the American public) grasp what he intends by “destroy.” It is relatively easy to “degrade” a vastly underpowered opponent; experience shows it is considerably harder to destroy one. By what means, exactly, will annihilation of institutionalized resentment be achieved? To what lengths?

The Bush administration’s mistake in 2001–02 was to characterize the U.S. response to the September 11 attacks as a War on Terror. To the extent that terror operations are motivated by a violent hatred of the West (especially the U.S.), there is no compelling reason to think that even the world’s greatest superpower with the most sophisticated weaponry known to man could eradicate this hatred from the earth. A War on Terror is not winnable. The Obama administration is not invulnerable to the same mistake. If ISIL is a mere proxy for a far more wide-ranging campaign against hostiles in the Near East, then “success” in this campaign will invariably prove elusive. Our final objective in Iraq and Syria needs specificity and redirection. If one of the unforeseen consequences of invading Iraq was to someday invade Syria, then what might be the unforeseeable consequences of invading Syria?

Why, exactly, the administration could tolerate for so long the extermination of Christians and the near genocide of the Yazidis but not the beheading of two journalists remains puzzling. Just Cause conditions were met months, if not years, ago. I suggest “years” here in part because I think one of the more persistent oversights in current discussions surrounding new military operations is a failure to recognize that their renewal is a direct result of having insufficiently upheld the rules of jus post bellum in the first place. U.S. rebuilding efforts in Iraq were shortsighted and misinformed. For evidence one perhaps need look no further than the forced democratization of Iraq and the protracted political disorder that has resulted from it. Since the hurried withdrawal of U.S. troops beginning in 2009, Iraq has been the site of vicious sectarian violence. Defense leadership now acknowledges that early withdrawal was injudicious—hence the “new” strategy—and that is also why the current problem has more to do with a failure to uphold jus post bellum than with satisfying jus ad bellum conditions all over again.

In truth, notifying U.S. citizens of the intention to begin targeting strategic ISIL positions came several weeks late and was in any event a deviation from the norm. Airstrikes are currently being conducted in no fewer than six different countries, including now Syria, whose political situation is so horrendous and confusing that it is impossible to get anything like a clear picture of what is happening there without the requisite security clearances. It is believed that arming and training diverse militias in support of their opposition to the Assad regime will later morph into opposition to ISIL and other radical groups. The American public is nevertheless being told to prepare for a long martial commitment; we simply don’t know when we will be able to extricate ourselves from the region. But of course it is likely that surveillance drones will remain omnipresent long after the flags are lowered.

As for the use of drones, I have written elsewhere that the current drone program does not adequately meet the discrimination principle of jus in bello. The principle of discrimination holds that a necessary distinction must be maintained in warfare between combatant and non-combatant. It is never morally permissible to target civilians deliberately. It may happen that civilian life is lost accidentally as a result of direct combatant targeting, of course, but the negative consequences associated with that action have a different moral status. Since 2001 the defense of drone use has appealed to two empirical benefits: reduced risk of pilot casualties and increased targeting precision. Regarding the second benefit, data furnished by third-party whistleblowers suggest that of the roughly 3,000 people killed by unmanned aerial vehicles, approximately 400–900 were civilians. Some claim that civilians account for closer to a third of all deaths. These numbers strongly suggest the principle of discrimination is being deliberately ignored, and any unmanned aerial targeting that also knowingly accepts the death of noncombatants as a justifiable consequence of targeting combatants is morally impermissible.
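The proportions implied by these casualty figures are easy to check. A minimal sketch, using only the numbers cited in the paragraph above (roughly 3,000 total deaths, 400–900 of them civilian) rather than any independent dataset:

```python
# Civilian share of deaths implied by the estimates cited above:
# roughly 3,000 total killed, of whom 400-900 were civilians.
total_killed = 3000
civilian_low, civilian_high = 400, 900

share_low = civilian_low / total_killed    # 400/3000, about 13%
share_high = civilian_high / total_killed  # 900/3000, exactly 30%

print(f"Civilian share of deaths: {share_low:.0%} to {share_high:.0%}")
# prints "Civilian share of deaths: 13% to 30%"
```

The upper estimate works out to exactly 30 percent, which is consistent with the claim that civilians account for “closer to a third” of all deaths.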

The pre-emptivist doctrines that have framed U.S. defense policy for the past several decades are likewise stretched to their logical and practical limits. It is a policy of recurrent anticipation. If the U.S. were to pursue arguably the most important objective in war, the establishment of peace, and not primarily on our own behalf but on that of our neighbors, this would represent the most dramatic refocusing of U.S. foreign policy since WWII. The futile quest for political certainty in the Near East directly undermines any enduring peace. This is not to suggest that pre-emptive tactics have no application to novel defense challenges; they undoubtedly do. The problem lies rather in thinking we can cancel every contingency in the world by implementing a comprehensive system of pre-emptive techniques. Our sprawling surveillance state is but one example of how far the State Department is willing to go to throw its net over global communications.

Now, many of the problems outlined above could be circumvented if U.S. officials took seriously the criterion of legitimate authority. In their current form, entities like al-Qaida and ISIL are what Augustine would call societies. Neither organization possesses legitimate political authority; both are more like robber bands united by their common animosity toward the West. Supporting politically legitimate Iraqi and Syrian leaders would go a long way toward achieving our martial aims, but no legitimate authority exists in Syria (hence its civil war), and given the state of affairs in northern Iraq, there isn’t much evidence of public legitimacy there either. If we were to interpret part of the President’s new strategy flexibly as something akin to the reestablishment of legitimate government—though it is not at all clear that is what he means—then perhaps a start could be made. Until we are able to state with confidence on whose behalf we are intervening in Syria and how the governmental infrastructure of that nation might be rebuilt, the more definite ends we pursue there, aside from dismantling ISIL, will remain inchoate and malleable.

In this respect what has become increasingly clear is that the War on Terror has not ended—its epicenter has merely shifted geographically. If Afghanistan was phase one and Iraq (for better or worse) phase two, then this new strategy perhaps represents phase three of the War on Terror. The phases will multiply further, I suggest, so long as the U.S. pits itself against Terror and its syndicates (al-Qaida, ISIL, etc.) and not against legitimate authorities accountable to the rule of law. The potential endlessness of this war is captured by the following contradiction: the means we use to “destroy” are the very same means by which radical reinforcement is inspired. In other words, the U.S. campaign against terror is at the same time a campaign for terror, in the sense that conflict with ISIL (or any group harboring hatred toward the U.S.) is a tragically effective form of recruitment to ISIL. This is self-defeat by definition.

“Terrorism names a historical conjunction of two distinct phenomena,” suggests Oliver O’Donovan: “the waging of war by disordered means, in defiance of proportion or (especially) discrimination, and the waging of war by military organizations which are not only not governments, or subject to governments, but are not even putative governments, and so have no direct interest in the provision of judgment for any community which they plunge into the turmoil of armed struggle.” This explains quite nicely why jihadist networks are not legitimate authorities—they are not executors but antagonists of justice, and in turn their existence wherever they are found is indicative of an acute lack of political authority. So important is the task of nation-building, therefore, that a war would be unjust if “it does not intend the state of peaceful and lawful governance for the community against which war is waged.” Its being just does not mean it will be easy, of course. Afghanistan still faces enormous sovereignty challenges. Perhaps the U.S. would do well to allow for alternatives to democracy in nations with profoundly anti-democratic political theologies.

The U.S. is right to begin airstrikes and to deploy additional ground support in northern Iraq and Syria, but the rationale offered so far in justification of the campaign remains too vague and open-ended. If, in fact, the U.S. is intervening for the explicit purpose of defending another nation and of attempting to re-establish peace in the region through the reinstallation of governmental infrastructures, then its approach would more closely resemble one adherent to the rules of just war theory. If, however, Syria becomes another Iraq 2003, where we topple governments and write constitutions anew, then, again, these ends won’t be met. Looking forward, the real question will be whether this new location for our War on Terror falls under the same logic as before or whether achievable, legitimate ends guide our strategic policies. For my part, I fear intervention will morph almost seamlessly into invasion, and that any tacit hope of establishing peace in the region will deflate under the pressurized exigencies of the campaign. By all appearances we have at present a strategy looking for an end, rather than an end looking for a strategy. Principles from the tradition of just war would have helped a great deal with that, had they but been consulted.

The view expressed in this commentary belongs solely to the author and is not necessarily the view of the ERLC.

Matthew Arbo

Matthew Arbo has a Ph.D. in ethics from the University of Edinburgh, currently serves as a research fellow in Christian Ethics at the ERLC, and has taught Christian Ethics and Public Theology at Southeastern, Midwestern, and Southern Seminary. He has formerly held a bioethics fellowship at the Paul Ramsey …

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, to punish those who do evil, to uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and the setting of norms should not be left solely to those who develop these technologies or to governments.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24