
Asking Questions Few Want to Ask

November 13, 2014

Some questions should never be expressed, given their timing and content. The police pull you over and ask for your driver’s license. You do not say, “Oh? Why?” You give them your license. The boss says, “I need 20 copies of this in an hour.” You do not reply, “Can someone else do it?” You find a copy machine. When duty calls, duty expects an answer. That’s how life works. We don’t get to challenge authority very often without adverse consequences; and sometimes, a mere question crosses the line. Every parent understands. When we tell our kids, “Take out the garbage,” or, “Go to bed,” we expect compliance, not useful dialogue about ways, means, and rationales. In those situations, “Why?” is an offensive response. Children should not search for reasons to disobey or construe their obligations as narrowly as possible—e.g., by going to bed without going to sleep, or by setting the garbage outside the door instead of in the dumpster.

So meet the lawyer of Luke 10:25-28, a man who asks an ill-timed and inappropriate question. He comes to Jesus with an ultimate worry, a concern shared by many in that day. He wants to know—or he thinks he wants to know—what an overachiever like him must do to inherit eternal life. There has to be a plan, he expects, some way of facing the judgment of God with exceptional confidence. Maybe this Jesus would know, if anyone does, given his miracles and teaching. At the very least, it could do no harm to hear the prophet’s answer. Which laws—among the Old Testament’s 613 options—really count? What are God’s priorities, if some matters of the Law are taken to be weightier than others?

From one angle, of course, even this first question would get it wrong, if “do” suggests a transactional pathway to eternal bliss, where enough merit makes the grade. In that case, we have no hope at all, not even precious little. But the lawyer may innocently desire to know what it’s like to love God as he ought to do, given his place among the chosen people; and if so, we haven’t seen his dark side—not yet, anyway. The latter appears soon enough, however, as this conversation unfolds. When Jesus applies the right kind of pressure, the lawyer forgets himself and asks the exact wrong question, one that should never be voiced, given its content and intent.

At first, Jesus tests the lawyer’s interpretive skills: “What is written in the Law? How do you read it?” (v. 26). In other words, he asks, ‘Which laws capture the essence of what Yahweh requires?,’ and the lawyer responds correctly with two answers. The righteous man loves the LORD with everything he’s got, as per Deuteronomy 6:5 (v. 27). But something else is needed, taken from Leviticus 19:18: he would love his neighbor as himself. Do these two things, Jesus confirms, “and you will live” (v. 28). But the lawyer takes another step, this one too far, desiring to justify himself. He asks, “And who is my neighbor?” (v. 29).

On the surface, this question seems harmless. What’s wrong with knowing, in specific detail, who exactly one’s “neighbor” is? If Jesus would address this detail, the lawyer might examine himself more usefully, to see whether he is ‘in’ or ‘out’ of the covenant. But Jesus sees through the question: in fact, the lawyer is playing a clever game. Once Jesus says, “Behold, your neighbor,” the lawyer will also learn who his neighbor is not. He will know who the strangers are, defined as people he doesn’t have to care about. Thus Jesus tells the parable of the Good Samaritan, thereby teaching a simple but profound lesson. Draw in your mind a ‘circle of concern,’ and put yourself in the center of it. If you were the lawyer, you’d want to know, “Who is inside my circle, and who is outside?” But from Jesus, he gets this response: “The circle moves with you.” In other words, all kinds of people can and do fall within one’s circle of concern, given the usual patterns of daily life. The Samaritan understood this fact intuitively and acted upon it. The others did not; and the lawyer is expected to regard the conduct of these others as moral failure.

We face similar temptations and may find ourselves asking godless questions. Was that a knock on the door? A friend in need? We answer, “No, I guess not,” but some part of us really means, “I hope not.” Was that Fred from the church, back there by the roadside, struggling with a flat tire? We tell ourselves that it was not, while we secretly add, “Also, I hope not.” In our worst moments, we think of people not as friends and neighbors, but as transient liabilities, as problems we can do without. Someone else will come along. Someone else will do the right thing, if not us. But worse cases than these arise all the time, even here in America, where we need so little and want so much. In fact, worse cases come up especially in America, and they’ve done so ever since 1973.

Perhaps the most famous article ever written on abortion is also, in its own perverse way, the bravest. In 1971, while most Americans were still pro-life, Judith Jarvis Thomson published an essay in which she stipulates what defenders of abortion typically do not: “the fetus is a person from the moment of conception.” But never mind, Thomson argues: the personhood of unborn children, even conceded, does not require us to ban abortion. It can be morally permissible to kill a baby, she argues, even when the mother’s very life is not threatened by ongoing pregnancy (Philosophy & Public Affairs, Fall 1971). Most abortion advocates, however, are not so candid. On the contrary, their strategy looks very much like that of the lawyer in Luke 10:25-28. If we can gerrymander the boundary between ‘my neighbor’ and ‘the stranger,’ we limit our bioethical liabilities. In this case, the categories are ‘persons’ and ‘non-persons,’ and one cannot help noticing the energy devoted to missing an obvious point about the unborn: they are living human beings and thus living human persons.

In debates about the beginning and end of life, modern societies expend great effort examining the personhood of what is to be killed. It’s either permissible or wicked to kill the unborn and infirm, depending on whether they are enough like the rest of us to count. Some argue that it’s actually good to kill here and there, rather than let live, though they still construe the resulting deaths as tragic. But if we get back to the lawyer’s question, a new worry arises, one that precedes any particular answer to the question of personhood, as clear as that answer actually is on both ends of life. That is, it should concern us greatly that lots of people in this country are asking nasty, ill-timed questions, two of them being: (1) Are human fetuses really unborn children? (2) Are desperately sick, elderly people really living?

On the surface, such questions seem harmless enough. They merely ask whether one has duties toward the unborn and elderly, given how they differ from regular people in certain respects. Yet they also serve a deeper, less admirable agenda, perhaps even a wicked one, depending on what responses they contemplate; for what accompanies the question is the unspoken thought, “I hope not,” as per our other examples. In other words, the questions themselves reduce the unborn and elderly to the status not of friends and neighbors, but of assets and liabilities. They are either in the way or out of the way, and thus the parable of the Good Samaritan finds its mark. We might have thought, “We’d be sinning, of course, if we were to mistreat actual persons.” But this parable implies that the wrong turn can happen even sooner. Some inquiries should never be engaged, especially when they seek to limit our moral exposure. From that perspective, the question, “Who is really human?” can be a nasty one indeed.

Thor Madsen

Dr. Thor Madsen has been at Midwestern Seminary since 1999 and is currently Dean of Graduate Studies, PhD Program Director and Professor of New Testament, Ethics and Philosophy. Dr. Madsen graduated from Wheaton College with the Bachelor of Arts degree, majoring in philosophy. Following his studies at Wheaton, he went to …

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity are reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision-making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24