Book review: The Coddling of the American Mind

October 10, 2018

The Coddling of the American Mind by Greg Lukianoff and Jonathan Haidt is, in the authors’ own words, “about wisdom and its opposite” (p. 1). Based on an article the duo wrote for the September 2015 issue of The Atlantic, the book focuses on three “Great Untruths” that have grown in popularity over the last few years and are particularly potent on college campuses today. These three Great Untruths are:

  1. The Untruth of Fragility
  2. The Untruth of Emotional Reasoning
  3. The Untruth of Us Versus Them

The Coddling of the American Mind is broken into four parts: an exploration of these Great Untruths; a couple of examples of how those untruths are manifest; an in-depth look into how the Great Untruths came to prominence; and some practical advice on where to go from here. Despite not being a Christian volume, this book is full of important observations and timely applications for parents, pastors, educators, and others. Let’s examine the book through the lenses of the three Great Untruths on which it is based.

1. The Untruth of Fragility

This untruth is expressed as, “What doesn’t kill you makes you weaker.”

Lukianoff and Haidt open this section with a controversial example: a 2015 study on peanut allergies. In short, the study concluded that among children who were “protected” from peanuts, 17 percent developed a peanut allergy, while among children exposed to peanut products, only 3 percent did (p. 20-21). What do the authors say this has to do with the coddling of America’s young people? A lot.

The authors cite the work of statistician and stock trader Nassim Nicholas Taleb. In his book Antifragile, Taleb breaks everything down into three basic categories: fragile, resilient, and antifragile (p. 23). A glass vase is fragile because it can be easily damaged or broken when put under stress. A plastic sippy cup is resilient because it can be thrown on the floor by a toddler every night at dinner and not be damaged. Human muscles are antifragile because resistance, challenges, and moderated stressors actually make them stronger.

Humans are meant to be antifragile beings, say Lukianoff and Haidt, and their case is strong. Their claim is that the coddling of America’s young people, or the constant pursuit of not only their physical safety, but also their mental and emotional safety, is actually doing more harm than good.

Because Lukianoff and Haidt work on college campuses and because their observations are most readily seen in college students, much of the book addresses how these untruths affect those in college (or of college age). The authors ask, “Should college students interpret emotional pain as a sign that they are in danger?” (p. 27). If you pay any attention to the news, you will remember a number of recent cases of college students claiming that certain speakers holding events on their campuses could be “harmful” or “violent” simply because their views might make some students uncomfortable. This victim mentality ultimately exists because many college students today see themselves as fragile and breakable, not resilient, and most certainly not antifragile.

But what if students have genuinely endured traumatic life situations that may make certain experiences emotionally and/or mentally difficult? Shouldn’t authority figures do all they can to provide safe spaces that protect these students from harmful triggers? Lukianoff and Haidt say no. They write, “Avoiding triggers is a symptom of PTSD, not a treatment for it” (p. 29).

Later they break down the unintended negative effects of “safetyism,” which is “a culture or belief system in which safety has become a sacred value” (p. 30). To conclude the discussion of this untruth, the authors write, “Safetyism deprives young people of the experiences that their antifragile minds need, thereby making them more fragile, anxious, and prone to seeing themselves as victims” (p. 32).

2. The Untruth of Emotional Reasoning

This untruth is expressed as, “Always trust your feelings.”

Lukianoff and Haidt begin this chapter by listing nine of the most common cognitive distortions humans exhibit, including emotional reasoning (letting feelings determine reality), catastrophizing (focusing on the worst possible outcome), mind reading (assuming you know the thoughts of others), and other distortions commonly taught in counseling courses and cognitive behavioral therapy training.

Right away the authors tackle one of the most common buzzwords on college campuses (and on social media) today: “microaggressions.” In short, microaggressions are common verbal, behavioral, or otherwise social interactions that communicate bias, privilege, or other negative messages, whether intended or unintended. Some microaggressions the authors have heard before are listed: a white person calling America a “melting pot,” a white person saying “I believe the most qualified person should get the job,” and others (p. 41).

The problem with microaggressions, say Lukianoff and Haidt, is that people often offend others unintentionally, simply because of their differing life experiences, and unintentional offense does not line up with the meaning of “aggression.” They write, “Aggression is not unintentional or accidental. If you bump into someone by accident and never meant any harm, it is not an act of aggression, although someone may misperceive it as one” (p. 40). That is where the phenomenon of microaggressions and The Untruth of Emotional Reasoning collide. The authors identify a shift in morals on campus—a shift from “intent” to “impact” (p. 43). What people intend has taken a back seat to how their act made someone feel, regardless of what the “aggressor” intended. The authors contend, “A faux pas does not make someone an evil person or an aggressor” (p. 44).

The best way to summarize this section is in the authors’ own words: “Discomfort is not danger” (p. 51).

3. The Untruth of Us Versus Them

This untruth is expressed as, “Life is a battle between good people and evil people.”

Writing specifically about one concerning event on a college campus, Lukianoff and Haidt observe, “It’s as though some of the students had their own mental prototype, a schema with two boxes to fill: victim and oppressor. Everyone is placed into one box or the other” (p. 57). Though the authors make this statement about one particular event, their observations apply to many college campuses today and beyond.

In this section of the book, the authors break down the phenomenon of identity politics, showing that it is not new but acknowledging that it has evolved into “call-out culture” (p. 71). In such a culture, “students gain prestige for identifying small offenses committed by members of their community, and then publicly ‘calling out’ the offenders” (p. 71). The authors continue, “One gets no points, no credit, for speaking privately and gently with an offender—in fact, that could be interpreted as colluding with the enemy. . . . This is one reason social media has been so transformative: there is always an audience eager to watch people being shamed, particularly when it is so easy for spectators to join in and pile on” (p. 71-72).

All of these untruths compound on one another like the cliché snowball rolling down a hill: they build on one another and threaten to bowl over anyone who stands in their way and names the problem. The authors recognize this and help their readers understand how best to respond.

A brief conclusion

America’s young people are in a moral crisis, but it’s not as simple as it seems. It is not simply a moral crisis of making poor decisions. It is a moral crisis that is turning the basis of morality upside-down. In The Coddling of the American Mind, Lukianoff and Haidt provide a thorough analysis of this phenomenon, and they show their work throughout.

It is difficult to do this book justice in a brief review such as this. We didn’t even really take the time to examine all of the helpful advice Lukianoff and Haidt provide for parents, universities, and societies in general. So, the reader should know that this book is intensely practical as well.

I highly recommend this book to anyone who engages with young people in any capacity. Whether you’re a student pastor, a parent, a college administrator, or otherwise, this volume will give you a fuller perspective on why America’s young people seem so fragile today. Remember, however, that this book is not a “Christian” book. So, while Lukianoff and Haidt provide sound counsel and advice throughout, it will be up to you to make connections to the truth of Scripture and how you practice biblical parenting and ministry.

Chris Martin

Chris Martin, author of “Terms of Service,” is a content marketing editor at Moody Publishers and a social media, marketing, and communications consultant. He has led social media strategy at Lifeway Christian Resources and advised some of the foremost Christian leaders and authors on digital content strategy.

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, to punish those who do evil, to uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity are reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24