
Explainer: Justice Thomas and the possibility of reining in Big Tech

April 12, 2021

Last week was a particularly busy week for the technology industry at the nation’s highest court. First, the United States Supreme Court ruled in Google’s favor in its decade-long court battle with Oracle over the use of certain Java software code to build the Android operating system. Oracle claimed that Google’s use of the code violated federal copyright law. Then, the high court released its decision in Biden vs. Knight First Amendment Institute at Columbia University. That case was declared moot, and the lower court’s judgment was vacated. The case was originally titled Trump vs. Knight but was recaptioned after the inauguration of Joseph R. Biden, since it revolved around the question of a president’s ability to block members of the public from his account on a social media platform.

What was the case about?

The original lawsuit was filed back in July 2017 by the Knight First Amendment Institute and seven social media users against President Trump on the grounds that he had blocked these seven individuals on Twitter after they criticized him or his policies. Being blocked by the president meant that these users could no longer see or respond to his posts on the platform. As veteran court reporter Amy Howe wrote, “The plaintiffs alleged that blocking them on Twitter violated the First Amendment, and the district court agreed. The U.S. Court of Appeals for the 2nd Circuit upheld that ruling.” The lower courts ruled that the president’s Twitter account was a public forum and that the government violated the rights of these individuals by blocking their access to it.

On Aug. 20, 2020, a petition for a writ of certiorari was filed, asking the Supreme Court to review the case in the midst of a presidential election year. In January, the Trump administration filed a brief indicating to “the justices that, although the 2nd Circuit’s decision was worthy of their review, the case would become moot once Joe Biden succeeded Trump as president on Jan. 20.” Howe explains, “Trump had been sued as the president, rather than in his personal capacity, the administration explained, but Biden would not have any control over Trump’s Twitter account.” Then, after the attack on the United States Capitol over claims of election fraud, Twitter permanently suspended President Trump’s account over the claim that he had incited the violence (though the administration argued that the suspension could be overturned and therefore should not have bearing on the case). All of these shifting circumstances ultimately led the court to grant the petition for a writ of certiorari, vacate the judgment, and remand the case to the Second Circuit with instructions to dismiss it as moot.

What does this case have to do with online content moderation?

On April 5, Justice Clarence Thomas released a concurring opinion alongside the court’s ruling. In it, he explained in detail the reasoning behind the decision to grant the petition for a writ of certiorari. But he went on to connect this case to the larger questions surrounding the immense responsibility and control that technology companies such as Facebook, Twitter, Amazon, and Google have over civic discourse, given their massive size and the public’s dependence on them.

Justice Thomas writes, “Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.” He went on to state that the government might have a compelling interest to intervene in this new power dynamic by limiting the right of a private company to exclude. Justice Thomas explained, “If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude.” He submitted two possible legal doctrines for consideration: designating social media platforms as “common carriers” or as “public accommodations.” Both are highly controversial in digital governance debates, especially among legal and media scholars.

Justice Thomas argued that the “common carrier” designation has been applied to other industries with considerable market size, such as those in transportation and communication. These industries are given special privileges by the government but also have restrictions placed on their ability to exclude. “By giving these companies special privileges, governments place them into a category distinct from other companies and closer to some functions, like the postal service, that the State has traditionally undertaken.” This particular argument, however, may overlook the difference between social media platforms acting as mere carriers of information and acting as curators of the content their users post.

The other designation of “public accommodation” would apply regardless of the relative market size of the companies, given the ongoing scholarly debate about whether market power is a necessary condition for a company to be considered a common carrier. Justice Thomas wrote that these companies may not “‘carry’ freight, passengers, or communications,” but they could nevertheless have their right to exclude curtailed given their public function. “If the analogy between common carriers and digital platforms is correct, then an answer may arise for dissatisfied platform users who would appreciate not being blocked: laws that restrict the platform’s right to exclude.” While he acknowledges that technology companies do indeed have their own First Amendment rights, he nevertheless argues that these rights may need to be diminished in light of the influence this industry has over our public discourse. This is a complex situation, especially for conservatives, who traditionally resist government intrusion into the rights of individuals and corporations.

Overall, Justice Thomas explores each of these options, as well as their potential pitfalls, throughout the concurrence. He rightly points out that these changes would need to be enacted by various legislatures, though they might also fall to the courts depending on the contours of the cases brought before them. This opinion, while carrying no enforceable action, is significant because a sitting justice of the Supreme Court is making these types of arguments to rein in the power of the technology industry, an issue that both Democrats and Republicans have been pursuing, even if on different ideological grounds.

What does this mean?

Justice Thomas acknowledged the tensions in the current public policy debates over the role that these digital platforms play in our public discourse, given their immense size and influence, including their ability to moderate user content. He is correct in saying that applying old doctrines to the new challenges of digital platforms is an extremely complicated matter, whether on issues of free speech, questions of public accommodation, or the nature of religious expression online.

As legal expert and free speech attorney David French correctly states, “Millions of Americans are deeply concerned about the power and reach of America’s largest tech companies, but their concerns often diverge sharply depending on their partisan affiliation.” French goes on to say, “The two sides are increasingly united in wanting more government regulation. They’re deeply divided as to what those regulations should say.” French, like others, is concerned that government intervention in these matters may jeopardize the countless First Amendment victories that have been forged in recent years.

While Christians may disagree about the best path forward in these particular debates, we must all acknowledge that we live in a time when religious speech is increasingly seen as at odds with acceptable public discourse and free expression is often hampered in the pursuit of secularism. We need more believers engaged in these discussions who understand that the technology industry must be a major element in a full-orbed public theology. These types of decisions are crucial for the health of our democracy and the future of religion in the digital public square.

Even with the immense complexity of these debates, one thing is abundantly clear: the dignity of our neighbor is at stake around the world. We must keep that truth central to this debate over digital governance, whether here in the United States or abroad under the repressive hand of authoritarian regimes. Though these issues may at times seem to be just about tweets, posts, and even the contours of particular content moderation policies, they must be seen as ways that human beings, created in God’s very image, are able to communicate, express themselves, and do life in an increasingly digital society.

Photo Attribution:

SOPA Images / Getty Contributor

Jason Thacker

Jason Thacker serves as senior fellow focusing on Christian ethics, human dignity, public theology, and technology. He also leads the ERLC Research Institute. In addition to his work at the ERLC, he serves as assistant professor of philosophy and ethics at Boyce College in Louisville, Kentucky. He is the author …
