By / Sep 16

My family lives just outside of a small town in Tennessee with a historic downtown district. Like many small towns throughout our nation, we have a downtown square that serves as a hub. In prior generations, these public squares were gathering places for everyone. People regularly traveled in from the outskirts of town to shop, eat, and do business. They would also come together for community events and to freely engage with one another. While many historic downtown public squares were abandoned amid the growth of the suburbs, there is a renewed interest in revitalizing these historic neighborhoods and providing a place for communities to gather once again, especially in a digital age that has led to increasing isolation.

These public gathering places serve as an apt metaphor for our moment, when much of our daily communication, commerce, and community is facilitated in the digital public square of social media and online connectivity. With the rise of the internet, social media platforms such as Facebook, Twitter, and TikTok, and massive online retailers and internet companies like Amazon and Google, these new digital public squares promised to bring about a vibrant era of connectivity and togetherness across distances, more diverse communities, and greater access to information. Many of these initial promises were made against the backdrop of oppressive regimes throughout the world that stifled free speech, suppressed human rights, violated religious freedom, and limited access to information in order to maintain control over other human beings made in the very image of God.

Ethical challenges in the digital age

While technology has brought incredible benefits and conveniences into our lives, it has also led to countless unintended consequences and deep ethical challenges that push us to consider how to live out our faith in a technological society. Each day we are bombarded with fake news, misinformation, conspiracy theories, ever-growing polarization, and more information than we could ever hope to process. We are regularly faced with challenges where wisdom and truth are needed, yet faith is not always welcomed in the public square or in the important debates over digital governance. In truth, technology has always been used and abused by those who seek to hold on to power and wield it to suppress free expression all around the world. But today, these threats seem more visceral and dangerous to our way of life than ever before.

One of the most challenging ethical issues of our day centers on the proper role of digital governance and the ethical boundaries of free expression in the digital public square. Many have recently begun to question the technology industry's influence over our public discourse, as well as the responsibilities of individuals, third-party companies, and even the government in digital governance. While much of the dangerous, illegal, and illicit content is rightly moderated, questions remain as to what kinds of ideas or speech are to be welcomed in the digital public square and how we are to maintain various ethical boundaries as we seek to uphold free expression and religious freedom for all.

On one hand, our digital public squares are very public and have an incredibly diverse group of community members. But on the other hand, there is often immense pressure to conform to certain secular ethical principles that tend to push people of faith out of public conversations and debates simply based on their deeply held beliefs about God, the nature of humanity, and how we are to navigate these challenges to free expression and religious freedom. 

A new research project

The complex nature of the questions surrounding ethics and religion in the digital age is exactly why I am excited to announce that the Ethics and Religious Liberty Commission is pioneering a new research project called the Digital Public Square. This project is designed to provide the local church and the technology industry with thoughtful resources that equip everyone to engage these important debates over digital governance and to promote free expression as well as religious freedom for all. We seek to cast a robust vision for public theology and ethical engagement in a technological society — a vision grounded in a historical understanding of the role of the church in society and in the unchanging Word of God. While some believe that religion has no role to play in a modern society, we believe that our faith is central to how we engage these pressing issues and live faithfully in the digital age.

The Digital Public Square project will gather some of the best voices from across academia, journalism, public policy, think tanks, and most importantly, the local church to clarify the state of the digital public square and to cast a vision for Christian engagement in the areas of content moderation, online governance, and engagement with the technology industry as a whole. Just as Christians have sought to develop a robust public theology on matters of church and state relations for many generations, Christians must also think deeply about how God would call us to engage the challenges of technology and these companies that operate around the globe in vastly different cultural contexts. We will seek to answer questions surrounding the nature of free expression, the role of democratic values around the world, and best practices for cultivating a truly diverse digital society where people of faith are a vital part of these important conversations.

We will do so in a four-pronged approach that will extend throughout 2021 and 2022. The project will include an in-depth report on the state of the digital public square, a set of guiding ethical principles for digital governance, and numerous resources for the local church to use in order to engage and bear witness to the gospel in the digital age. These resources will include two different book-length volumes: Following Jesus in a Digital Age with B&H Publishing, and The Digital Public Square: Ethics and Religion in a Technological Society from B&H Academic in 2022. The latter will feature contributions from 14 leading thinkers from across society addressing the pressing issues of digital governance, such as the nature of the public square, U.S. and international technology policy, religious freedom, hate speech and violence, sexuality and gender issues, pornography and other objectionable content, misinformation, fake news, conspiracy theories, and the rise of global digital authoritarianism.

To learn more about the Digital Public Square project and to receive project updates, along with our weekly content on technology ethics, visit ERLC.com/digital.

By / Jul 6

As things start moving back to a post-pandemic “normal,” many parents are looking forward to their children returning to in-person learning. Along with improved concentration, reconnected in-person friendships, and reestablished rigorous standards, one of the key benefits will be less time on screens. None of this will come without effort and intentionality, but what may prove most difficult is dialing back kids’ dependence on screens.

The battle over devices

The battle over devices was already a problem before the pandemic. Books like Naomi Schaefer Riley’s Be the Parent, Please: Stop Banning Seesaws and Start Banning Snapchat sounded the alarm back in January 2018. Real harm comes to children of all ages from unsupervised, unfiltered access to all things online and virtual, Riley argues. The pandemic only made that worse. Once schools went online, there was little hope for limitations. Not only were children expected to be on their iPads or computers for all of each school day, but they were also typically given looser restrictions during after-school hours by parents who, scrambling to get their own work done and anxious about all the bad news, were glad for their children, who had nowhere to go, to have something to do.

Last spring, when most kids didn’t have a choice about being on a connected device for hours a day, experts tried to be reassuring. They said some screen time is okay, but still agreed that too much is detrimental. “Spending an hour or two a day with devices during leisure time doesn’t seem to be harmful for mental health,” wrote psychology professor Jean Twenge, at the Institute for Family Studies. “And doing homework or educational activities on devices for a few hours a day is a virtual necessity and is unlikely to be harmful, so we can cross that off our list of worries as well.” 

Even when screen time was considered essential, Twenge wasn’t giving unqualified support. “[This] doesn’t mean parents should give up on managing kids’ screen time during this extended period of staying at home. Watching videos and scrolling through Instagram all day might keep them quiet, but it’s not the best for their mental health or development.” As virtual school winds down, it’s time to revisit prior concerns about how much screen time is too much, and even more urgently, how much of what’s online is harmful, regardless of time limits.

In addition to the angst all parents generally feel about what kids are watching and doing on social media these days, Christian parents have a biblical imperative to disciple their children — to oversee not just their mental and physical health, but most importantly, their spiritual growth (Deut. 6:6–9; Eph. 6:4). That includes shepherding their media use. We need renewed vigor to reclaim — or introduce for the first time — God-honoring digital habits. 

The Wall Street Journal’s family and tech columnist, Julie Jargon, says, “After more than a year of being glued to their devices, a lot of kids will have trouble easing up on the tech that brought them comfort and connection during the pandemic.” It’s not just children who will have to work at this. Parents, too, likely spent more time online and on devices in 2020, and their modeling is a primary influence on their kids. 

Digital reset

Jargon’s article, “How to Wean Your Kids—and Yourself—Off Screens,” recommends a family “digital reset” including things like phone-free times and spaces (the dinner table, car rides), shared rather than solo screens, and even a one-day-a-week tech sabbath. She suggests going back to pre-COVID tech rules. “Use the start of summer as an opportunity to re-establish any tech rules you let slide during the pandemic, like allowing devices in bedrooms at night or allowing videogames before homework or chores are done.”

Assuming you had pre-COVID tech rules, that’s a good place to start. But many Christian parents need to honestly ask themselves what their kids’ — and their own — habits were before the pandemic. What’s needed may not be a return to pre-pandemic normal, but a better, more biblical normal. That includes a better rhythm of shared family culture, analog learning, creative real-life (not virtual) endeavors, and using technology for the glory of God. Some examples include reading books aloud together, asking good questions to foster substantive conversations at mealtime, going outside to explore nature together, re-engaging with or developing shared hobbies, playing instruments and singing, playing board games, cooking together, and exercising as a family; the list could go on.

It is up to parents to set expectations for life together in the family. That life is shaped in large part by how much, or how little, time is given to screens. Children need us to help them answer questions like: What does it look like to faithfully steward our time? How does social media use affect our thoughts, our affections, our desires? What might we do together if we put down our phones? And in the absence of those phones, how might we advance the kingdom of God in our children’s hearts and minds?

Here’s what that might look like in everyday life:

Meet with God before you meet with people: My husband and I both wait until after we’ve met with the Lord, praying and reading our Bibles, to even pick up our phones. Giving our first thoughts to what’s essential, seeking God’s will for the day, meditating on his revealed truth — all of this grounds us in what’s most important and makes us less vulnerable to the voices of the world that flood our phones (Psa. 1:1–2).

Study the Bible and pray together: After seeking God personally, we seek him together as a family. Last fall we started spending 10–15 minutes together on weekday mornings, before we all headed in different directions, reading Lord Teach Us to Pray, a family study on the Lord’s Prayer. With our kids’ help, we read the text selections together and answer the study’s questions about what we just read in the Bible.

Use screens in community: Proverbs 18:1 says, “Whoever isolates himself seeks his own desire; he breaks out against all sound judgment.” That reality is a warning against giving kids connected devices to use by themselves. We limit screen use to shared family spaces where they can be easily seen by more than just the person using them. 

Model what you require: (Or plan to when your children are old enough). Let your children see you stewarding your phone, your iPad, and your other smart devices the way you want them to steward theirs. 

Put screens to bed early: Rather than scrolling ourselves to a fitful sleep, we spend the last hour of most days together reading a story aloud, or reading books to ourselves, unwinding the stress of the day with restful “slow” entertainment, and closing the day’s activity with a family prayer.

As we celebrate a return to normal, these and other similar embodied, relational practices can keep us from losing our way in the fog of media that grows thicker by the day.

By / Jun 1

On Monday, May 24, Florida Gov. Ron DeSantis (R) signed into law a new bill regulating content moderation and online governance on social media platforms in the state. It is the first state bill on these issues to become law, with other states including Arkansas, Kentucky, Oklahoma, and Utah currently considering similar legislation.

DeSantis championed the bill as a collaborative effort at the press conference where he signed the bill into law, highlighting how these major social media companies have inconsistently applied their often ill-defined content moderation policies or have not been transparent in the application or design of those policies.

What is in the bill?

The Florida legislature had passed SB 7072, the Stop Social Media Censorship Act, the week before. The bill includes multiple provisions curtailing content moderation in the state, such as empowering the state election commission to impose fines — up to $250,000 per day if a statewide candidate is banned from the platforms, with lesser fines for candidates for local office.

The bill contains other major provisions like prohibiting the platforms from taking “any action to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast” and forbidding the removal of content from news outlets above a certain size. It also empowers Floridians to sue these platforms as individuals if they believe that content moderation standards or policies have been inconsistently applied to them. The stated intent of the bill is to regulate the powerful social media companies that some argue have unfairly moderated content and users on an ideological basis, often interpreted through a partisan lens.

DeSantis tweeted, “Floridians are being guaranteed protection against the Silicon Valley power grab on speech, thought, and content. We the people are standing up to tech totalitarianism with the signing of Florida’s Big Tech Bill.” The governor decried that these “Big Tech” companies act as a “council of censors” and suggested that they should be treated like common carriers.

Similar arguments were made by Supreme Court Justice Clarence Thomas in an April concurrence released alongside a decision on a 2017 lawsuit concerning President Trump’s blocking of individual users on Twitter. Of note, the Florida bill would not apply to Trump’s permanent suspension from Twitter or his indefinite ban from Facebook, a controversial decision recently upheld by Facebook’s newly created Oversight Board. The Florida bill only applies to candidates for state office, but its wide-ranging effects will likely be felt throughout the rest of the nation.

Big Tech censorship

Concern over social media and the outsized influence of technology companies on our public discourse is one of the rare points of bipartisan agreement in society today. But there is little agreement on the particulars. Progressives traditionally argue for more content moderation, especially given the growing influence of misinformation, fake news, and hate speech online. Conservatives, though, have long argued for less moderation, contending that conservative speech and values have been unfairly taken down or suppressed — with some users being banned or even specific platforms being shut down completely — simply because of the prevailing ideological agenda in Silicon Valley.

These debates are often categorized under the moniker of “Big Tech,” which is meant to signify the outsized influence and ubiquity of these media platforms in the public sphere, though the term fails to account for some of the largest “big tech” companies in the United States, including Microsoft, Disney, Comcast, Verizon, and others. It is also focused on American companies, excluding tech and media giants such as Tencent and Alibaba of China, which have concerning records on free speech and religious expression under the rule of the Chinese Communist Party. The term is specifically intended to include companies like Facebook, Alphabet (Google/YouTube), and Amazon, as well as companies with much smaller user bases that have enormous influence in the digital public square, such as Twitter.

The Florida bill immediately drew criticism from across the ideological spectrum, but for very different reasons. More progressive outlets mocked the bill for what they saw as its blatant disregard for free speech and predicted a plethora of lawsuits challenging its constitutionality. The Washington Post interviewed Santa Clara University law professor Eric Goldman, who “described the bill as bad policy and warned that some of its provisions are ‘obviously unconstitutional’ because they restrict the editorial discretion of online publishers.”

Goldman also pointed out that the Florida bill may run afoul of Section 230 of the Communications Decency Act, which is designed to shield these companies from litigation over third-party content on their platforms. Section 230 was enacted in 1996 on a bipartisan basis to encourage these platforms to moderate content under “good faith” policies, removing content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Conservative and free speech attorney David French argues, “the bill’s provisions compel private corporations to host (and also promote through application of their algorithms) speech they would otherwise reject. Not only do these provisions of state law conflict with (Section 230), they violate key First Amendment precedents that grant private citizens broad protections against compelled speech, protect the independent political speech of private corporations, and protect all Americans against vague and overbroad statutes.”

While some conservatives support the intent of the bill, others have noted its overly broad nature and its curious carve-outs for certain companies operating in Florida. The carve-out covers “any information service, system, Internet search engine, or access software provider operated by a company that owns and operates a theme park or entertainment complex.” This would exempt Disney and Comcast/Universal from these new moderation and content rules, even as they operate in the mass media space.

Other noted conservatives, such as Henry Olsen of the Ethics and Public Policy Center and Andrew Walker of Southern Seminary, argue that this bill is needed in order to ensure access to all political speech, similar to the common carrier regulations on television, radio, and print media. Walker notes that in a world of competing visions of the good, certain rights can and should be curtailed in the pursuit of the common good of access to information, especially political speech. It should be mentioned that while this access to political speech is a public good, we also should call our public and civic leaders to higher standards of truthfulness and decorum given the role and responsibilities they have by nature of their position in society.

This debate, though, isn’t actually over access to all political speech in general but over particular speech that these companies deem to incite violence or to spread misinformation that negatively affects the common good of safety and truthfulness.

But there is a concerning track record of these companies labeling certain religious and social beliefs as inherently bigoted and hateful, particularly on issues surrounding transgenderism and human sexuality. As a people who claim an objective understanding of truth and human nature, we must be cautious not to label speech we merely disagree with as misinformation, a practice now common throughout our increasingly polarized and tribalistic society. Misinformation is not in the eye of the beholder, even if it has become a partisan tool.

The role of government in seeking the common good

As many noted public and political theologians have argued, the government does have a role in protecting the rights of citizens and the common good, but only up to a point. David VanDrunen argues in his recent book, Politics After Christendom, “every human community and institution must reckon with the degree of diversity it will embrace, or at least tolerate. No institution can stand completely open.” The question about the role of government in these debates is to what extent the government should be involved and what degree of toleration will be applied when disparate views of the common good and human dignity clash in the digital public square.

Does it actually serve the best interest of the public if a politician or user pushes misinformation to the extent that it leads to violence? Does the good of safety ever outweigh the good of free speech? Is free speech actually an instrumental good whose goal is to push back on the overreaching hand of government rather than on private entities with their own speech rights? Obviously governments are accountable to the public in ways that the technology industry is not, but are we comfortable with that level of power over private entities and speech residing with the government, especially if those in charge may change with the next election cycle?

These are complicated questions that are often layered in partisan politics and talking points that need to be addressed in a nuanced and careful manner, particularly by those in the conservative movement. Many of these exact questions have been debated for decades by those specializing in content moderation and digital governance, well before many of these flashpoint issues arose to public awareness. 

While I am unable to expand on each of these issues in this essay, it is important for Christians to understand the nuance of this debate and the potential ramifications of these decisions for the common good. One of the key areas of work to be done is building out a public theology for the digital age, which includes a policy-oriented advocacy effort with these influential companies rather than simply relying on the government to dictate and set the rules.

While the coverage of this Florida bill has primarily focused on access for politicians, it is much broader than that and will have far-reaching implications for the relationship between the government, the people, and the companies that provide these platforms for society. The bill is actually reminiscent, in parts, of the privacy laws implemented in the GDPR and CCPA, which give individual citizens the right to sue these companies for violating their “rights.” In the case of the GDPR and CCPA, it is the often ill-defined right to privacy grounded in the unfettered pursuit of expressive individualism; in the case of this Florida law, it is the unfettered pursuit of free speech. All rights, though, must be balanced in this broken world and oriented to the good, and the problem is that our society and the larger world have very different visions of the good.

This leads to very different approaches to addressing the rise of these platforms and their influence in the digital public square. Even among conservatives, there are radically different understandings of the role of government, free speech, and regulation. But we must keep in mind that while there are differences in approach, many of those involved in these debates share the same overall goals. Demonizing or outlandishly mocking friends will not push the conversation forward or achieve the goal of balancing these freedoms in the digital public square. The differences often lie in the manner of engagement rather than in the substance of the issues themselves.

Need for policy-oriented engagement

While this debate continues, two areas of involvement are crucial for Christians: we need more robust public engagement on these moderation policy issues and a way to rally together for common change. One element of this vision for the digital public square is significant investment in key institutions that are equipped to work with the policy and moderation teams at these companies, instead of simply opting for social media activism.

This means earning a seat at the table through long-term, nuanced, and thoughtful engagement on particular policy issues such as privacy, hate speech, violence, international governance, and more. Historically, this is exactly how the conservative movement has seen such progress on issues such as abortion, free speech, and religious freedom. These policy issues typically involve NGOs and think tanks devoted to governmental affairs. But what if these institutions took a similar approach to the technology industry, building out teams to organize engagement and develop resources that better inform these companies on faith perspectives and common good accommodations in a pluralistic society?

Instead of defaulting to a government that must step in to solve all of our problems, we need to seek policy-oriented solutions and common good accommodations if we are to see true and lasting change: policies that better reflect the diversity of thought on some of the most important issues of the day and champion free expression for all.

By / May 7

In this episode, Josh, Lindsay, and Brent discuss a big dip in U.S. fertility rates, Biden’s July 4th vaccination goal, COVID-19 in children, mask mandates on planes, Trump Facebook ban, Amy Bockerstette’s college title, and the Malian woman who gave birth to nine babies. Lindsay gives a rundown of this week’s ERLC content including Alex Ward with “Why reading classics can help us answer age-old questions: An interview with Karen Swallow Prior,” Jared Kennedy with “Conversations about gender should begin with humility: Helping parents navigate hard topics with their children,” and Rachel Lonas with “Why it’s important to value neurodiversity in the Church: And three ways you can help.” Also in this episode, the hosts are joined by Elizabeth Graham for a conversation about life and ministry. 

About Elizabeth

Elizabeth Graham serves as Vice President of Operations and Life Initiatives for the ERLC. She provides leadership, guidance and strategy for life and women’s initiatives and provides oversight to other strategic projects as needed. Additionally, she directs the leadership, management and operations for all ERLC events. Elizabeth is a graduate of the University of Tennessee and Southeastern Baptist Theological Seminary. She is married to Richmond, and they have a son and a daughter. You can connect with her on Twitter: @elizabethgraham 

ERLC Content

Culture

  1. U.S. fertility dips to its lowest rate since the 1970s
  2. Biden sets goal of fully vaccinating 160 million Americans by July 4
  3. Children Now Account For 22% Of New U.S. COVID Cases. Why Is That?
  4. Pfizer vaccine expected to be approved for children ages 12-15 by next week
  5. TSA Extends Mask Mandate Aboard Flights Through Summer As Travel Increases
  6. Trump Facebook Ban Upheld by Oversight Board
  7. Amy Bockerstette to Be 1st Athlete With Down Syndrome to Compete for Collegiate Title
  8. Malian woman gives birth to nine babies

Lunchroom

 Connect with us on Twitter

Sponsors

  • Brave by Faith: In this realistic yet positive book, renowned Bible teacher Alistair Begg examines the first seven chapters of Daniel to show us how to live bravely, confidently, and obediently in an increasingly secular society. | Find out more about this book at thegoodbook.com
  • Every person has dignity and potential. But did you know that nearly 1 in 3 American adults has a criminal record? To learn more and sign up for the virtual Second Chance month visit prisonfellowship.org/secondchances.

By / Apr 12

Last week was a particularly busy week for the technology industry at the nation’s highest court. First, the United States Supreme Court ruled in Google’s favor in a decadelong court battle with Oracle over the use of certain software code to build the Android operating system. Oracle claimed that Google’s use of the code violated federal copyright law. Then, the high court released its decision in Biden v. Knight First Amendment Institute at Columbia University. That case was ruled moot, and the lower court’s ruling was vacated. The case was originally titled Trump v. Knight; it was renamed with the inauguration of Joseph R. Biden, since it revolved around the question of the president’s ability to block the public’s access on a social media platform.

What was the case about?

The original lawsuit was filed back in July 2017 by the Knight First Amendment Institute and seven social media users against President Trump on the grounds that he had blocked these seven individuals on Twitter after they criticized him or his policies. Being blocked by the president meant that these users could no longer see or respond to his posts on the platform. As veteran court reporter Amy Howe wrote, “The plaintiffs alleged that blocking them on Twitter violated the First Amendment, and the district court agreed. The U.S. Court of Appeals for the 2nd Circuit upheld that ruling.” The lower court ruled that the president’s Twitter account was a public forum and that the government violated the rights of these individuals by blocking their access to it.

On Aug. 20, 2020, a petition for a writ of certiorari was filed, and the Supreme Court agreed to review the case, but this was also during an election year. In January, the Trump administration filed a brief indicating to “the justices that, although the 2nd Circuit’s decision was worthy of their review, the case would become moot once Joe Biden succeeded Trump as president on Jan. 20.” Amy Howe explains, “Trump had been sued as the president, rather than in his personal capacity, the administration explained, but Biden would not have any control over Trump’s Twitter account.” Then, after the attack on the United States Capitol over alleged election fraud, President Trump was permanently suspended from Twitter over the claim that he incited the violence (even though the administration said that this suspension could be overturned, so that fact should not have bearing on the case). All of these shifting circumstances ultimately led the court to grant the petition for a writ of certiorari, vacate the judgment, and remand the case back to the Second Circuit with instructions to dismiss it as moot.

What does this case have to do with online content moderation?

On April 5, Justice Clarence Thomas released a concurring opinion alongside the court’s ruling. Justice Thomas explained in detail the court’s deliberations and the reasoning behind the decision to grant the petition for a writ of certiorari. But he went on to connect this case to larger questions about the immense responsibility and control that certain technology companies have over civic discourse, given our public dependence on, and the massive size of, companies such as Facebook, Twitter, Amazon, and Google.

Justice Thomas writes, “Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.” He went on to state that the government might have a compelling interest to intervene in this new power dynamic by possibly limiting the right of a private company to exclude. Justice Thomas explained, “If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude.” He submitted two possible legal doctrines for consideration, designating social media as “common carriers” or as “public accommodations,” both of which are highly controversial in digital governance debates, especially among legal media scholars.

Justice Thomas argued that the “common carrier” designation has been applied to other industries with considerable market size, such as those in transportation and communication. These industries are given special privileges by the government but also face restrictions on their ability to exclude. “By giving these companies special privileges, governments place them into a category distinct from other companies and closer to some functions, like the postal service, that the State has traditionally undertaken.” This particular argument may overlook the difference between social media as simply a carrier of information and social media as a curator of the information posted by users.

The other designation, “public accommodation,” would apply regardless of the relative market size of the companies, given the ongoing scholarly debate about whether market power is a necessary aspect for a company to be considered a common carrier. Justice Thomas wrote that while these companies do not “carry” freight, passengers, or communications, they could nevertheless have their right to exclude curtailed given their public utility. “If the analogy between common carriers and digital platforms is correct, then an answer may arise for dissatisfied platform users who would appreciate not being blocked: laws that restrict the platform’s right to exclude.” While he acknowledges that technology companies do indeed have their own First Amendment rights, he nevertheless argues that these rights may need to be diminished in light of the influence this industry has over our public discourse. This is a complex situation, especially for conservatives who traditionally resist the government’s intrusion into the rights of individuals and corporations.

Overall, Justice Thomas explores each of these options, as well as their potential pitfalls, throughout the concurrence. He rightly points out that these changes would need to be enacted by various legislatures, though they might also fall under the prerogative of the courts depending on the contours of the cases brought forth. This opinion, while not holding any enforceable action, is significant because a sitting justice of the Supreme Court is making these types of arguments to rein in the power of the technology industry, an issue that both Democrats and Republicans have been pursuing, even if on different ideological grounds.

What does this mean?

Justice Thomas acknowledged the tenuous realities in the current public policy debates over the role that these digital platforms play in our public discourse in light of their immense size and influence, including their ability to moderate user content. He is correct in saying that applying old doctrines to the new challenges of digital platforms is an extremely complicated matter, whether it be on issues of free speech, questions of public accommodation, or the nature of religious expression online.

As legal expert and free speech attorney David French correctly states, “Millions of Americans are deeply concerned about the power and reach of America’s largest tech companies, but their concerns often diverge sharply depending on their partisan affiliation.” French goes on to say, “The two sides are increasingly united in wanting more government regulation. They’re deeply divided as to what those regulations should say.” French, as others have pointed out, is concerned about government intervention in these matters since it may jeopardize the countless First Amendment victories that have been forged in recent years.

While Christians may disagree about the best path forward in these particular debates, we all must acknowledge that we live in a time when religious speech is increasingly seen as at odds with acceptable public discourse and free expression is often hampered in the pursuit of secularism. We need more believers engaged in these discussions who understand that the technology industry must be a major element in a full-orbed public theology. These types of decisions are crucial for the health of our democracy and the future of religion in the digital public square.

Even with the immense complexity of these debates, one thing is abundantly clear: the dignity of our neighbor is at stake around the world. We must keep that truth central to this debate over digital governance, whether here in the United States or abroad under the repressive hand of authoritarian regimes. Though these issues may at times seem just to be about tweets, posts, and even the contours of particular content moderation policies, they must be seen as ways that human beings, created in God’s very image, are able to communicate, express themselves, and do life in an ever-increasing digital society.

By / Apr 8

As 2020 ended, many anticipated that the turning of the new year would bring with it a fresh dose of hope and a reprieve from the hardships that marked the previous year. And, in some ways, it has. COVID-19, at least in America, seems to be trending in a promising direction, vaccinations continue at a rapid pace, and life is slowly beginning to look more normal. But while one pandemic seems to be abating, another more insidious pathogen continues to intensify.

I’m speaking of our “outrage culture” and the anger that fuels it. Outrage culture, sadly, is a phenomenon that has enticed us far and wide, even within the church. And, based on Tim Kreider’s commentary, “enticed” is exactly the right word. He says, “Outrage is like a lot of other things that feel good but over time devour us from the inside out. And it’s even more insidious than most vices because we don’t even consciously acknowledge that it’s a pleasure” (emphasis added). Pete Ross calls our anger and outrage an “acceptable and addictive drug of society,” one that convinces us that we’re smart, we’re right, and that “we have the necessary ideas to fix everything. That we’re the ones that need to be in charge.” We apparently can’t help but participate in outrage culture because doing so feeds a Pharisaical self-righteousness that feels good. It coddles the pride that, unless God grants repentance, will result in disgrace and, ultimately, our destruction.

Proverbs and the way of wisdom

Sadly, among the Christian community, our outrage and self-righteous Pharisaism is often aimed toward one another. Dan Darling calls this “a kind of performative self-flagellation incentivized by a social media environment that rewards hot-takes, shaming, and appealing to tribes,” all of which spills out of a heart angered by the internet controversy of the day. And day after day, Christians, with unbefitting outrage, continue to “rhetorically sacrifice” their own brothers and sisters in the faith. 

If our anger and outrage—forms of self-righteous pleasure-seeking—are rooted in pride, then the book of Proverbs shows us a better way. Proverbs 11:2 says, “When pride comes, then comes disgrace, but with the humble is wisdom.” The way of outrage culture is the way of belligerence, the way lacking in self-control, the way of slander and self-righteousness; it is the way of pride. But the way of wisdom is the way of humility and charity, of compassion, of patience and long-suffering; it is the way of holiness.

But, the question remains, can Christians resist the enticements of outrage culture? From the Proverbs of Solomon to the book of James, the Bible answers this contemporary question with a resounding yes. By the power of the Spirit, humility and charity are the first two steps forward. 

  1. Humility

Rick Warren, in his best-selling book The Purpose Driven Life, said, “Humility is not thinking less of yourself; it is thinking of yourself less,” which is generally a fair statement. But, in the case of outrage culture, where the tendency is to lambaste our opponents because “we’re right and we need to be in charge,” thinking less of ourselves and the primacy of our expertise is an effective place to begin. Biblical humility, though, does not advocate for a self-deprecating view of oneself. Rather, it advocates for a right view of oneself, recognizing that we are creatures, recipients of God’s common grace who are offered God’s saving grace found in Christ, just like those we’re raging against.

Further, because we know that “pride goes before destruction,” as Solomon warns, we can be sure that if we practice the ethic of the outrage culture, with its furious fits and spats, any authority that we possess or hope to possess will ultimately be taken from us. In so doing, we will have proven ourselves unqualified. There is no attribute or behavior more unbefitting of the kingdom of God than the sin of pride.

Jesus tells us in his Sermon on the Mount, “Blessed are the humble, for they will inherit the earth” (Matt. 5:5). Unimaginable honor and authority await those who have humbled themselves before God. We will not show ourselves capable of entering God’s kingdom or exercising the rule he promises to entrust us with until humility becomes our fundamental orientation toward our Father in Heaven, our brothers and sisters, and our neighbors, whether online or in-person.

  2. Charity

Scrolling down a Twitter feed or a Facebook timeline, it’s often hard to imagine that Christians take Jesus’ words in the Sermon on the Mount seriously. Though he was clear on the mountainside that day that he expects his followers to love not only our neighbors but our enemies, this has proven to be an elusive standard. Even the most intuitive act of charity, “loving those who love you,” often seems too ambitious for the people of God in our online interactions.

But Jesus and, later, the apostle Paul were not offering quips or suggestions to be implemented at our discretion. They were showing us the way of righteousness, the narrow way of the kingdom, the way of the children of God. “Love your enemies,” Jesus says, “For he (God) is gracious to the ungrateful and evil” (Luke 6:35). “Charity is kind,” says Paul, “it doth not behave itself unseemly . . . is not easily provoked, thinketh no evil . . . beareth all things, believeth all things, hopeth all things, endureth all things” (1 Cor. 13:4, 5, 7, KJV). “This is the way,” God is telling us. “Walk in it” (Isa. 30:21).

God the Father, through God the Son, by the power of God the Holy Spirit has commanded and empowered us to live our lives with a charity that is other-worldly and that we learn from him. Thus, as we seek to resist the lure of outrage culture and embody the way of Christ, let us take seriously these words of Andrew Murray: “Let our temper be under the rule of the love of Jesus. He alone can make us gentle and patient. Let the vow that not an unkind word about others will ever be heard from our lips (or read in our writing) be laid trustingly at His feet. Let the gentleness that refuses to take offense, that is always ready to excuse, to think and hope the best, mark our dealings with all.”

Outrage toward indwelling sin

Not all outrage is off-limits for the children of God, though. A Christian ought to be appalled at the lingering depravity and brokenness of the world; it is our native response. In fact, to pray “thy kingdom come,” as Jesus taught us, is itself a statement of outrage against the world’s fallenness. But woe to us if we believe it right to do violence against God’s image-bearers with uncharitable and outrageous words.

There is a place where the full force of our outrage can be levied: toward indwelling sin. John Owen famously said, “Be killing sin or it will be killing you.” Rather than adopting the ethic of outrage culture, spewing rage at one another and taking pleasure in it, we would do well to redirect our attention inward, toward the indwelling sin “waiting to destroy everything we love,” as Matt Chandler has said. The apostle Paul says, “if you live according to the flesh, you are going to die. But if by the Spirit you put to death the deeds of the body, you will live” (Rom. 8:13). Life and death are before us. Shall we yield to the pride of our flesh and join the carnal chorus of outrage culture, a culture that loves its sin and hates its neighbor? Or shall we aim our outrage inward and, by the Spirit, put to death these self-righteous deeds of the body?

Brothers and sisters, may we be a people who embody the ethic of God’s kingdom, not that of outrage culture. May we be a people who keep the commands of Jesus, all of them. And, humbly, may we begin by loving our neighbors and hating our sin.  

By / Feb 3

Content moderation is difficult work for any social media company. Every day, millions of posts and messages are shared on these platforms. Most are benign in nature, but as with anything, there will be abusive, hateful, and sometimes violent content shared or promoted by certain individuals and organizations. Most social media companies expect their users to engage on these platforms within a certain set of rules or community standards. These content policies are often decided upon with careful and studied reflection on the gravity of moderation in order to provide a safe and appropriate place for users. It is an admittedly difficult and thorny ethical issue, though, because social media has become such a massive and integral part of our diverse society, not to mention the hyper-politicization of such issues.

Over the years, content moderation practices have come under intense scrutiny because of the breadth of the policies themselves as well as their misapplication—or, more precisely, the inconsistent application—of these rules for online conduct. Just last week, The Daily Citizen—the news arm of Focus on the Family—was reportedly locked out of its account due to a post about President Biden’s nomination of Dr. Rachel Levine to serve as assistant secretary of health for the U.S. Department of Health and Human Services (HHS). The Daily Citizen’s tweet was flagged by Twitter for violating its policy on hateful conduct, which includes but is not limited to “targeted misgendering or deadnaming of transgender individuals.” This broad policy seems to cover using the incorrect pronouns for individuals, using the former name of someone after they transition and identify by another name, or—in the case of The Daily Citizen’s tweet—stating the biological and scientific reality of someone’s sex even if they choose to identify as the opposite sex or some derivation thereof.

After The Daily Citizen appealed the decision, the request was denied by Twitter’s content moderation team, and the organization was left with a choice: delete the violating tweet or remain locked out of its account. It should be noted that the account was not suspended or banned outright, as has been the case in other instances of policy violations, such as former President Trump’s recent suspension. The Daily Citizen decided to keep the tweet up and has been unable to use its account since.

The purpose of content moderation

The implementation of content moderation practices is actually encouraged by Section 230 of the 1996 Communications Decency Act, a bipartisan piece of legislation designed to promote the growth of the fledgling internet in the mid-1990s. Section 230 gives internet companies a liability shield for online user content—meaning users, not the platforms themselves, are responsible for the content of posts—in exchange for encouraging “good faith” measures to remove objectionable content in order to make the internet a safer place for our society.

These “good faith” measures are designed to create safer online environments for all users. The debate over content moderation often centers, though, on exactly what these measures are to entail, not on whether such measures should exist in the first place. Without any sort of content moderation, social media platforms would inevitably be used and abused to promote violence and truly hateful conduct, and they would become a breeding ground for misinformation and other dangerous content. Simply put, without moderation these platforms would not be a place anyone would feel comfortable engaging each day, nor would it be safe to engage in the first place. In general, content moderation policies serve the common good of all users, but the details and breadth of specific policies should at times be called into question as to their effectiveness or their dangerous consequences for online dialogue.

Free speech

In these debates over content moderation, questions about the role of free speech abound. The First Amendment guarantees the freedom of speech for all people, but it only protects citizens from interference by the government itself. The First Amendment’s free speech protection does not apply to the actions of a third party, such as a private social media company governing certain speech or implementing various content moderation policies. A helpful way to think about free speech in this instance is how Christians have rallied around the ability of other third parties to act in accordance with their deeply held beliefs and to use their own free speech rights to decline to participate in a same-sex wedding, as in the cases of Barronelle Stutzman and Jack Phillips. The government does not have the right, nor the authority, to force a third party to violate their deeply held beliefs absent a clear and compelling public interest that cannot be accomplished in a less invasive manner.

Twitter is within its rights to create content moderation policies and govern speech on its platform as it sees fit, but these policies should take into account the true diversity of thought in our society and not denigrate certain types of religious speech as inherently hateful or dangerous. Indeed, content moderation policies are encouraged by provisions in Section 230. But that does not in any way mean those policies are beyond scrutiny by the public, who have a choice about whether to use a particular platform and the freedom to criticize policies they deem deficient or shortsighted.

Dangerous and misguided policies

Even though Twitter, as well as other companies like Facebook, cannot technically violate one’s free speech rights, they are accountable for the policies they craft as well as the deleterious outworkings of misguided and at times poorly crafted policies. These overly broad policies often limit the free exchange of ideas online and—in the case of The Daily Citizen’s post removal—actually censor free expression and cut back on the robust public dialogue that is vital to a functioning democracy and society.

Twitter’s hateful conduct policy begins by stating, “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” This broad definition of hateful conduct is then expanded to include nearly every form of speech that one may deem offensive, objectionable, or even simply disagreeable.

To Twitter’s credit, the company does seek “to give everyone the power to create and share ideas and information, and to express their opinions and beliefs without barriers.” Its policy goes on to say that “Free expression is a human right – we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” But this lofty goal of free expression is stifled, and in many ways undermined, by promoting some speech at the expense of other speech deemed unworthy of public discourse, even if that speech aligns with scientific realities taught and affirmed by millions of people throughout the world, including but not limited to people of faith.

Civil disagreements over the biological and scientific differences between a man and a woman simply cannot—especially for the sake of robust public discourse—be equated with hate speech. Any attempt to create and enforce these types of broadly defined policies continues to break down the public’s trust in these companies, which bear immense responsibility for providing avenues of public discourse and free expression given the ubiquity of these platforms in our society. In a time when there is already a considerable amount of distrust in institutions, governments, and even social media companies themselves, ill-defined policies that seem to equate historic and orthodox beliefs on marriage and sexuality with the dehumanizing nature of real hate speech and violent conduct only widen the deficit of trust and increase skepticism over the true intention behind these policies.

Christian engagement in content moderation

When Christians engage in these important debates over content moderation and online speech, we must do so with a distinct view of human dignity in mind. It is far too easy in a world of memes, caricatures, and 280-character posts to dehumanize those with whom we disagree or to be disagreeable in order to gain a following. We must champion the dignity of all people because we know that all people are created in the image of God and thus are worthy of honor and respect. Part of championing this dignity is also speaking clearly about the dehumanizing effects of ideologies like transgenderism that tend to define someone’s identity solely on the basis of their sexual preferences or desires. We should advocate for better and more clearly defined policies because these policies affect our neighbors and their ability to connect with others.

When we engage on these important matters of social media and content moderation, we must also do so informed about the complexity of the situations at hand, with clarity, charity, and, most of all, respect, even for those with whom we deeply disagree. The Bible reminds us that “we do not wrestle against flesh and blood, but against the rulers, against the authorities, against the cosmic powers over this present darkness, against the spiritual forces of evil in the heavenly places” (Eph. 6:12). Spiteful, derogatory, arrogant, and dehumanizing remarks about fellow image-bearers are unbecoming of the people of God, and this is not limited to issues of sexuality or transgenderism. These types of statements are becoming all too common in our online social rhetoric, even among professing Christians. It is past time for each of us to heed the words of the letter of James and seek to tame our tongues lest they overcome us with their deadly poison (James 3:8) and lead us down the same path as those with whom we disagree over fundamental matters of sexuality and even content moderation.

When we engage in these important issues and seek to frame debates over online speech, we must also do so with an understanding of the immense weight and pressure that many in content moderation face each day. While we may think that the tweet or post that was flagged is perfectly appropriate, we must remember that the initial decisions on moderation are often made with the help of algorithmic detection. These AI systems are used to cut down on the amount of violating content, but they do make mistakes. Upon appeal, these decisions are then handed over to human reviewers, who may have only an extremely short window to make a call given the sheer amount of content to review. This does not mean that these decisions are always correct or even that the policies driving them are helpful or clearly defined. The question isn’t whether discrimination or bias exists in these systems, but where the lines are drawn, by whom, what worldview drove their creation, and whether decisions can be appealed on the merits.
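To make that two-stage flow concrete, here is a minimal sketch of how such a pipeline might be structured. It is purely illustrative: the toy classifier, threshold, and function names are all assumptions for the sake of the example, not any platform’s actual system.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    removed: bool = False

def classifier_score(post: Post) -> float:
    """Stand-in for an ML model returning a policy-violation score in [0, 1]."""
    flagged_terms = {"slur", "threat"}  # placeholder vocabulary
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.6 * hits)

AUTO_REMOVE_THRESHOLD = 0.5          # assumed tuning parameter
appeal_queue: deque[Post] = deque()  # appeals wait here for scarce human review time

def moderate(post: Post) -> None:
    # Stage 1: automated detection removes content scoring above the threshold.
    if classifier_score(post) >= AUTO_REMOVE_THRESHOLD:
        post.removed = True

def appeal(post: Post) -> None:
    # Stage 2: a removed post enters the human review queue on appeal.
    if post.removed:
        appeal_queue.append(post)

def human_review(uphold: bool) -> None:
    # A reviewer, often under severe time pressure, upholds or reverses the
    # automated decision; mistakes at stage 1 are only caught here.
    if appeal_queue:
        post = appeal_queue.popleft()
        post.removed = uphold
```

Every judgment call in this sketch (the placeholder vocabulary, the threshold, the single reviewer verdict) is exactly where the questions raised above enter the system: who draws the lines, and by what worldview.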

Christians must also realize that in a rapidly shifting and secularizing culture, we will naturally be at odds with the mores of the day, but that should not deter us from speaking truth, grounded in love and kindness, as we engage in the heated debates over online speech, social media, and content moderation. Our hope and comfort, though, do not come from better policies or their consistent application across these platforms. Even if it feels as though the ground is shifting right beneath us amid vapid calls to “get on the right side of history,” we can know and trust that biblical truth and a biblical anthropology are not about power or control but about pursuing the good of our neighbor in accordance with the truth of the One who created us and who ultimately rescues each of us from our own proclivities toward sin and rebellion.

By / Jan 18

If you’ve used social media in the past year (and over 75% of Americans have), you’re probably in an online bubble without even realizing it. Thanks mostly to COVID-19, we’re living in a world where most of our connections happen through screens. And that’s not a good thing. 

Social media use among American adults has been steadily rising for years, but as stay-at-home orders rolled out across the country last spring, it exploded. Platforms like Facebook saw up to 27% more daily users during the first few months of the pandemic. Zoom went from 2 million users to 6 million almost overnight. And local apps like Nextdoor saw their user bases grow by almost 80%. 

As our work, school, and social life all moved online, we became even more disconnected from the world outside our screens. This rapid move to online communities was at least partially responsible for drastic increases in mental health issues. 

Approximately one in three Americans reported suffering from anxiety or depression in 2020, up from one in 12 in 2019. More concerning still, the CDC reported in 2020 that 25% of young adults had seriously considered suicide in the preceding month. 

While these numbers are staggering, we’ve overlooked the way our digital isolation has caused many people to lose their grip on reality. Conspiracy theories have exploded online. Both conservatives and liberals have become convinced that the success of the other side would mean the end of the republic. And the disconnect between the laptop class (those who see the world from their comfortable work-from-home lives) and the working class (the waiters, cashiers, and blue-collar laborers directly affected by restrictions shutting down their places of work) has grown larger than ever. 

Why is this happening? 

Simply put, we’ve lost the real-world human connections that keep us grounded. We’ve been forced into online bubbles on platforms designed to group us with people like us. 

When you open Facebook, Twitter, Instagram, TikTok, or virtually any other social media site, you’re seeing posts selected just for you by a recommendation algorithm. These posts are chosen to connect you with people who have the same interests, hold similar beliefs, and think the same way you do. Why? The algorithm is designed to increase engagement and keep you from closing the app, and logically, if you see things you like and are interested in, you’ll stay on the site longer. 
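
In rough terms, the ranking step works something like the toy sketch below. The interest sets and scoring rule are invented for illustration; production systems use learned models over thousands of signals, but the incentive is the same: posts most similar to what you already engage with float to the top.

```python
# A toy illustration of engagement-driven ranking; not any platform's real code.
user_interests = {"faith", "politics", "tech"}  # hypothetically inferred from past clicks

candidate_posts = [
    {"id": 1, "topics": {"tech", "politics"}},
    {"id": 2, "topics": {"gardening", "cooking"}},
    {"id": 3, "topics": {"faith", "politics"}},
]

def predicted_engagement(post):
    # Score by overlap with the user's interests; real systems use learned models.
    return len(post["topics"] & user_interests)

# Posts most like what the user already engages with rise to the top of the feed.
feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(post["id"], predicted_engagement(post))
```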

If the only things you read are articles tailored by the algorithm to fit your interests and the only people you talk to are those who post things that align with your thoughts, it’s incredibly easy to get sucked into a version of reality that doesn’t exist in the real world. 

But social media algorithms aren’t new. As the hit documentary The Social Dilemma shows, they’ve been in place for years. So what’s changed? 

In normal times, most of us have regular interactions with people who aren’t like us. We talk to friends or neighbors who are on opposite ends of the political spectrum. We visit with relatives who don’t share our faith or belief systems. We interact with co-workers who come from different backgrounds and see the world differently. Our real-world connections provide an unfiltered dose of reality that keeps us grounded. Last year, we lost that, and our online bubbles became more isolated than ever. 

Escaping our online bubbles

While our world may soon return to normal, our tendencies to withdraw into sheltered bubbles won’t disappear when we get the COVID-19 vaccine. So, here are three ways to escape our online bubbles in 2021:

1. Limit social media use carefully 

This might seem like obvious advice, but it’s not as easy as it sounds. Social media apps are intentionally designed to keep us scrolling for as long as possible. Personally, I’ve found it helpful to limit notifications and block out periods of time when I don’t check social media. The only way to win the battle against mindless social media use is to be intentional about disconnecting. 

2. Get your news and information from multiple sources

It’s tempting to believe everything you read on the internet, but so much of what we see in our feeds just isn’t true. Take the time to research things before believing them, and especially before sharing them with others. Often, a quick search will turn up a fact-check or the original source. 

It’s also helpful to seek out multiple sources to find the truth about issues. Don’t get all your news from one media outlet. Read and follow people who think differently, but who are thoughtful and sincere in their arguments.

3. Be intentional about making real-world connections 

Making a point to connect with people outside of social media, especially those who aren’t like us, is so important. Not every conversation has to be a political discussion or a deep worldview debate. In fact, simple “small talk” can go a long way. Even in a time of social distancing, it’s possible to make these real-world connections. Video chats, texts, and phone calls are all far better than a Facebook message or a Twitter DM. 

These connections don’t happen on their own. Unlike social media, where the algorithm serves up conversations, real-world connections require effort and intentionality. Pick up the phone and call an old friend. Text someone to see how they’re doing. Surround yourself (even virtually) with people you love and trust. They will keep you grounded, often without you even realizing it.

It might seem like our isolation is out of our control, but we can be purposeful about escaping our online bubbles. We don’t need the world to go back to normal to change the way we interact with others. In an online world designed to pull us apart, let’s choose to break out of our bubbles. In 2021, let’s scroll less and talk more. We might “like” fewer posts, but we’ll be free to love more people in the real world.

By / Jan 4

2020 was a year that challenged not only the fortitude of our families but also the fabric of our nation. Last year we saw many complex ethical issues arise from our use of technology, both in society and as individuals. From debates over the proper use of social media to the adoption of invasive technologies like facial recognition that pushed the bounds of personal privacy, many of the ethical challenges exposed in 2020 will flow into 2021 as our society debates how to respond to these developments and how to pursue the common good together as a very diverse community.

Here are three areas of ethical concern with technology that we will need to watch for if we hope to navigate 2021 well.

Content moderation and Section 230

Some of the most talked-about ethical issues in technology, even as 2021 is just getting started, are the debates over online content moderation, the role of social media in our public discourse, and the merits of Section 230 of the 1996 Communications Decency Act. If you are unfamiliar with Section 230 and the debates surrounding the statute, it essentially functions as legal protection for online platforms and companies so that they are not liable for the information posted to their platforms by third-party users.

In exchange for these protections, internet companies and platforms are to enact “good faith” protections and are encouraged to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” But what exactly do “good faith” and “otherwise objectionable” mean in the context of today’s raging debates over the role of social media? 

This question is at the heart of the debate over Section 230’s usefulness today. Some argue that platforms like Facebook, Google, Twitter, and others must do more to combat the spread of misinformation, disinformation, and fake news online. Even as platforms have begun labeling misleading content and removing posts that violate their community policies, many argue that these companies simply aren’t doing enough.

But on the other side of the aisle, some argue that these Section 230 protections are being used as cover to censor certain content online, that they are often applied in a partisan and inconsistent manner (especially on the international stage), and that they may amount to violations of users’ free speech. They argue that Section 230 must be repealed or modified substantially in order to combat bias against certain types of political, social, or religious views.

As technology policy expert and ERLC Research Fellow Klon Kitchen aptly states, “All of these perspectives are enabled by vagaries surrounding the text of the law, the intent behind it, and the relative values and risks posed by large Internet platforms.” Regardless of where one lands in this debate, we will likely see inflamed conversations over this statute and the extent to which it should be maintained, if at all.

Facial recognition surveillance

In what may feel like a Hollywood thriller plot, facial recognition surveillance technology is being deployed around our nation and the world, often without us realizing it or even understanding how these tools work. Last January, Kashmir Hill of the New York Times broke a story about a little-known facial recognition startup called Clearview AI that set off a firestorm over the use of these tools in surveillance, policing, and security. Thousands of police units across the country were testing or implementing facial recognition in hopes of better identifying suspects and keeping our communities safer.

But for all of their potential benefits, these tools have a flip side with extremely complex ethical considerations and dangers, especially when used in volatile police situations. Many of these algorithmic identification tools have been shown to misidentify people with darker skin more often than others because the systems were trained on unrepresentative data or had inherent weaknesses in their design or data sets.
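
One way researchers surface this kind of disparity is to compare a system’s error rates across demographic groups, as large-scale audits such as NIST’s face recognition vendor tests have done. The sketch below uses invented evaluation records purely to illustrate the computation.

```python
from collections import defaultdict

# Invented evaluation records: (demographic group, whether the system's match was correct).
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A large gap between groups signals the kind of bias described above.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: misidentification rate {rate:.0%}")
```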

Throughout 2020, municipalities and state governments banned outright or substantially limited the use of facial recognition in their communities over its potential misuses as well as the racial divisions in our nation. The tools were thought to be too powerful, too heavily relied upon (which could lead to false arrests or worse), or too invasive of the private lives of citizens. In 2021, we will likely see this trend of legislation on facial recognition systems continue, as well as increased pressure on the federal government to weigh in on how these tools can and should be used, especially in policing and government.

Outside of policing, there is likely to be substantial debate over how these tools are used in public areas and businesses as our society begins to open back up after the COVID-19 vaccines are more widely available. The potential for these tools to be used in identification, health screening, and more will lead to renewed debate over the ethical bounds at stake and the potential for real-life harm to those in our communities.

Right to privacy?

Outside of the growing concerns with surveillance technologies like facial recognition, there is considerable debate about the nature and extent of digital privacy in our technological society. Last year, the regulations implementing the California Consumer Privacy Act (CCPA) went into effect, and we also saw the continued influence of the European Union’s General Data Protection Regulation (GDPR) throughout the world. These pieces of legislation have challenged how many people think about the nature of privacy and have raised a number of ethical concerns regarding what is known about us online, who knows it, how it is used, and what control we have over that data. Nearly every device and technology today captures some level of data on its users in order to provide a personalized or curated experience, but this data capture has recently come under scrutiny across the political spectrum.

Today, some are asking whether personal privacy is simply an outdated or unneeded concept or whether we as citizens have an actual right to privacy. If we have a right to privacy, from where is that right derived, and how does it align with our other rights to life and liberty? Are we to pursue moral autonomy, or is privacy actually grounded in human dignity? Many questions remain about how we should view privacy as a society and to what extent we should expect it in today’s digital world. As COVID-19 challenged many of our expectations concerning privacy, there will likely be a renewed focus on the role of technology in our lives and the extent to which the government has a role in these debates.

It is far too easy to take a myopic view of technology and the ethical issues surrounding its use in our lives. Technology is not a subset of issues that only technologists and policy makers should engage. These tools undergird nearly every area of our lives in the 21st century, and Christians, of all people, should contribute to the ongoing dialogue over these important issues because of our understanding of human dignity grounded in the imago Dei (Gen. 1:26-28).

Thankfully, 2020 brought some of these issues to the forefront of our public consciousness. While 2021 will likely bring a plethora of issues to engage, we should address the pressing ethical challenges that technology poses so that we can present a worldview capable of meeting these monumental challenges in our daily lives.

By / Dec 14

In recent months, a new social media platform has gained popularity in light of controversies over content moderation and fact-checking on traditional social media sites like Twitter and Facebook. Parler was launched in August 2018 by John Matze, Jared Thomson, and Rebekah Mercer. While it still has a smaller user base than most social platforms, at just over 2.8 million people, the app saw a surge in downloads following the November 2020 presidential election and has become extremely popular in certain circles of our society. It became the #1 downloaded application on Apple and Google devices soon after the election, with over 4 million downloads in just the first two weeks of November, according to tracking by Sensor Tower.

Here is what you should know about this social media application and why it matters in our public discourse.

What is Parler?

Parler, named after the French verb meaning “to speak,” is described as a “free speech” alternative to traditional social media sites like Twitter and Facebook. The company’s website describes the platform as a way to “speak freely and express yourself openly, without fear of being ‘deplatformed’ for your views.” Parler intentionally positions itself as the “world’s town square,” and CEO John Matze said of the app, “If you can say it on the street of New York, you can say it on Parler.”

Parler is a microblogging social service, very similar to Twitter, where users are encouraged to share articles, thoughts, videos, and more. The platform states that “people are entitled to security, privacy, and freedom of expression.” This emphasis on privacy is seen in Parler’s promises to keep your data confidential and not sell it to third-party services, a common complaint about other platforms and their ad-revenue-based business models. Currently, Parler does not have advertisers on the platform, though it plans to allow advertisers to target influencers rather than regular users.

Posts on the platform are called “parleys,” and the feed is broken into two sections: parleys and affiliate content, the latter functioning like a news feed of content providers for the platform. To share content from someone else, a user can “echo” a post or piece of content.

The platform also has a “Parler citizen verification,” whereby users can be verified by the service in order to cut down on fake accounts and accounts run by bots. Users who submit their photo ID and a selfie are eligible for verification. Once verified, users see a red badge on their avatar indicating that they are a Parler citizen. Parler also has a “verified influencer” status for those with large followings who might be easily impersonated, very similar to the “blue check” icon on Twitter.

Does Parler censor or moderate content?

The company claims that it does not censor speech or content, yet it does have certain community standards, much like other platforms, even if those standards are intentionally set low. The community standards are broken into two principles: 

  1. Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts.
  2. Posting spam and using bots are nuisances and are not conducive to productive and polite discourse.

Beyond these two principles, Parler provides a more detailed account of the types of actions that fall under them. The platform is intentionally designed to give users tools to deal with spam, harassment, or objectionable content, including “the ability to mute or block other members, or to mute or block all comments containing terms of the member’s choice.”
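
That last feature, term-based muting, is easy to picture in code. The sketch below is a rough illustration of the idea rather than Parler’s actual implementation; the muted terms, usernames, and comments are invented examples.

```python
# A rough sketch of user-side filtering: hide comments from blocked users
# or containing any term the member has chosen to mute.
muted_terms = {"crypto giveaway", "hot singles"}  # hypothetical user choices
blocked_users = {"spam_bot_42"}

def visible(comment):
    """Return True if a comment should appear in the member's feed."""
    if comment["author"] in blocked_users:
        return False
    text = comment["text"].lower()
    return not any(term in text for term in muted_terms)

comments = [
    {"author": "spam_bot_42", "text": "Win a crypto giveaway!"},
    {"author": "alice", "text": "Interesting article, thanks."},
]
print([c["author"] for c in comments if visible(c)])  # -> ['alice']
```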

Overall, Parler is designed to be an alternative platform for those who do not agree with the community standards and policies of other social platforms. The company states that “while the First Amendment does not apply to private companies such as Parler, our mission is to create a social platform in the spirit of the First Amendment.” This is an important point in the broader debate over content moderation, because, as the company itself acknowledges, the First Amendment does not apply to private companies; it was written to govern the relationship between individuals and the state. 

Why is Parler controversial?

As the platform has gained prominence in certain segments of American life, Parler has expanded its user base in large part as a reaction to the content moderation policies on other platforms. Because it has promised to allow and highlight content that other services deem misinformation, contested claims, and at times hate speech, Parler has been characterized by what it allows its users to post without fear of removal or moderation.

By relying on users to moderate or curate their own feeds, Parler seeks to absolve itself of any responsibility for what is posted on its platform. The application has also become incredibly partisan, with a large number of users joining after the 2020 presidential election amidst growing distrust in the ways that other social media platforms label controversial content, misinformation, and fake news.

Currently, Parler has a large number of users from one side of the political spectrum, which can at times lead to a siloing effect where a user only sees one side of an argument. This was one of the issues of traditional social media that Parler set out to overcome with its lax moderation policies in the first place.

Is it a safe platform?

Parler states that any user under 18 must have parental permission to access the application, and all users under 13 are banned, but the service does not currently have an age verification system. Users can also change their account settings to keep “sensitive” or “Not Safe for Work” content from automatically showing in their feeds. The Washington Post also reports that Parler does not currently have a robust system for detecting child pornography before it is viewed or potentially flagged and reported by users. A company spokesman has said, “If somebody does something illegal, we’re relying on the reporting system. We’re not hunting.”

Given its lack of robust content moderation policies, Parler has drawn a considerable number of users from Twitter and other platforms who claim that their views were censored or their accounts banned. Many conservative elected officials and news organizations have joined the platform, which hopes to attain a critical mass of users large enough to sustain it moving forward. Parler currently lacks the number of brands and companies that other platforms have, a presence that can be needed for a platform to flourish as an information source and connectivity tool for users.

Parler initially banned pornography, but in recent months it changed its content moderation policies to allow such graphic content, aligning it more closely with Twitter’s policy. Parler’s approach to moderation can be seen in recent comments by COO Jeffrey Wernick to the Post in response to allegations of the proliferation of pornography on the site. Wernick responded that he had little knowledge of that type of content on the platform, adding, “I don’t look for that content, so why should I know it exists?” He later added that he would look into the issue.

Since these policy shifts, Parler has struggled with the proliferation of pornography and spam, which should come as no surprise, since the pornography industry has exploited new technologies from the earliest days of the internet. Parler states that it allows anything on its platform that the First Amendment allows, and the United States Supreme Court has held that pornography, unless it is legally obscene, is constitutionally protected speech.

It should be noted that Facebook, Instagram, and YouTube ban all pornographic imagery and videos from their platforms. Facebook and Instagram use automated systems to scan photos as they are posted and also rely on a robust reporting system that lets users flag content that may violate the companies’ community standards. While Twitter’s policies allow pornography, it employs automated systems to cut down on rapid posting and other spam-related uploads, as well as human moderators to curb abuse from users and bots.
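
Automated photo screening of this kind is commonly built on image hashing: each upload is hashed and compared against a database of known prohibited images (Microsoft’s PhotoDNA is a well-known example of the approach). The toy “average hash” and invented hash database below illustrate the idea only; production systems use far more robust perceptual hashing.

```python
# A simplified illustration of hash-based image screening; real platforms use
# robust perceptual hashes and vetted databases, not this toy "average hash".
from PIL import Image  # requires the Pillow library

def average_hash(path, size=8):
    """Downscale to an 8x8 grayscale image, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Hypothetical database of hashes of known prohibited images.
KNOWN_BAD_HASHES = {0x0F0F0F0F0F0F0F0F}

def should_block(path):
    """Screen an upload before it is published, e.g., should_block("upload.jpg")."""
    return average_hash(path) in KNOWN_BAD_HASHES
```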

Should social media companies be able to censor speech and enforce content moderation policies on users?

This question is at the heart of the debate over free speech and social media, especially as it centers on Section 230 of the Communications Decency Act, which is part of the Telecommunications Act of 1996. Section 230 has been called the law that gave us the modern internet. It allowed for a more open and free market of ideas and for the creation of user-generated content sites.

As the ERLC wrote in 2019, many social conservatives, worried about the spread of pornography, lobbied Congress to pass the Communications Decency Act, which penalized the online transmission of indecent content and protected companies from being sued for removing such offensive content. Section 230 was written with the intention of encouraging internet companies to develop content moderation standards and protecting them against liability for removing content, in order to foster safer environments online, especially for minors. This liability protection led to the development of community standards and ways to validate posted information without the company being liable for user-generated content.

Controversy over the limits of Section 230 and ways to update the law has been center stage in American public life for the last few years, especially as the Trump administration issued an executive order on the prevention of online censorship. Both sides of the political aisle are debating whether the statute should simply be updated or removed completely.