By / Feb 26

On Sunday afternoon, Ryan T. Anderson, conservative scholar and president of the Ethics and Public Policy Center, received an online message from a would-be reader informing him that his book When Harry Became Sally: Responding to the Transgender Moment was no longer available for purchase on Amazon’s website. The 2018 release from Encounter Books had been pulled by Amazon, without any prior notification to the author or publisher, for violating Amazon’s offensive content policy (though the company would not clarify the reason for the move for several days). By Wednesday morning, after considerable public outcry, Amazon released a statement about the book’s removal, saying it reserved the right not to sell certain products that violate its content guidelines: “All retailers make decisions about what selection they choose to offer and we do not take selection decisions lightly.”

When Amazon first removed the book, search results recommended other works on transgenderism from a very different ideological perspective, including one written specifically as a rebuttal to Anderson’s 2018 book. As of this writing, the book’s page is a “dead link,” or 404 page, on Amazon’s website, and the book has yet to be relisted. In the book, Anderson seeks to answer many of the questions raised by the transgender movement, offering a scientific, philosophical, and ethical look at how transgenderism seeks to rewrite human nature and reject biological realities.

In an essay about Amazon’s action to remove the book, Anderson noted that the book was praised by “the former psychiatrist-in-chief at Johns Hopkins Hospital, a longtime psychology professor at NYU, a professor of medical ethics at Columbia Medical School, a professor of psychological and brain sciences at Boston University, a professor of neurobiology at the University of Utah, a distinguished professor at Harvard Law School, an eminent legal philosopher at Oxford, and a professor of jurisprudence at Princeton.”

While many questions remain unanswered about why this book was removed and why the decision was made three years after its initial publication (and multiple reprints), one thing remains abundantly clear: a private company that sells nearly three out of every four books is using its outsized influence to shift the public conversation on a critical issue. There is little doubt that Amazon’s decision to silently remove the book from its digital shelves was deliberate. In the short term, this move will only help Anderson’s work on transgenderism gain a wider audience. But in the long term, it will have a chilling effect on the free exchange of ideas in our society, and it is likely to silence voices that dissent from the progressive agenda of the sexual revolution.

Conflicting content guidelines

Amazon, like many technology companies, including popular social media platforms, has a set of content guidelines governing what it will allow on its platform. The guidelines begin: “As a bookseller, we believe that providing access to the written word is important, including content that may be considered objectionable.” This is a laudable commitment for a book retailer, especially for a company that began in 1994 with the goal of selling books online across the nation. Broad access to the written word allows for the free exchange of ideas and ultimately strengthens the social fabric of our nation as we openly debate important issues and engage ideas contrary to our own, even those we find controversial or disagreeable.

But further down in its content guidelines, Amazon clearly walks this statement back. Apparently, “content that may be considered objectionable” does not extend to certain types of objectionable content. Amazon goes on to state, “We don’t sell certain content including content that we determine is hate speech, promotes the abuse or sexual exploitation of children, contains pornography, glorifies rape or pedophilia, advocates terrorism, or other material we deem inappropriate or offensive.” On balance, most of these exceptions appear reasonable and beneficial to society as a whole. However, “other material we deem inappropriate or offensive” is a vague and expansive catchall that completely undermines the earlier commitment to tolerating opposing viewpoints.

This exception purportedly gives Amazon license to remove any number of items from the marketplace, including three-year-old, high-selling titles that present a viewpoint contrary to the reigning secular opinions about human sexuality. Anderson’s book is now completely unavailable on Amazon, even to those who might want to engage the work in order to debunk his arguments or present an alternative viewpoint consistent with the tenets of the sexual revolution. All of this from a company that itself profited from sales of the work for over three years and still carries other “intolerable” works that denigrate entire groups of people, including people of faith, for their views of human sexuality and human flourishing.

A better vision for the public square

Recently, many questions have arisen concerning the efforts of technological marketplaces and social media companies to regulate content on their platforms. These questions include concerns about the stifling of free speech, the role of government in regulating private corporations like Amazon, Facebook, and Twitter, and the extent to which such companies are free to determine and enforce these policies on their own.

At present, Amazon’s removal of Anderson’s book from the marketplace does not technically involve issues of free speech under the First Amendment. And it is important to note that Anderson’s work is still sold by other online retailers such as Barnes & Noble, by independent bookstores, and on his publisher’s website. But Amazon’s removal of a popular book under this overly broad—and easily abused—“inappropriate or offensive” policy is deeply distressing. It also raises pressing questions Christians must answer as we seek to build out a public theology for this technological age.

Digital content moderation or removal often leads to claims that a person’s freedom of speech or even freedom of religion is being violated. But this view fails to recognize that the First Amendment specifically protects individuals from the overreaching hands of government, not from the content policies of private companies (no matter how errant or ill-advised such policies might be). Again, in this case there is no doubt that Amazon sought to wield its influence to shape public opinion on a critical matter of public concern by silencing dissenting voices. And given Amazon’s size and influence, it is possible that actions like this could prompt antitrust inquiries or lead to federal oversight, which could override Amazon’s ability to set its own content policies.

In our view, Amazon is completely wrong to remove this book from the marketplace. Not only did Amazon violate its own stated policy of including content it deems objectionable, but it did so to deny users access to a countervailing argument against the ideology currently in vogue. No one needs to be protected from a robust and informed public debate. As Alan Jacobs puts it, “Amazon clearly believe(s) there is only one reason to read a book. You read a book because you agree with it and want it to confirm what you already believe.” In this age of tolerance and inclusion, it is abundantly clear that only certain “acceptable” ideas will be tolerated, which is no form of tolerance at all.

Time will tell if Amazon decides to reverse course and restore the book. Regardless of this particular outcome, it is obvious that we are living in a new era of human history—one in which powerful and often unrivaled technology companies wield enormous power over our public discourse. As Christians, the proper response is not fear or panic but engagement with convictional kindness, even as we work to maintain an open digital public square. We can engage these pressing concerns from a place of steadfast hope and confidence, knowing that while our beliefs may not always be popular or fashionable, they reflect reality and ultimately lead to human flourishing.

By / Jan 9

On Friday evening, Twitter officially suspended the 45th president of the United States, Donald Trump, from its platform for violating its stated community policies related to inciting violence and spreading false information. The suspension came after the heinous attack on the United States Capitol on Wednesday, inspired by the president and his key supporters following a rally on the National Mall. The protest, which culminated in violence and rioting, was organized in response to the congressional certification of the 2020 presidential election results, which was also taking place that Wednesday.

According to the Associated Press, “Twitter has long given Trump and other world leaders broad exemptions from its rules against personal attacks, hate speech and other behaviors.” But since the November 2020 election, many of the president’s tweets were labeled for promoting conspiracy theories alleging election fraud and stolen votes, as well as for encouraging violence. Twitter used these warning and fact-check labels to inform the public of potential misinformation while leaving the content online, citing the compelling public interest in direct access to communication from the president of the United States.

But as the Capitol Police and National Guard were clearing the building after the insurrection was quelled, Twitter temporarily disabled the president’s account and deleted certain tweets deemed to encourage further violence. The temporary suspension came with a warning that continued violation of Twitter’s policies may lead to a permanent ban from the platform. The account was re-enabled on Thursday, Jan. 7. But due to continued policy violations by the president, his account, @RealDonaldTrump, was permanently suspended on Friday night.

Community policies and compelling interest

Many prominent technology critics, including a number of lawmakers, members of the press, and public figures, have called for social media platforms to take a firmer stance with the president concerning his violations of their stated content policies for users. But until recently, Twitter and other social media platforms, such as Facebook, allowed the president to continue posting due to the compelling public interest surrounding his speech, given the gravity and responsibilities of the Oval Office. Yet after continued violations of these policies, which this week stoked violence and an attempted coup, Twitter took the unprecedented step of permanently suspending the president’s personal account. It should be noted that this suspension applies only to @RealDonaldTrump, not to official accounts such as @POTUS, @WhiteHouse, and other U.S. government accounts. However, because Twitter does not allow banned users to operate alternate accounts, some tweets have already been removed from @POTUS after the president chose to post to that account following his suspension.

Many have questioned the wisdom and timing of this suspension, as well as the potential fallout of such a monumental decision to suspend the sitting president of the United States. Much of the concern lies in the fact that these social media platforms have become ubiquitous in our society. In our digital age, social media sites now represent a primary vehicle of communication. Twitter serves as a news platform for many users and is a significant conduit of real-time information, including news and reporting about the very events that led to this moment.

Each social media platform has its own set of community standards, policies, or rules to govern user activity. Twitter, for example, allows pornography on its platform, while Facebook and Instagram ban nudity. Other platforms, such as Parler, market themselves as free speech alternatives and have very loose or even nonexistent content moderation policies. The implementation of community policies and content moderation is actually encouraged by Section 230 of the Communications Decency Act of 1996, a bipartisan piece of legislation designed to promote the growth of the fledgling internet in the mid-1990s. Section 230 gives internet companies a liability shield for online user content—meaning users, and not the platforms themselves, are responsible for the content of posts—in exchange for enacting “good faith” measures to remove objectionable content in order to make the internet a safer place for our society.

Free speech and content moderation

The First Amendment guarantees the freedom of speech for all people. But it only protects citizens from interference by the government; its free speech protection does not apply to the actions of a third party, such as a private social media company, governing certain speech. A helpful way to think about these issues is to compare them to the many religious liberty cases litigated in recent years, including those of Barronelle Stutzman (a florist) and Jack Phillips (a baker), who were taken to court for refusing to render speech or use their creative gifts in ways that violated their consciences. These cases involved the government taking action to override the civil liberties of these individuals, compelling them to violate their deeply held religious beliefs or face civil penalties.

In these cases, the ERLC argued that the government did not have a compelling interest sufficient to override their First Amendment freedoms by forcing them to participate in same-sex wedding ceremonies. The key to these cases is the idea of a compelling interest, which also ties into issues of content moderation on social media.

Content moderation online is an admittedly difficult and thorny ethical issue, both because social media has become such a massive and integral part of our society and because such issues have become hyper-politicized. An internet or social media platform without any type of moderation or rules would quickly devolve into a dangerous environment filled with misinformation and an endless stream of unfiltered or illegal content. Even with such rules, it is undeniable that social media has been utilized in ways leading to real-world harm.

In the case of this particular suspension, a line was crossed when the president knowingly endangered members of the public, as well as law enforcement and elected officials, by inciting physical violence and destruction. In response, Twitter determined that the threat of further violence and physical harm outweighed the compelling public interest by which it had previously justified allowing the president’s account to remain active and his posted content to remain online even in violation of its policies. And though this was a significant action—the president’s speech is of great importance—it was not a violation of the Constitution’s guarantee of free speech.

Slippery slope?

As news of this monumental suspension broke, many rightfully questioned how this type of action by a social media giant could or would be used against views outside the mainstream, such as those held by conservative Christians. This is an understandable concern given the unequal and often controversial application of content moderation by the platforms. Undoubtedly, action of this kind opens the door for further censorship. But even so, Twitter’s actions must be seen in light of the full picture. Throughout his term, the company had extended significant latitude to the president despite his regular posting of false, misleading, and potentially threatening and dangerous information. But after the grievous display of brutality and loss of life at the United States Capitol—where five people died, including one Capitol Police officer—public interest gave way to grave public safety concerns.

Still, the most alarming element of this episode is not the suspension itself but the inconsistency of Twitter’s policy enforcement across the board. While the company is well within its rights to enforce suspensions due to policy violations, Twitter has also allowed posts from accounts representing authoritarian leaders around the world, such as those of the Chinese and Iranian governments, that clearly violate the same policies used to ban the president’s account. The oppressive and authoritarian regimes behind these accounts incite and perpetrate devastating violence and human rights abuses beyond anything we’ve witnessed firsthand in the United States.

In China, over one million Uighur Muslims have been detained, persecuted, and even sterilized in “reeducation camps.” But social media platforms like Twitter have often turned a blind eye to these atrocities. Deceptive tweets from Chinese officials often carry no label warning of danger or misinformation, yet nearly every tweet by President Trump since the November election has been marked in this way. (It is worth noting that Twitter did act to remove certain content from foreign leaders following the announcement of its permanent ban of the president.)

However reasonable or necessary Twitter’s decision in this particular instance might be, its inconsistency in content moderation is harmful to our social fabric, which is sustained by ideals like trust and equality. It is impossible to gain public trust by overlooking such egregious violations, even if the company has the right to enforce its rules as it sees fit. 

Moral courage and responsibility require the equal application and enforcement of stated policies. Taking difficult but necessary action is only meaningful if such actions are carried out consistently. If administrators at Twitter felt compelled to curtail the president’s speech in the name of public safety, it is only right that they follow suit by banning the accounts of other known offenders, including officials within the Chinese Communist Party—the single greatest human rights abuser on the planet.

Christians are wise to be vigilant about matters related to censorship. But it is important to recognize the difference between censoring speech that is disagreeable and limiting speech that threatens or incites physical harm. Going forward, careful attention should be paid to the actions of social media platforms like Twitter and Facebook regarding content moderation and censorship, but in itself, Twitter’s decision to ban the president’s account should not be seen as an existential threat to free speech in our democracy.

By / Dec 14

In recent months, a new social media platform has grown in popularity amid controversies over content moderation and fact-checking on traditional social media sites like Twitter and Facebook. Parler was launched in August 2018 by John Matze, Jared Thomson, and Rebekah Mercer. While it still has a smaller user base than most social platforms, at just over 2.8 million people, the app saw a surge in downloads following the November 2020 presidential election and has become extremely popular in certain circles of our society. It became the #1 downloaded application on Apple and Google devices soon after the election, with over 4 million downloads in just the first two weeks of November, according to tracking by Sensor Tower.

Here is what you should know about this social media application and why it matters in our public discourse.

What is Parler?

Parler, named after the French word meaning to speak, is described as a “free speech” alternative to traditional social media sites like Twitter and Facebook. The company’s website describes the platform as a way to “speak freely and express yourself openly, without fear of being ‘deplatformed’ for your views.” Parler intentionally positions itself as the “world’s town square,” and CEO John Matze said of the app, “If you can say it on the street of New York, you can say it on Parler.”

Parler is a microblogging social service, very similar to Twitter, where users are encouraged to share articles, thoughts, videos, and more. The platform states that “people are entitled to security, privacy, and freedom of expression.” This emphasis on privacy is seen in Parler’s promises to keep your data confidential and not sell it to third-party services, a common complaint about other platforms, whose business models are based on ad revenue. Currently, Parler does not have advertisers on the platform, but it plans to allow advertisers to target influencers rather than regular users.

Posts on the platform are called “parleys,” and the feed is broken into two sections: parleys and affiliate content, the latter functioning like a news feed of content providers for the platform. To share content from someone else, a user can “echo” a post or piece of content.

The platform also has a “Parler citizen verification,” through which users can be verified by the service in order to cut down on fake accounts and ones run by bots. Users who submit their photo ID and a selfie are eligible for verification. Once verified, users see a red badge on their avatar indicating that they are a Parler citizen. Parler also has a “verified influencer” status for those with large followings who might be easily impersonated, very similar to the “blue check” icon on Twitter.

Does Parler censor or moderate content?

The company claims that it does not censor speech or content, yet it does have certain community standards, much like other platforms, even if those standards are intentionally set low. The community standards are broken into two principles:

  1. Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts.
  2. Posting spam and using bots are nuisances and are not conducive to productive and polite discourse.

Beyond these two principles, Parler does provide a more detailed account of the types of actions that fall under them. The platform is intentionally designed to give users tools to deal with spam, harassment, or objectionable content, including “the ability to mute or block other members, or to mute or block all comments containing terms of the member’s choice.”

Overall, Parler is designed to be an alternative platform for those who do not agree with the community standards and policies of other social platforms. The company states that “while the First Amendment does not apply to private companies such as Parler, our mission is to create a social platform in the spirit of the First Amendment.” This is an important point in the broader debate over content moderation because, as the company notes, the First Amendment does not apply to private companies; it was written to govern the relationship between individuals and the state.

Why is Parler controversial?

As the platform has gained prominence in certain segments of American life, Parler has expanded its user base in large part as a reaction to the content moderation policies of other platforms. Because it has promised to allow and highlight content that other services deem misinformation, contested claims, and at times hate speech, Parler has come to be characterized by what it allows its users to post without fear of removal or moderation.

By relying on users to moderate or curate their own feeds, Parler seeks to absolve itself of any responsibility for what is posted on its platform. The application has also become incredibly partisan, with a large number of users joining after the 2020 presidential election amid growing distrust of the ways other social media platforms label controversial content, misinformation, and fake news.

Currently, Parler draws a large share of its users from one side of the political spectrum, which can lead to a siloing effect in which a user sees only one side of an argument. This is one of the very problems with traditional social media that Parler, with its lax moderation policies, set out to overcome in the first place.

Is it a safe platform?

Parler states that any user under 18 must have parental permission to gain access to the application, and all users under 13 are banned. But the service does not currently have an age verification system. Users can also change settings on their account to keep “sensitive” or “Not Safe for Work” content from showing in their feeds automatically. The Washington Post also reports that Parler does not currently have a robust system for detecting child pornography before it is viewed or potentially flagged and reported by users. A company spokesman has said, “If somebody does something illegal, we’re relying on the reporting system. We’re not hunting.”

Given its lack of robust content moderation policies, Parler has drawn a considerable number of users from Twitter and other platforms who claim that their views were censored or their accounts banned. Many conservative elected officials and news organizations have joined the platform, which hopes to attain a critical mass of users large enough to sustain it moving forward. Parler currently does not have the number of brands or companies that other platforms do, a presence that can be necessary for a platform to flourish as an information source and connectivity tool for users.

Parler originally banned pornography on the platform but in recent months changed its content moderation policies to allow it, aligning itself more closely with Twitter’s policy of permitting such graphic content. Parler’s approach to moderation can be seen in recent comments by COO Jeffrey Wernick to the Post in response to allegations of the proliferation of pornography on the site. Wernick responded that he had little knowledge of that type of content on the platform, adding, “I don’t look for that content, so why should I know it exists?” He later added that he would look into the issue.

Since these policy shifts, Parler has suffered from a proliferation of pornography and spam, which should come as no surprise, as the pornography industry has been an early adopter of innovative technology since the early days of the internet. Parler states that it allows anything on its platform that the First Amendment allows, and the United States Supreme Court has held that pornography is constitutionally protected free speech.

It should be noted that Facebook, Instagram, and YouTube ban all pornographic imagery and videos from their platforms. Facebook and Instagram use automated systems to scan photos as they are posted and also rely on a robust reporting system that lets users flag content that may violate the company’s community standards. While Twitter’s policies allow pornography, it does employ automated systems to cut down on rapid posting and other spam-related uploads, as well as human moderators to curb abuse from users and bots.

Should social media companies be able to censor speech and enforce content moderation policies on users?

This question is at the heart of the debate over free speech and social media, especially as it centers on Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996. Section 230 has been called the law that gave us the modern internet: it allowed for a more open and free market of ideas and for the creation of user-generated content sites.

As the ERLC wrote in 2019, many social conservatives, worried about the spread of pornography, lobbied Congress to pass the Communications Decency Act, which penalized the online transmission of indecent content and protected companies from being sued for removing such offensive content. Section 230 was written to encourage internet companies to develop content moderation standards and to protect them against liability for removing content, in order to create safer environments online, especially for minors. This liability protection led to the development of community standards and ways to validate posted information without the company being liable for user-generated content.

Controversy over the limits of Section 230 and ways to update the law has been center stage in American public life for the last few years, especially as the Trump administration issued an executive order on the prevention of online censorship. Both sides of the political aisle are debating whether the statute should simply be updated or removed completely.

By / Jun 26

Communist China’s stand against freedom is becoming increasingly aggressive, seen in both the persecution of its own citizens and the forced changes in Hong Kong. Chelsea Patterson Sobolik and Travis Wussow welcome David Curry of Open Doors USA to the roundtable to discuss these recent developments and how they affect religious freedom in this part of the world.

This episode is sponsored by The Good Book Company, publisher of Beautifully Distinct: Conversations with Friends on Faith, Life, and Culture, edited by Trillia Newbell.

Guest Biography

David Curry is the CEO of Open Doors USA, a nonprofit dedicated to supporting persecuted Christians around the world. For over 60 years, Open Doors has worked in the world’s most oppressive regions, empowering and equipping persecuted Christians in more than 60 countries by providing Bibles, training, and programs to help strengthen the church. Since assuming the role of CEO in August 2013, Curry has traveled extensively to encourage those living under persecution and to support the work of Open Doors. In addition, Curry is often present in Washington, D.C., advocating for religious freedom at the highest levels of government. He has testified before the House Foreign Affairs Committee and met with a wide range of policymakers from both sides of the aisle, including at the White House, in the Senate, and at the U.S. State Department.

Resources from the Conversation