Content moderation is difficult work for any social media company. Every day, millions of posts and messages are shared on these platforms. Most are benign, but some individuals and organizations inevitably share or promote abusive, hateful, and sometimes violent content. Most social media companies expect their users to engage on these platforms within a certain set of rules or community standards. These content policies are often decided upon with careful and studied reflection on the gravity of moderation in order to provide a safe and appropriate place for users. It is nevertheless a difficult and thorny ethical issue, because social media has become such a massive and integral part of our diverse society, not to mention the hyper-politicization of these questions.
Over the years, content moderation practices have come under intense scrutiny because of the breadth of the policies themselves as well as their misapplication—or, more precisely, the inconsistent application—of these rules for online conduct. Just last week, The Daily Citizen—the news arm of Focus on the Family—was reportedly locked out of its account over a post about President Biden’s nomination of Dr. Rachel Levine to serve as assistant secretary of health for the U.S. Department of Health and Human Services (HHS). The Daily Citizen’s tweet was flagged by Twitter for violating its policy on hateful conduct, which includes but is not limited to “targeted misgendering or deadnaming of transgender individuals.” This broad policy seems to cover using the incorrect pronouns for individuals, using someone’s former name after they transition and identify by another name, or—in the case of The Daily Citizen’s tweet—stating the biological and scientific reality of someone’s sex even if they choose to identify as the opposite sex or some derivation thereof.
After The Daily Citizen appealed the decision, the request was denied by Twitter’s content moderation team, and the organization was left with a choice: delete the violating tweet or remain locked out of its account. It should be noted that the account was not suspended or banned outright, as has been the case in other instances of policy violations, such as former President Trump’s recent suspension. The Daily Citizen decided to keep the tweet up and has been unable to use its account since.
The purpose of content moderation
The implementation of content moderation practices is actually encouraged by Section 230 of the Communications Decency Act of 1996, a bipartisan piece of legislation designed to promote the growth of the fledgling internet in the mid-1990s. Section 230 gives internet companies a liability shield for online user content—meaning users, not the platforms themselves, are responsible for the content of posts—in exchange for encouraging “good faith” measures to remove objectionable content in order to make the internet a safer place for our society.
These “good faith” measures are designed to create safer online environments for all users. The debate over content moderation, though, often centers on exactly what these measures should entail, not on whether they should exist in the first place. Without any sort of content moderation, social media platforms would inevitably be used and abused to promote violence and truly hateful conduct, and would become a breeding ground for misinformation and other dangerous content. Simply put, without moderation these platforms would be neither comfortable nor safe places to engage. In general, content moderation policies serve the common good of all users, but the details and breadth of specific policies should at times be questioned for their effectiveness and their consequences for online dialogue.
Free speech
In these debates over content moderation, questions about the role of free speech abound. The First Amendment guarantees the freedom of speech for all people, but it only protects citizens from interference by the government itself. Its free speech protection does not apply to the actions of a third party, such as a private social media company governing certain speech or implementing various content moderation policies. A helpful way to think about free speech in this instance is how Christians have rallied around the ability of other third parties to act in accordance with their deeply held beliefs and to use their own free speech not to participate in a same-sex wedding, as in the cases of Barronelle Stutzman and Jack Phillips. The government has neither the right nor the authority to force a third party to violate their deeply held beliefs absent a clear and compelling public interest that cannot be accomplished in a less invasive manner.
Twitter is within its rights to create content moderation policies and govern speech on its platform as it sees fit, and such policies are, again, encouraged by provisions in Section 230. But these policies should take into account the true diversity of thought in our society and not denigrate certain types of religious speech as inherently hateful or dangerous. Nor are they beyond scrutiny by the public, who have a choice about whether to use a particular platform and the freedom to criticize policies they deem deficient or shortsighted.
Dangerous and misguided policies
Even though Twitter, as well as other companies like Facebook, cannot technically violate one’s free speech, they are accountable for the policies they craft and for the deleterious outworkings of misguided and, at times, poorly crafted policies. Overly broad policies often limit the free exchange of ideas online and—in the case of The Daily Citizen’s post—censor free expression and curtail the robust public dialogue that is vital to a functioning democracy and society.
Twitter’s hateful conduct policy begins by stating, “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” This broad definition of hateful conduct is subsequently expanded to include nearly every form of speech that one might deem offensive, objectionable, or even simply disagreeable.
To Twitter’s credit, the company does seek “to give everyone the power to create and share ideas and information, and to express their opinions and beliefs without barriers.” It goes on to say that “Free expression is a human right – we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” But this lofty goal of free expression is stifled, and in many ways undermined, by promoting some speech at the expense of other speech deemed unworthy of public discourse, even when that speech aligns with scientific realities taught and affirmed by millions of people throughout the world, including but not limited to people of faith.
Civil disagreements over the biological and scientific differences between a man and a woman simply cannot—especially for the sake of robust public discourse—be equated with hate speech. Any attempt to create and enforce these types of broadly defined policies further erodes the public’s trust in these companies, which bear immense responsibility for providing avenues of public discourse and free expression given the ubiquity of their platforms in our society. In a time when there is already considerable distrust of institutions, governments, and even the social media companies themselves, ill-defined policies that seem to equate historic and orthodox beliefs on marriage and sexuality with the dehumanizing nature of real hate speech and violent conduct only widen the deficit of trust and increase skepticism over the true intention behind these policies.
Christian engagement in content moderation
When Christians engage in these important debates over content moderation and online speech, we must do so with a distinct view of human dignity in mind. It is far too easy in a world of memes, caricatures, and 280-character posts to dehumanize those with whom we disagree, or to be disagreeable in order to gain a following. We must champion the dignity of all people because we know that all people are created in the image of God and thus are worthy of honor and respect. Part of championing this dignity is also speaking clearly about the dehumanizing effects of ideologies like transgenderism that tend to reduce someone’s identity to their sexual preferences or desires. We should advocate for better and more clearly defined policies because these policies affect our neighbors and their ability to connect with others.
When we engage on these important matters of social media and content moderation, we must also do so informed about the complexity of the situations at hand, with clarity, charity, and, above all, respect even for those with whom we deeply disagree. The Bible reminds us that “we do not wrestle against flesh and blood, but against the rulers, against the authorities, against the cosmic powers over this present darkness, against the spiritual forces of evil in the heavenly places” (Eph. 6:12). Spiteful, derogatory, arrogant, and dehumanizing remarks about fellow image bearers are unbecoming of the people of God, and this is not limited to issues of sexuality or transgenderism. These types of statements are becoming all too common in our online rhetoric, even among professing Christians. It is past time for each of us to heed the words of the letter of James and seek to tame our tongues lest they overcome us with their deadly poison (James 3:8) and lead us down the same path as those with whom we disagree over fundamental matters of sexuality and even issues of content moderation.
When we engage in these important issues and seek to frame debates over online speech, we must also do so with an understanding of the immense weight and pressure that many in content moderation face each day. While we may think that a flagged tweet or post is perfectly appropriate, we must remember that initial moderation decisions are often made with the help of algorithmic detection. These AI systems are used to cut down on the volume of violating content, but they do make mistakes. Upon appeal, these decisions are handed over to human reviewers, who may have only an extremely short window to make a call given the sheer amount of content to review. This does not mean that these decisions are always correct, or even that the policies driving them are helpful or clearly defined. The question isn’t whether discrimination or bias exists in these systems, but where the lines are drawn, who draws them, what worldview drove their creation, and whether decisions can be appealed on the merits.
Christians must also realize that in a rapidly shifting and secularizing culture, we will naturally be at odds with the mores of the day, but that should not deter us from speaking truth, grounded in love and kindness, as we engage in the heated debates over online speech, social media, and content moderation. Our hope and comfort do not come from better policies or their consistent application across these platforms. Even if it feels as though the ground is shifting right beneath us, amid vapid calls to “get on the right side of history,” we can know and trust that biblical truth about human anthropology isn’t about power or control but about pursuing the good of our neighbor in accordance with the truth of the One who created us and will ultimately rescue each of us from our own proclivities toward sin and rebellion.