
Explainer: What is Parler, and why does it matter?

December 14, 2020

In recent months, a new social media platform has gained popularity in light of controversies over content moderation and fact-checking on traditional social media sites like Twitter and Facebook. Parler was launched in August 2018 by John Matze, Jared Thomson, and Rebekah Mercer. While it still has a smaller user base than most social platforms, at just over 2.8 million people, the app saw a surge in downloads following the November 2020 presidential election and has become extremely popular in certain circles. It became the #1 downloaded application on Apple and Google devices soon after the election, with over 4 million downloads in just the first two weeks of November, according to tracking by Sensor Tower.

Here is what you should know about this social media application and why it matters in our public discourse.

What is Parler?

Parler, named after the French word meaning to speak, is described as a “free speech” alternative to traditional social media sites like Twitter and Facebook. The company’s website describes the platform as a way to “speak freely and express yourself openly, without fear of being ‘deplatformed’ for your views.” Parler intentionally positions itself as the “world’s town square,” and CEO John Matze said of the app, “If you can say it on the street of New York, you can say it on Parler.”

Parler is a microblogging service, very similar to Twitter, where users are encouraged to share articles, thoughts, videos, and more. The platform states that “people are entitled to security, privacy, and freedom of expression.” This emphasis on privacy is seen in Parler’s promises to keep user data confidential and not to sell it to third-party services, a common complaint about other platforms and their business models based on ad revenue. Currently, Parler does not have advertisers on the platform, but it plans to allow advertisers to target influencers rather than regular users.

Posts on the platform are called “parleys,” and the feed is broken into two sections: parleys and affiliate content, which functions like a news feed of content providers for the platform. To share content from someone else, a user can “echo” a post or piece of content.

The platform also has a “Parler citizen verification,” through which users can be verified by the service in order to cut down on fake accounts and those run by bots. Users who submit their photo ID and a selfie are eligible for verification. Once verified, users see a red badge on their avatar indicating that they are a Parler citizen. Parler also has a “verified influencer” status for those with large followings who might be easily impersonated, similar to the “blue check” icon on Twitter.

Does Parler censor or moderate content?

The company claims that it does not censor speech or content, yet it does have certain community standards, much like other platforms, even if those standards are intentionally set low. The community standards rest on two principles:

  1. Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts.
  2. Posting spam and using bots are nuisances and are not conducive to productive and polite discourse.

Beyond these two principles, Parler provides a more detailed account of the types of actions that fall under them. The platform is intentionally designed to give users tools to deal with spam, harassment, or objectionable content, including “the ability to mute or block other members, or to mute or block all comments containing terms of the member’s choice.”

Overall, Parler is designed to be an alternative for those who do not agree with the community standards and policies of other social platforms. The company states that “while the First Amendment does not apply to private companies such as Parler, our mission is to create a social platform in the spirit of the First Amendment.” This is an important point in the broader debate over content moderation: as the company notes, the First Amendment does not bind private companies but was written to govern the relationship between individuals and the state.

Why is Parler controversial?

As the platform has gained prominence in certain segments of American life, Parler has expanded its user base in large part as a reaction to the content moderation policies on other platforms. Because it has promised to allow and highlight content that other services deem misinformation, contested claims, and at times hate speech, Parler has been characterized by what it allows its users to post without fear of removal or moderation.

By relying on users to moderate or curate their own feeds, Parler seeks to abdicate responsibility for what is posted on its platform. The application has also become incredibly partisan, with a large number of users joining after the 2020 presidential election amid growing distrust of the ways other social media platforms label controversial content, misinformation, and fake news.

Currently, Parler has a large number of users from one side of the political spectrum, which can at times lead to a siloing effect in which a user sees only one side of an argument. This was one of the very issues of traditional social media that Parler, with its lax moderation policies, set out to overcome in the first place.

Is it a safe platform?

Parler states that any user under 18 must have parental permission to gain access to the application, and all users under 13 are banned. But the service does not currently have an age verification system. Users can also change settings on their account to keep “sensitive” or “Not Safe for Work” content from showing in their feeds automatically. The Washington Post also reports that Parler does not currently have a robust system for detecting child pornography before it is viewed or potentially flagged and reported by users. A company spokesman has said, “If somebody does something illegal, we’re relying on the reporting system. We’re not hunting.”

Given its lack of robust content moderation, Parler has drawn a considerable number of users from Twitter and other platforms who claim their views were censored or their accounts banned. Many conservative elected officials and news organizations have joined, and Parler hopes to attain a critical mass of users large enough to sustain the platform moving forward. It currently does not have the number of brands or companies that other platforms have, which a platform may need in order to flourish as an information source and connectivity tool for users.

Parler initially banned pornography but in recent months changed its content moderation policies to allow it, aligning the platform more closely with Twitter’s policy on graphic content. Parler’s approach to moderation can be seen in recent comments by COO Jeffrey Wernick to the Post in response to allegations of the proliferation of pornography on the site. Wernick responded that he had little knowledge of that type of content on the platform, adding, “I don’t look for that content, so why should I know it exists?” He later added that he would look into the issue.

Since these policy shifts, Parler has suffered from the proliferation of pornography and spam, which should come as no surprise given that the pornography industry has adopted innovative technology since the early days of the internet. Parler states that it allows anything on its platform that the First Amendment allows, and the United States Supreme Court has declared that pornography is constitutionally protected free speech.

It should be noted that Facebook, Instagram, and YouTube ban all pornographic imagery and videos from their platforms. Facebook and Instagram use automated systems to scan photos as they are posted and also rely on a robust reporting system for users to flag content that may violate the companies’ community standards. While Twitter’s policies allow pornography, it employs automated systems to cut down on rapid posting and other spam-related uploads, as well as human moderators to curb abuse from users and bots.

Should social media companies be able to censor speech and enforce content moderation policies on users?

This question is at the heart of the debate over free speech and social media, centering especially on Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996. Section 230 has been called the law that gave us the modern internet because it enabled a more open and free market of ideas and the creation of user-generated content sites.

As the ERLC wrote in 2019, many social conservatives, worried about the spread of pornography, lobbied Congress to pass the Communications Decency Act, which penalized the online transmission of indecent content and protected companies from being sued for removing such offensive content. Section 230 was written to encourage internet companies to develop content moderation standards and to protect them from liability for removing content, in order to create safer environments online, especially for minors. This liability protection led to the development of community standards and ways to validate information posted without the company being liable for user-generated content.

Controversy over the limits of Section 230 and ways to update the law has taken center stage in American public life over the last few years, especially after the Trump administration issued an executive order on the prevention of online censorship. Both sides of the political aisle are debating whether the statute should simply be updated or removed completely.

Photo Attribution:

OLIVIER DOULIERY / Getty Contributor

Jason Thacker

Jason Thacker serves as senior fellow focusing on Christian ethics, human dignity, public theology, and technology. He also leads the ERLC Research Institute. In addition to his work at the ERLC, he serves as assistant professor of philosophy and ethics at Boyce College in Louisville, Kentucky. He is the author …
