
3 ethical issues in technology to watch for in 2021

January 4, 2021

2020 was a year that challenged not only the fortitude of our families but also the fabric of our nation. Last year we saw many complex ethical issues arise from our use of technology, both in society and as individuals. From debates over the proper role of social media to the adoption of invasive technologies like facial recognition that pushed the bounds of personal privacy, many of the ethical challenges exposed in 2020 will flow into 2021. Our society must now debate how to respond to these developments and how to pursue the common good together as a very diverse community.

Here are three areas of ethical concern with technology that we will need to watch for if we hope to navigate 2021 well.

Content moderation and Section 230

Some of the most talked about ethical issues in technology, even as 2021 is just getting started, are the debates over online content moderation, the role of social media in our public discourse, and the merits of Section 230 of the 1996 Communications Decency Act. If you are unfamiliar with Section 230 and the debates surrounding the statute, it essentially functions as legal protection for online platforms and companies so that they are not liable for the information posted to their platforms by third-party users.

In exchange for these protections, internet companies and platforms are to enact “good faith” protections and are encouraged to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” But what exactly do “good faith” and “otherwise objectionable” mean amid today’s raging debates over the role of social media?

This question is at the heart of the debate over Section 230’s usefulness today. Some argue that platforms like Facebook, Google, Twitter, and others must do more to combat the spread of misinformation, disinformation, and fake news online. As platforms have engaged in labeling misleading content and removing posts that violate their community policies, many argue that these companies simply aren’t doing enough.

But on the other side of the aisle, some argue that these 230 protections are being used as cover to censor certain content online, often in a partisan manner and inconsistently applied (especially on the international stage), in ways that may amount to violations of users’ free speech. They argue that 230 must be repealed or substantially modified in order to combat bias against certain political, social, or religious views.

As technology policy expert and ERLC Research Fellow Klon Kitchen aptly states, “All of these perspectives are enabled by vagaries surrounding the text of the law, the intent behind it, and the relative values and risks posed by large Internet platforms.” Regardless of where one lands in this debate, we will likely see inflamed conversations over this statute and the extent to which it should be maintained if at all.

Facial recognition surveillance

In what may feel like a Hollywood thriller plot, facial recognition surveillance technology is being deployed around our nation and the world, often without us realizing it or even understanding how these tools work. Last January, Kashmir Hill of the New York Times broke a story about a little-known facial recognition startup called Clearview AI that set off a firestorm over the use of these tools in surveillance, policing, and security. Thousands of police units across the country were testing or implementing facial recognition in hopes of better identifying suspects and keeping our communities safer.

But for all of their potential benefits, these tools also have a flip side with extremely complex ethical considerations and dangers, especially when used in volatile police situations. Many of these algorithmic identification tools were also shown to misidentify people with darker skin more often than others because the systems were not trained properly or had inherent weaknesses in their design or data sets.

Throughout 2020, municipalities and state governments completely banned or substantially limited the use of facial recognition in their communities, citing the potential for misuse as well as the racial divisions in our nation. The tools were thought to be too powerful, too heavily relied upon (which could lead to false arrests or worse), or too invasive into the private lives of citizens. In 2021, we will likely see this trend of legislation on facial recognition systems continue, along with increased pressure on the federal government to weigh in on how these tools can and should be used, especially in policing and government.

Outside of policing, there is likely to be substantial debate over how these tools are used in public areas and businesses as our society begins to open back up after the COVID-19 vaccines are more widely available. The potential for these tools to be used in identification, health screening, and more will lead to renewed debate over the ethical bounds at stake and the potential for real-life harm to those in our communities.

Right to privacy?

Outside of the growing concerns with surveillance technologies like facial recognition, there is considerable debate about the nature and extent of digital privacy in our technological society. Last year, the California Consumer Privacy Act’s (CCPA) regulations went into effect, and we also saw the continued influence of the General Data Protection Regulation (GDPR) from the European Union throughout the world. These pieces of legislation have challenged how many people think about the nature of privacy and have also raised a number of ethical concerns regarding what is known about us online, who knows it, how it is used, and what we can do with that data. Nearly every device and technology today captures some level of data on users in order to provide a personalized or curated experience, but this data capture has come under scrutiny recently across the political spectrum.

Today, some are asking whether personal privacy is simply an outdated or unneeded concept, or whether we as citizens actually have a right to privacy. If we have a right to privacy, where is that right derived from, and how does it align with our other rights to life and liberty? Are we to pursue moral autonomy, or is privacy actually grounded in human dignity? Many questions remain about how we should view privacy as a society and to what extent we should expect it in today’s digital world. As COVID-19 challenged many of our expectations concerning privacy, there will likely be a renewed focus on the role of technology in our lives and the extent to which the government has a role in these debates.

It is far too easy to take a myopic view of technology and the ethical issues surrounding its use in our lives. Technology is not a subset of issues that only technologists and policy makers should engage. These tools undergird nearly every area of our lives in the 21st century, and Christians, of all people, should contribute to the ongoing dialogue over these important issues because of our understanding of human dignity grounded in the imago Dei (Gen. 1:26-28).

Thankfully, 2020 brought some of these issues to the forefront of our public consciousness. While 2021 will likely bring no shortage of issues to engage, we should address the pressing ethical challenges that technology poses in order to present a worldview that is able to meet these monumental challenges to our daily lives.

Jason Thacker

Jason Thacker serves as chair of research in technology ethics and leads the ERLC Research Institute. He writes and speaks on various topics including human dignity, ethics, public theology, technology, digital governance, and artificial intelligence. His book, The Age of AI: Artificial Intelligence and the Future of Humanity, was released in March 2020.