By / Feb 13

Since 2020, I have sought to write about some of the top technology issues to be aware of and how we as Christians can address them in light of the Christian moral tradition rooted in the love of God and love of neighbor. There have been a couple of prevailing themes over the years centered on the ways that technology is shaping us as people—namely our understanding of ourselves and those around us—and how we as a society are to think through the power it holds in our lives.

Already, it seems 2023 is going to be an interesting year as we deal with an onslaught of emerging technologies like advanced AI systems and virtual reality, as well as continue to navigate pressing challenges of digital privacy and the role of faith in the digital public square.

Artificial Intelligence and ChatGPT

Back in 2020, there was already social buzz about AI and how it was shaping our society. I published my first book, The Age of AI: Artificial Intelligence and the Future of Humanity, with the goal of thinking about how some of these technologies might affect our understanding of human dignity and our common life together. As 2022 came to a close, OpenAI released ChatGPT, a chatbot that caught the attention of our wider culture and confirmed that AI is (and will continue to be) a major part of our lives, especially in education and business.

The introduction of advanced AI systems like these in recent years has fundamentally challenged much of what we have assumed about the uniqueness of humanity. These systems are now performing tasks that only humans could perform in past generations.

In an age like ours, we all need to be reminded that the value and dignity of humans isn’t rooted in what we do but who we are as those uniquely made in the image of our Creator.

AI systems like ChatGPT have deeply concerning elements but also afford the opportunity for educators and students to evaluate with fresh eyes the purpose and design of education. Education is not simply about information transfer but whole-person transformation. These types of tools require that administrators, professors, and students alike learn how these systems work, their advantages and limitations, and how educators might prioritize the transformation of students above a simple letter grade.

Similar to the classroom, these tools may have limited use in local church ministry but must be thought through with the utmost care and wisdom. They may be used to aid in research, in writing reflection questions, or even in drafting rudimentary copy for church functions. However, users must keep these limitations in mind and be on guard against the temptation to simply pass off the output as their own work.

Current limitations with these systems are myriad and must be taken into account as one thinks through the ethical ramifications of their use. They are limited by the data sets and human supervision used in training the system; are widely known to falsify information, misapply concepts, or even alter their answers based on the political and social views of their creators; and rarely account for nuance and complexity, leading, at best, to the production of entry-level or basic material.

Privacy rights and children

A second issue we should be aware of is one that will inevitably be perennial. With the ubiquity of technology and our growing dependence on it, there is the vast and growing concern over personal privacy and how this data will be used by individuals, governments, and especially the technology industry.

We live in a data-saturated world, and there is a lot of money to be made by harvesting troves of data and creating predictive products or optimizing our interactions with our daily technology use. Governments around the world are beginning to regulate, or have already regulated, the flow of data and who has access to it, often focusing on a supposed right to privacy—a term that has competing definitions and proposed safeguards.

Christians, specifically, need to think deeply about what a right to privacy is and what it is not.

In 2023, four American states—Colorado, Connecticut, Utah, and Virginia—will follow the pattern set by California’s groundbreaking privacy law, the California Consumer Privacy Act (CCPA), which went into effect in January 2020, and begin implementing new comprehensive data-privacy regulations of their own. These new state laws share many of the same types of protections as the CCPA and the European Union’s General Data Protection Regulation (GDPR).

This year, there will be increasing pressure across the board for federal legislation focused on privacy as it specifically relates to children, as is seen with the bipartisan Kids Online Safety Act (KOSA) and more broadly with other proposals. 

Regardless of where these policies end up, framing privacy solely in terms of moral autonomy and personal consent makes it easy to overlook why data privacy is such a central concern of Christian ethics.

Instead, Christians need to be the ones asking the hard questions about how we as a society want to protect and guard the rights of the individual but in ways that also promote the common good.

Virtual reality and augmented reality

Among the technologies discussed significantly in 2022, and likely to be discussed again in 2023, are virtual reality (VR) and augmented reality (AR). In recent years, we have seen a surge of new VR devices and an increasing number of wearable devices such as smart glasses. As the devices become more commonplace in our society and societal norms continue to shift, it seems likely that they will grow in prominence in our lives.

Some of the pressing ethical questions about their use are not as straightforward as other ethical issues in technology, and a host of new challenges will arise, especially in light of the new mediums and means of connection that VR has created.

Aside from the more common concerns of data privacy, including the use of advanced biometric data such as eye-tracking and more, there are also novel challenges to long-standing understandings of free speech and religious freedom in these digital spaces. Discussion of these developments often centers on the wisdom of VR churches and gatherings. However, I think the more pressing questions will be over how religious groups who may hold to culturally controversial beliefs—especially on topics like sexuality and gender—will be treated in these digital environments.

These spaces are not truly public because they are often hosted or even created by technology companies themselves. This represents a new angle on the continued debate over free speech, content moderation, and the nature of faith in the public square.

Overall, 2023 will be a year where Christians are continually pressed to think about how we will live out our faith in the public square amid an increasingly secular culture. One of the temptations when faced with complex or challenging ethical questions with technology is to rush to a position of full adoption or rejection. 

Wisdom, which is at the core of the Christian moral tradition, calls us to slow down and think deeply about the nature of these tools, and discern if their many uses can help us better love God and love our neighbors as ourselves.

By / Jul 15

In the weeks following the historic Dobbs decision, a good bit of misinformation has been propagated concerning what many pro-life laws across the nation actually do in protecting the life of the preborn and caring for vulnerable women in crisis. Along with this misinformation about the devastation of ectopic pregnancies and lamentable instances where the physical life of the mother is at risk, there has also been a torrent of speculative musings about the dystopian society we will now inhabit in a post-Roe context. As the ripple effects of this life-saving court decision continue to be felt throughout our society and as many states are enacting new laws concerning the practice of abortion, one aspect of the debate might surprise some who have followed the pro-life movement over the last 49 years.

Since the Dobbs ruling, opinion pieces, Twitter threads, and a host of comments from privacy scholars have raised the alarm and generated wildly speculative notions about the dangers to personal data privacy in a country without Roe. Even the White House Director of Gender Policy Jen Klein has urged caution on the grounds of data privacy for millions of Americans, though the actual details of her comments and other reporting on the matter often do not coincide with the clickbait headlines.

From alarmist calls for women to delete their menstrual cycle tracking apps to demands that technology and social media companies like Google delete and/or stop tracking sensitive location data like abortion clinic visits, there has been a deluge of fear-inducing information. This speculation is primarily about how troves of data collected in a digital society might be used by some in potential lawsuits or criminal filings against women seeking an abortion, depending on their state. While much of this is uncharted territory and there are some legitimate questions that need to be asked by all citizens, including state lawmakers, it must be noted that many, if not all, of the calls to immediate action are built on hypothetical situations and strained correlations to prior cases. Many, if not all, of the states enacting pro-life laws are rightfully seeking to prosecute those who prescribe the abortion medications or who perform abortions, not women in crisis who have long been preyed upon by the abortion industry and been led astray by the lies of the sexual revolution.

Personal privacy and moral autonomy

The connections between personal privacy and abortion are deeply intertwined in our modern moral order, given how our abortion-on-demand culture was built upon the discovered “right to privacy” in the “penumbras” of the Bill of Rights, infamously articulated by Justice William O. Douglas in the Supreme Court’s 1965 Griswold v. Connecticut decision.

In this 1965 decision, the right to privacy was applied specifically to the right of married couples to obtain contraceptives. However, this “implied constitutional right to privacy” soon became the foundation for a number of subsequent Supreme Court decisions such as Roe and later Lawrence v. Texas, where the court established the right to privacy as an inherent element of self-determination and complete moral autonomy, devoid of any reference to religion or faith lived under God.

In delivering the Lawrence opinion, Justice Anthony Kennedy stated “liberty protects the person from unwarranted government intrusions into a dwelling or other private places.” He went further to argue that liberty presumes that the state should not have a dominant presence in the homes of Americans, as well as an “autonomy of self that includes freedom of thought, belief, expression, and certain intimate conduct.” While many Christians may agree with Justice Kennedy on the concept of liberty where the state does not have unlimited authority, we must recognize that the modern notions of autonomy and self-determination are directly contrary to the biblical ethic rooted in the dignity of all, including the preborn. This supposed right to self-determination is deeply woven into the modern right to privacy, abortion culture, and throughout contemporary culture.

But by design of the Founders, the Bill of Rights established a framework that recognizes certain pre-political rights which the state is bound to recognize and uphold, including the right to life. This runs contrary to many of the current debates over abortion and privacy—debates that are often framed in light of our society’s ideas of moral autonomy and self-determination. This shift in the nature and foundation of rights represents a stark break from the transcendent framework they were originally rooted in, such that the individual now has the right to define his or her own reality, no matter the cost to the moral order, our neighbors, or even the life of a child in the womb.

Misleading hypotheticals and the right to privacy

In light of this modern notion of a right to privacy, the continued calls for state and federal privacy legislation in our post-Roe world, and the growing concerns over data privacy, how should Christians think about these issues—especially in light of the pressing questions of digital privacy and our concern for upholding the dignity of both the preborn and their mothers?

First, we must seek to deal in facts, not simple hypotheticals designed to instill fear. The Dobbs decision rightfully returned the question of abortion to the states (where it resided prior to Roe) and ruled that states have a compelling interest in protecting their citizens, including the youngest among us. Moreover, much of what we already know about these state laws makes clear that they seek the criminalization of abortion providers, not women. While it is incumbent on lawmakers to think through the myriad ramifications of these laws on questions regarding digital privacy and data collection, it should be noted that the use of this type of data in criminal cases against women is exceedingly rare.

In recent years, there have been at least two known cases of personal data being used under a court order in abortion-related cases. In 2013, an Indiana woman was arrested on grounds of feticide after seeking medical attention at a local hospital for “profuse bleeding after delivering a 1½-pound baby boy in a bathroom and putting his body in a dumpster behind her family’s restaurant.” In this case, text messages to a friend about abortion pills were used by prosecutors to secure the woman’s 2015 conviction, though the conviction was later overturned by the Indiana Court of Appeals.

A second case involved a Mississippi woman indicted on a second-degree murder charge in January 2018 after giving birth at home to a baby boy who was later transported to a local hospital in cardiac arrest. He subsequently died at the hospital. The defendant confessed to medical professionals that she had learned she was pregnant the month before at an annual OB-GYN appointment but failed to make any follow-up appointments for prenatal care or an ultrasound.

She told investigators that she didn’t want any more children, couldn’t afford any more, and that she “simply couldn’t deal with being pregnant again.” She was at least 35 weeks pregnant when it was revealed that she illegally procured the abortion medication misoprostol through online searches. After taking the medication without the approval of doctors and well past the approved usage, her husband called for paramedics after finding her and their son in the bathroom. Medical examiners determined through an autopsy that the baby boy was born alive and died due to asphyxiation.

Both of these cases indicate that online data was used by prosecutors in what would more rightly be called disturbing instances of infanticide. These particular cases and criminal proceedings should remind us of the vital advocacy of pro-life organizations for the Born Alive Abortion Survivors Protection legislation. The proposed protections would see those who are born after a failed abortion receive all the medical care necessary for them to survive. 

The vast majority of states enacting pro-life legislation post-Roe are seeking to outlaw abortion or tighten the window in which abortions are legal. Most of the legislation that has been proposed does not seek to criminalize abortion-vulnerable women but rather those who perform abortions or prescribe these medications, which are increasingly dangerous to the life of the mother as well as to the life of the innocent child being aborted.

Second, we must understand that the right to privacy, which should be a central concern for the Christian church in a digital society, must not be framed as at odds with a rich conception of human dignity that values all human life, including the most vulnerable among us. A central facet of the pro-life movement and its 49+ years of advocacy is that vulnerable mothers should not be criminalized. Instead, those who provide abortions—whether through medical procedures or prescription drugs—should be prosecuted to the fullest extent of the law. The Southern Baptist Convention, the nation’s largest Protestant denomination, has repeatedly affirmed the value of preborn life and the priority of caring for vulnerable women in crisis through over 20 resolutions over the course of 40 years, including this past summer during the anticipation of the Dobbs decision.

A Christian understanding of privacy is that of a penultimate right that supports other pre-political rights, including the fundamental right to life rooted in how God has made us as human beings in his very image (Gen. 1:26-28). The imago Dei is the backbone of a robust Christian ethic which recognizes the dignity of all people including the preborn, their mothers, and their families. Human dignity is central to our conception of the moral order and our social ethic. While Christians should rightfully stand against the manipulative and abusive use and collection of personal data in our digital society, we need to remember that a biblical vision of privacy runs contrary to modern notions of privacy built upon moral autonomy and self-determination rather than a full conception of human dignity rooted in God’s design.

Privacy is an instrumental good that should serve the overall common good of both individuals and communities. In order for this to happen, it must be framed in light of our true nature as created beings who are under the authority of an omniscient and omnipotent God. Now more than ever we must not shrink back in fear but seek to retrieve a biblical understanding of personal privacy, which accords with the dignity of every individual and cares for the most vulnerable among us.

By / Feb 1

Each year on January 28, organizations and governments from around the world come together to highlight Data Privacy Day and raise awareness of the immense challenges to personal privacy in our technologically driven society. Data Privacy Day was originally started by the Council of Europe in 2007, and two years later the United States Congress passed two resolutions recognizing January 28 as National Data Privacy Day in the U.S. as well. Increasingly, there is a growing conversation and debate over personal privacy and its purpose in our society, as seen in the recent controversial moves by Apple and its push for more transparency on data collection by apps, as well as the continued push for a federal digital privacy law similar to the European Union’s GDPR and laws in states like California with the CCPA. But among the many challenges of digital privacy today, privacy can mean very different things across segments of our society and is often left undefined, misunderstood, and misapplied in our lives.

Moral autonomy

Law professor Daniel J. Solove states in Understanding Privacy, “Privacy is a concept in disarray. Nobody can articulate what it means.” He goes on to say, “privacy is a sweeping concept, encompassing freedom of thought, control over one’s body, solitude in one’s home, control over personal information, freedom from surveillance, protection of one’s reputation, and protection from searches and interrogations. Philosophers, legal theorists, and jurists have frequently lamented the great difficulty in reaching a satisfying conception of privacy” (1). As individuals across society and varying cultural contexts seek to define the concept of privacy, it often remains elusive because of the many ways we seek to ground privacy in the human experience, namely in the modern understanding of self and personal autonomy.

While it was not the beginning of privacy talk in America, a 1965 United States Supreme Court decision is often seen as a watershed moment for privacy and personal moral autonomy. In Griswold v. Connecticut, Justice William Douglas—writing for the majority—famously applied this sense of personal moral autonomy to the controversies of the sexual revolution and found an “implied constitutional right to privacy,” which was used to justify the ability of married couples to buy and use contraceptives without government restriction. This understanding of an “implied constitutional right to privacy” has significantly influenced the modern-day debates over personal digital privacy and the role of government in moral decision making. But as opposed to a more historic and transcendent understanding of human rights, these individual rights are now seen as cut off completely from the concept of human dignity found in the Christian moral tradition and based in the image of God.

Human rights and privacy

Discussing the modern claims of individual rights, theologian John Kilner in his work Dignity and Destiny states, “It is important to keep rights closely tied to a clear sense of the dignity/sacredness of all people. Otherwise, rights claims can degenerate into mere assertion of self with no regard for others. Human rights are really God’s rights over humanity more than one person’s rights over another” (318). Human rights are the rights of all people as fellow human beings, and advocating for human rights, such as privacy, intrinsically means standing for the dignity of other people and their rights rather than claiming our own. This corporate aspect of human dignity is articulated well by Kilner when he says, “just treatment of all requires taking account of personal and societal relationships in which people live, rather than merely viewing people as individuals” (320). Kilner’s words here directly contradict much of the common discussion around human rights and privacy today because of the current emphasis on moral and personal autonomy. If we merely speak of a right to privacy as personal autonomy, we miss the fullness of privacy’s grounding in human dignity.

A right to privacy is not derived from the moral autonomy of the individual but from the dignity of all people, with the understanding that each life is precious and valued by God himself, who created us as individuals in his image. One of the functions of privacy in this world is to care for the vulnerable among us and uphold their dignity as image bearers in a technologically rich society. As we see each day, data and information about our fellow image bearers can be and will be used, abused, and manipulated toward selfish ends because of the prevailing nature of sin in the world. Technology will be used to control and strip others of their dignity, and one of the main ways this will happen in our digital society is through the misuse of data and information. Hence the great need for a right to privacy grounded in the transcendent reality of human dignity rather than in the pursuit of autonomy and individual freedom.

A Christian moral theory of privacy must be grounded in the Christian understanding of human dignity as opposed to theories grounded in the persistent pursuit of complete moral autonomy and individualistic freedom. The Christian moral tradition shows that privacy is an instrumental and foundational right of all human beings, as individuals and communities, that serves the end of upholding dignity for all, which is grounded in the Christian doctrine of the imago Dei. Armed with this understanding of privacy grounded in the imago Dei, Christians can be equipped to navigate the challenges of this technological society knowing that personal privacy is a God-given right, a right that speaks to the created reality of a life lived under God’s reign and rule, where we can be known but also loved as fellow image bearers. Privacy, then, is to be upheld, respected, and honored in this world of increasing digital surveillance and data collection.

By / Jan 4

2020 was a year that challenged not only the fortitude of our families but also the fabric of our nation. Last year we saw many complex ethical issues arise from our use of technology in society and as individuals. From the debates over the proper use of social media in society to the adoption of invasive technologies like facial recognition that pushed the bounds of our concepts of personal privacy, many of the ethical challenges exposed in 2020 will flow into 2021 as our society debates how to respond to these developments and how to pursue the common good together as a very diverse community.

Here are three areas of ethical concern with technology that we will need to watch for if we hope to navigate 2021 well.

Content moderation and Section 230

Some of the most talked-about ethical issues in technology, even as 2021 is just getting started, are the debates over online content moderation, the role of social media in our public discourse, and the merits of Section 230 of the 1996 Communications Decency Act. If you are unfamiliar with Section 230 and the debates surrounding the statute, it essentially functions as legal protection for online platforms and companies so that they are not liable for the information posted to their platforms by third-party users.

In exchange for these protections, internet companies and platforms are to enact “good faith” protections and are encouraged to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” But what exactly does “good faith” and “otherwise objectionable” mean in this context of the raging debates over the role of social media today? 

This question is at the heart of the debate over Section 230’s usefulness today. Some argue that platforms like Facebook, Google, Twitter, and others must do more to combat the spread of misinformation, disinformation, and fake news online. As platforms have engaged in labeling misleading content and removing posts that violate their community policies, many argue that these companies simply aren’t doing enough.

But on the other side of the aisle, some argue that these Section 230 protections are being used as cover to censor certain content online, often in a partisan and inconsistently applied manner (especially on the international stage), in ways that may amount to violations of users’ free speech. They argue that 230 must be repealed or modified substantially in order to combat bias against certain types of political, social, or religious views.

As technology policy expert and ERLC Research Fellow Klon Kitchen aptly states, “All of these perspectives are enabled by vagaries surrounding the text of the law, the intent behind it, and the relative values and risks posed by large Internet platforms.” Regardless of where one lands in this debate, we will likely see inflamed conversations over this statute and the extent to which it should be maintained if at all.

Facial recognition surveillance

In what may feel like a Hollywood thriller plot, facial recognition surveillance technology is being deployed around our nation and the world, often without us realizing or even understanding how these tools work. Last January, Kashmir Hill of the New York Times broke a story about a little-known facial recognition startup called Clearview AI that set off a firestorm over the use of these tools in surveillance, policing, and security. Thousands of police units across the country were testing or implementing facial recognition in the hopes of better identifying suspects and keeping our communities safer.

But for all of their potential benefits, these tools also have a flip side with extremely complex ethical considerations and dangers, especially when used in volatile police situations. Many of these algorithmic identification tools were also shown to misidentify people with darker skin more often than others because the systems were not trained properly or had inherent weaknesses in their design or data sets.

Throughout 2020, municipalities and state governments completely banned or substantially limited the use of facial recognition in their communities over the potential misuses as well as the racial divisions in our nation. The tools were thought to be too powerful, too heavily relied upon (which could lead to false arrests or worse), or too invasive into the private lives of citizens. In 2021, we will likely see this trend of legislation on facial recognition systems continue, as well as increased pressure on the federal government to weigh in on how these tools should and can be used, especially in policing and government.

Outside of policing, there is likely to be substantial debate over how these tools are used in public areas and businesses as our society begins to open back up after the COVID-19 vaccines are more widely available. The potential for these tools to be used in identification, health screening, and more will lead to renewed debate over the ethical bounds at stake and the potential for real-life harm to those in our communities.

Right to privacy?

Outside of the growing concerns with surveillance technologies like facial recognition, there is considerable debate about the nature and extent of digital privacy in our technological society. Last year, the California Consumer Privacy Act’s (CCPA) regulations went into effect, and we also saw the continued influence of the General Data Protection Regulation (GDPR) from the European Union throughout the world. These pieces of legislation have challenged how many people think about the nature of privacy and have also raised a number of ethical concerns regarding what is known about us online, who knows it, how it is used, and what we can do with that data. Nearly every device and technology today captures some level of data on users in order to provide a personalized or curated experience, but this data capture has come under scrutiny recently across the political spectrum.

Today, some are asking whether personal privacy is simply an outdated or unneeded concept, or whether we as citizens actually have a right to privacy. If we have a right to privacy, from where is that right derived, and how does it align with our other rights to life and liberty? Are we to pursue moral autonomy, or is privacy actually grounded in human dignity? Many questions remain about how we should view privacy as a society and to what extent we should expect it in today’s digital world. As COVID-19 challenged many of our expectations concerning privacy, there will likely be a renewed focus on the role of technology in our lives and the extent to which the government has a role in these debates.

It is far too easy to take a myopic view of technology and the ethical issues surrounding its use in our lives. Technology is not a subset of issues that only technologists and policy makers should engage. These tools undergird nearly every area of our lives in the 21st century, and Christians, of all people, should contribute to the ongoing dialogue over these important issues because of our understanding of human dignity grounded in the imago Dei (Gen. 1:26-28).

Thankfully, 2020 brought some of these issues to the forefront of our public consciousness. While 2021 will likely offer a plethora of things to engage with, we should address the pressing ethical challenges that technology poses in order to present a worldview able to meet these monumental challenges to our daily lives.

By / Aug 10

Over the last few weeks, there has been a cultural firestorm over the viral video sharing app TikTok and a potential ban in the United States. TikTok’s usage surged during the COVID-19 pandemic lockdowns, with millions of users finding reprieve during this difficult season of isolation and social distancing. My colleague Conrad Close and I recently wrote about this application that has taken the world by storm. It is the first major mobile application to be built specifically for the smartphone era and has been wildly successful, with rival social media companies seeking to catch up or even ride the momentum of its innovative approach to video sharing. From Instagram’s newly released Reels to the promised YouTube Shorts, major technology companies see the success of TikTok and desire to be a part of this shift in the way people connect and share information.

Alongside the enjoyable family dance videos, jokes, and even political activism on TikTok, there is a considerable threat to freedom, human rights, and personal privacy that often flies under the radar, owing to TikTok’s contentious relationship with the Chinese Communist Party (CCP) and the party’s involvement with private companies. This is one of the main reasons that the United States government has been exploring options of banning TikTok or encouraging the sale of its U.S. operations to a non-Chinese company like Microsoft.

Often these threats to freedom and human rights are characterized as an overreaction to legitimate competition from rival technology companies, but this is a truncated view of the power and influence of the CCP, not only in China but also throughout the world. The argument centers on the idea that Chinese companies should have the right to export their values and compete on the open market like anyone else. But should these Chinese technology companies be treated differently on the world stage?

State-backed technology

In The Third Revolution: Xi Jinping and the New Chinese State, Elizabeth C. Economy writes on how President Xi Jinping’s Chinese state embraced technology early on to strengthen the power and influence of the state, while at the same time limiting the freedom and democratization of information for its people. Under Xi Jinping, the CCP wants to expand its global political influence through China’s innovation hubs, social media companies like TikTok, and its growing economic output through manufacturing. But it also rejects the fundamental democratizing aspects that come with the free flow of information in the public square.

From the “Great Firewall,” which filters internet access and content by allowing only “acceptable” content to reach the Chinese people, to the use of facial recognition technologies powered by artificial intelligence to track and detain government dissidents (including religious minorities like the persecuted Uighur Muslims), China’s heavy-handed approach to technology and state leadership has allowed it to emerge as a global superpower on the world stage without any true accountability. The CCP’s influence in global affairs has also led nations around the world to passively accept Chinese dogma such as the controversial “One China” policy in relation to Taiwan, growing influence in Hong Kong through the recently enacted security law, and insider access to valuable data and information collected by its rapidly growing technology sector. This influence includes control over companies and their data collection, such as TikTok’s parent company ByteDance, 5G network and telecom provider Huawei, and even the popular messaging app WeChat.

U.S. Secretary of State Mike Pompeo said last Wednesday, “With parent companies based in China, apps like TikTok, WeChat, and others are significant threats to personal data of American citizens, not to mention tools for CCP content censorship.” These threats to personal data and privacy stem from the unique relationship between Chinese companies and the Communist Party. Chinese technology companies like ByteDance are required to cooperate with “state intelligence work” per the 2017 Chinese National Intelligence Law. This type of arrangement not only allows Chinese interference in personal data captured by these applications, but also gives the government wide-reaching control and power over how these companies operate and with whom they associate.

According to the Canadian government’s assessment of the 2017 intelligence law, “the law’s vague definition of intelligence in the opening articles suggests intelligence includes both information collected and activities conducted in support of comprehensive state security.” This broad and overreaching authority over a company’s affairs is concerning on a number of fronts, but none more important than issues surrounding basic human rights and freedoms. In an ironic twist given the Chinese restrictions on an open and free internet, Reuters reports that in response to the U.S. government’s actions last week, the Chinese foreign minister, Wang Yi, said the United States “has no right” to set up a “Clean Network” and called the actions by Washington “a textbook case of bullying.”

Standing for the oppressed

It is understandable that there would be controversy surrounding the potential ban of TikTok throughout the world, especially in the United States, because of the popularity of the app and the relative freedoms we experience. Our nation is based on a democratic form of government, where our government leaders are accountable to the people and our nation’s laws are subject to our elected representatives. While questions and concerns abound about the proper role of government and of technology companies in the public square, the United States (and other Western nations) have the ability to enact change and even protest the presence of repressive measures in ways that the Chinese people simply do not. We also have the ability to publicly disagree with our government’s positions and decisions. This is part of what separates our nation from authoritarian regimes like China.

Chinese citizens are denied basic human rights and are subject to draconian laws that seek to dehumanize and suppress any dissidents against the CCP’s power and control. This is clearly seen in the recently enacted Hong Kong security law which bans sedition, secession, and subversion. As my colleague Chelsea Patterson Sobolik has said, “China is remaking Hong Kong in its own image, and freedom-loving men and women on the island-city and around the world are concerned. Hongkongers have watched how the communist government treats its citizens, severely restricting their freedoms of religion, assembly, and speech.”

China has a long record of blatant human rights and religious liberty violations that have been thrust onto the world stage with the continued revelations of the treatment of Uighur Muslims and other minorities. These men and women created in God’s image have been subjected to concentration camp-like facilities, forced labor, renunciation of their faith, and government propaganda, all in the name of strengthening Chinese “national security”—a cover for authoritarian power grabs and state control.

The security and privacy concerns with Chinese technologies like TikTok, Huawei, and others do not only concern American citizens. These issues also extend to the treatment of other image-bearers who do not experience the same freedoms we are guaranteed in this country. Christians should be among the first to stand up against and speak clearly on these blatant violations of human rights because we believe that every person—no matter where they live or what they believe—is an image-bearer of God himself and deserves the utmost respect and dignity.

Often standing up for the oppressed means giving up some of your own freedoms and opportunities as you seek to see justice enacted and human life valued. These sacrifices pale in comparison to the lack of freedom and opportunity experienced by our fellow image-bearers, especially those under the heavy authoritarian hand of the Chinese Communist Party. We stand with the oppressed, all of whom are created in the image of our Maker, because our Savior bled and died for us when we had nothing to offer and stood oppressed by our own sin.

By / Oct 7

On Jan. 1, 2020, the California Consumer Privacy Act (CCPA) will take effect in the Golden State, but its reach will go much further than you might expect. Signed into law by Gov. Jerry Brown on June 28, 2018, the CCPA is a groundbreaking piece of legislation that will forever change how each of us uses technology products and how U.S. companies use our consumer and business data. Regardless of your political views on privacy and data issues, this California law will likely become the de facto law of the land because most technology companies, like Twitter, Facebook, Google, and Apple, are headquartered there. Thus, they must adhere to California law as they offer services to the rest of the nation as well as the larger international community.

What is the CCPA?

CCPA is a piece of legislation designed to give technology users enhanced privacy rights and consumer protections surrounding the use of personal data. CCPA will essentially allow you to see what personal data a company has collected on you and how it is being used, and allow you to delete that data or stop the company from selling it to third parties. The legislation was introduced on Jan. 3, 2018, in the California legislature by Assemblymember Ed Chau and State Sen. Robert Hertzberg. It was passed by both houses of the California legislature and signed into law on June 28, 2018, by Gov. Brown to amend Part 4 of Division 3 of the California Civil Code, a set of statutes that governs the obligations of those who reside in California. Prior to the CCPA being signed into law, there was a strong effort among many California residents for some form of privacy regulation. The passage of the CCPA headed off a ballot initiative, led by privacy advocate and real estate developer Alastair Mactaggart, that would have gone before the voters during the midterm elections in November 2018.

The current law has come under intense scrutiny from privacy advocates and others. Privacy advocates argue that the bill does not go as far in establishing personal data privacy rights for individuals as other laws, such as the GDPR, while others argue that it will do irreparable damage to businesses and their ability to sell their services. Opponents argue that the sheer cost of implementation outweighs the potential benefits to consumers, who have already given consent for the capturing of their data.

The CCPA has six major components. It gives users the ability to: 1) know what data has been collected on them; 2) know if this data has been sold and to whom; 3) say no to the sale of this data; 4) access this personal data; 5) request the deletion of this data; and 6) not be discriminated against for exercising these rights.

What does this mean?

The U.S. does not currently have any federal privacy regulation pertaining to the collection, use, and sale of personal data as broad as the CCPA. While several federal statutes regulate the collection of data on minors and the consent required to do so, the U.S. has historically sought to let the market govern these tools, as opposed to the more regulatory frameworks found in the European Union and other countries. The EU enacted the General Data Protection Regulation (GDPR) on May 25, 2018, which has already affected many U.S. companies doing business in the EU.

You likely have seen various aspects of the GDPR implemented as you browse the web and use technology services. In conjunction with the GDPR, many sites implemented detailed privacy policies, sought renewed personal consent for internet tracking via cookies, and publicized their privacy policies on their websites and through email correspondence. All of this was to ensure that these companies and organizations complied with the GDPR rules even though they reside in the U.S., because of the global use of the internet and these services. Many companies extended these privacy tools to the wider public as they complied with the GDPR, as described in Microsoft president Brad Smith’s new book, Tools and Weapons: The Promise and the Peril of the Digital Age.

U.S. retailers are estimated to spend almost $100 million to provide these services to consumers because this level of data access requires rethinking and rebuilding their services and systems to comply with the law. The stakes for noncompliance are high, as consumers and the California attorney general can now bring lawsuits for data breaches and pursue regulatory action, including potential fines. After the first year of the new regulations under the GDPR in the EU, the European Data Protection Board reported that €55,955,871 ($61,227,564 USD) in fines had been levied against companies for not complying with the GDPR, including a single fine of €50,000,000 against Google.

Since the CCPA was passed in the California legislature, there has been a concerted effort among many in the privacy and technology sectors to push for a federal privacy law. Some advocate for a federal version of the CCPA, giving all U.S. consumers the same level of protection and transparency, while others have pushed for a more neutral privacy regulation that is less taxing on companies while providing more limited consumer-level access to data. In September 2019, more than 50 CEOs urged the U.S. Congress to pass a federal privacy law.

Why does this matter?

Each day countless pieces of data are collected about us from the online services we use. Every bit of data is captured by technology companies and used to strengthen their systems and products. The things we share have also become a powerful resource for companies to leverage as they provide predictive products to marketers and other companies. We often trade some level of privacy to have access to these tools and services because they provide immense benefits to our everyday lives.

This data can include personal information such as name, email, race, sex, gender identity, and various other data points which are used to market services and products. Essentially anything put online can be stored, analyzed, and sold by the companies whose products we use. But recently many have called into question the ethical bounds of marketing and even what data is being captured on our children and the effects on their privacy.

CCPA and other forms of future privacy legislation will affect how each of us uses technology and may even alter our interaction with these companies, for good and bad. With the high costs of operating compliant systems, some companies may choose not to offer certain services or tools to consumers. But it is also possible that privacy legislation will allow us to use technology with greater transparency and openness. Time will tell the exact impact the CCPA will have on businesses and consumers, but we must be aware of its contours as it goes into effect on Jan. 1, 2020.