Apple, FBI, and Cybersecurity: The Basics

March 22, 2016

Apple CEO Tim Cook’s letter about the FBI brought forth both emotional outrage and joyful fist-pumping from the tech community. It has also brought a very public spotlight to a conflict between two great problems that plague the modern era: security against threats and the safety of our personal data. The implications of this public debate for cybersecurity, law enforcement, case law, and digital privacy are worth considering.

But if you’re just tuning in, here are the basics, the debate so far, and what is ahead. There are a lot of interlocking pieces, so it’s OK if you’re just getting a handle on this issue.

In December 2015, Syed Rizwan Farook and Tashfeen Malik murdered 14 people at a party at the Inland Regional Center in San Bernardino, California. It is now known they had ties to jihadi groups, and this was the deadliest terrorist attack on American soil since 9/11. Farook was an employee of the San Bernardino County health department and was issued a work phone, an iPhone 5C running iOS 9. Farook and his wife destroyed their personal phones, on which they presumably did much of the coordinating for the attack. Farook’s work phone, the phone in question, has been in law enforcement possession since his death.

However, Farook’s iPhone is locked by the famous iPhone passcode screen. (iPhone 5Cs do not have fingerprint security.) Furthermore, Farook may have enabled a feature that scrambles the phone’s information after multiple incorrect passcode guesses. This presents a nearly insurmountable obstacle for the FBI. Given these circumstances, the FBI decided the best course of action was to request that Apple help bypass this security feature by creating special software to disable the passcode and scrambling features. So, on February 16 a federal judge in Riverside, California, issued a court order mandating that Apple help the FBI in essentially two ways: 1) bypassing or disabling the auto-erase function, and 2) enabling the FBI to guess the passcode however it wants, as quickly as it wants.

The authority law enforcement is relying on is the All Writs Act, a federal statute the government has historically used to compel cooperation in law enforcement activities. The FBI says it wants Apple to create this workaround software for only this one deceased terrorist’s iPhone, and it has offered to let Apple keep complete control of the phone, the software, everything. The FBI just wants the data on the physical phone on the chance there’s any additional evidence in this case.

Before Tim Cook’s public letter, Apple complied with law enforcement’s initial request for information from the iPhone 5C by handing over its iCloud records. However, since the iCloud records only went up to October 19, 2015, this left an obvious gap in time. Adding to the complexity, in a March 10 filing the government said there is evidence that the iCloud auto-backup feature may have been disabled sometime between October 19 and Farook’s death on December 2. The FBI, doing its due diligence, thinks additional information could be located on the phone, but it can’t get in because of the aforementioned combination of passcode lock and potential information scramble should the passcode be entered incorrectly. And that’s if that feature is even enabled; the FBI would rather not try its luck.

Apple has built this security feature into its operating software since iOS 8 was released in September 2014. If this were any earlier version of the iPhone software, then, from a technical perspective at least, the FBI wouldn’t be facing this issue in quite the same way. With a simple passcode and no auto-scramble feature, the FBI could “brute force” its way into the phone by hooking it up to a more powerful computer and running all the 4-digit passcode combinations (10,000, by the way) until it guessed the right one. But with iOS 9, the operating system on Farook’s iPhone, a user can set a 4-digit, 6-digit, or alphanumeric passcode. This stretches the time needed to guess the correct passcode from a few hours to as long as 5.5 years.

A quick aside: a brute force hack is simply guessing passwords until the correct one is discovered. This is why websites encourage you to choose passwords more complex than 1234 or 1122; those are easy to brute force. With those passcodes or similarly simple ones, a thief sitting on the couch could break into your account in a matter of minutes, letting a computer do all the work.
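To make the arithmetic concrete, here is a minimal Python sketch of the idea, under stated assumptions: `check_passcode` is a hypothetical stand-in for a device’s lock screen, with no attempt limits or delays. It simply walks the whole passcode space, which is why a 4-digit code falls instantly while longer alphanumeric codes do not.

```python
from itertools import product

def brute_force(check_passcode, alphabet="0123456789", length=4):
    """Try every possible passcode of the given length until one unlocks."""
    for attempt in product(alphabet, repeat=length):
        guess = "".join(attempt)
        if check_passcode(guess):
            return guess
    return None  # exhausted the entire search space

# A hypothetical lock screen protecting a 4-digit passcode.
secret = "7316"
found = brute_force(lambda guess: guess == secret)
print(found)  # all 10,000 combinations can be checked almost instantly

# The search space grows exponentially with length and alphabet size:
print(10 ** 4)   # 4-digit numeric: 10,000 combinations
print(10 ** 6)   # 6-digit numeric: 1,000,000 combinations
print(62 ** 6)   # 6-char alphanumeric (a-z, A-Z, 0-9): ~56.8 billion
```

Real devices defeat this approach not by making the math harder but by limiting the guess rate and, as with the iOS feature discussed above, erasing data after too many wrong attempts.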

Some other important events have occurred since this very public debate burst onto the scene on February 16. For a full timeline of events, here is a good report from USA Today. On February 29, a New York magistrate judge ruled that the FBI could not use the All Writs Act to compel Apple to unlock an iPhone 5S running iOS 7 that is evidence in a drug case. While that case is completely unrelated (the request, circumstances, and software are different), it is significant in that the judge ruled the All Writs Act was not sufficient to compel Apple’s cooperation.

The legal back and forth on this case has been very public. Apple, a New York District Attorney, and the FBI appeared before the House Judiciary Committee on March 1 to present their sides of the case. There has also been an appeal from Apple, a Justice Department response, and a response from Apple. On March 22, the Justice Department and Apple will present their cases in a public hearing in Riverside, California. No matter the outcome for either side, this case could proceed all the way to the Supreme Court.

From a cybersecurity standpoint there are a couple key factors to keep in mind:

First, digital information resists containment. Say Apple complies and creates software to bypass its security measures; doing so would require a team of 6-10 engineers. Software is inherently different from a physical good in that it can be copied or distributed infinitely with no discernible effect on the end product. Of course Apple and FBI employees can and should be trusted, but human nature being what it is, there is no guarantee the software will not get out. In cybersecurity, human error or malfeasance is the number one weakness. An institution can set up all the right protections against outside attack, but as we saw in 2013 with Edward Snowden, it takes just one person on the inside to release the information to the world.

Second, insecurity for one means security for none. In the realm of electronic information, creating software to bypass a security feature essentially nullifies that feature for everyone. A back door to encryption (encryption being a feature that makes information unreadable and nearly unguessable without a key) means that encryption is now worthless, because someone out there can get into it. This is therefore not merely a domestic issue but an international one, since Apple operates around the globe. Internationally there are likely hundreds, if not thousands, of locked iPhones possessed by governments both well-meaning and not-so-well-meaning. Were Apple to comply, other countries would come to Apple’s door with requests for the bypass software. And because the United States is the land of the free, with those pesky first ten amendments protecting civil liberties and human rights, what it does draws attention and emulation from other governments.

Finally, how should a Christian consider this issue? In Matthew 10:16, Jesus told his followers to be “as wise as serpents yet as harmless as doves.” When considering this issue we must take into account not just the terrible tragedy and the families affected by the December attack, but also the further ramifications for security in this information age. This is not an isolated case free of international or future ramifications. Lives everywhere, including those of persecuted Christians who use iPhones and other encrypted services to protect their ministries, may be at risk now and in the future without strong information security. It is an issue for the whole Church to consider prayerfully and carefully.

Update: In a very surprising move, yesterday afternoon the FBI requested that the much-anticipated March 22 hearing be canceled because an “outside party” has provided a possible way of unlocking Farook’s iPhone without Apple’s help. Although this particular case could be resolved if the “outside party” is successful (the FBI will provide a status update to the court by April 5), the broader debate will certainly continue as law enforcement and technology become increasingly intertwined.

Taylor Barkley

Taylor Barkley is the Government Affairs Manager at the Competitive Enterprise Institute, where he works to develop relationships with U.S. lawmakers and agency staff, expand CEI’s policy opportunities on Capitol Hill, and educate government officials and regulators through briefings and other events.

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone. 

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Romans 13:1-7; 1 Peter 2:13-14

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thessalonians 4:3-4

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, non-maleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being. 

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24