New HHS Rules Would Simplify Emergency Data Sharing Procedures

Officials at the Department of Health and Human Services (HHS) are proposing modifications to the HIPAA Privacy Rule to ease information sharing in emergencies. Some of the modifications being discussed would provide patients with affirmative legal rights, and others would loosen restrictions on medical personnel. The timing of the discussions poses a challenge to the likelihood of implementation: changes to the rules cannot be finalized before January 20, 2021, when President-elect Joe Biden will take office. Public comments are due 60 days after the proposed rule's publication in the Federal Register. HHS is seeking input from HIPAA-covered entities, other healthcare and technology stakeholders, consumers, activists, and patients. Trump Administration officials have touted these proposed changes as demonstrating a commitment to giving individuals greater access to their health information and as a way to deregulate the health care industry.

The proposed rules give individuals greater rights. For example, individuals would be permitted to use personal resources to view and capture images of their protected health information (PHI) when exercising their right to inspect their PHI. The proposed rules would also require covered entities to inform individuals of their right to obtain copies of PHI when a summary of PHI is offered instead.

A change that would require significant guidance is the proposal to reduce the identity verification burden on individuals exercising their access rights, given that HIPAA requires a covered entity to take reasonable steps to verify the identity of an individual making a request for access. The method of verification is left to the judgment of the covered entity. The proposed rules would also require covered healthcare providers and health plans to respond to certain records requests received from other covered healthcare providers and health plans when directed by individuals pursuant to the right of access. In addition, providers would have less time to respond to individual access requests, with the response deadline shortened from thirty days to fifteen days.

The proposed rules would give covered entities greater latitude to disclose PHI to “social services agencies, community-based organizations, home and community-based service providers, and other similar third parties that provide health-related services.” Currently, covered entities are permitted discretionary disclosures of PHI based on their professional judgment. The proposed rules would make the measure more subjective, with a standard based on that entity’s “good faith belief that the use or disclosure is in the best interests of the individual.”

HHS officials believe that the proposed rules would help “reduce the burden on providers and support new ways for them to innovate and coordinate care on behalf of patients” while ensuring HIPAA’s promise of privacy and security.

Newly Formulated Contract Terms Are the Key to EU-US Data Transfers

On December 2, 2020, the European Commission and the European Union (“EU”) foreign affairs service issued a joint statement outlining goals for the EU’s relationship with the United States. The statement highlighted areas of shared interest, including cooperation on “cybersecurity capacity building, situational awareness, and information sharing.” The two sides could coordinate to counter attacks attributed to third countries. The EU bodies also welcomed greater parallel action on artificial intelligence, seeing it as an opportunity to express their common “humancentric approach.” On privacy and data governance, however, the statement was clear in calling EU and American views divergent.

A recent Politico article suggests that reaching a new data protection agreement between the EU and the United States is critical to repairing the transatlantic relationship. Without a legal mechanism for American entities to transfer EU personal data to the United States, companies will have to store data on their European customers in Europe, which is very costly and may be unaffordable for small and medium-sized enterprises. On July 16, 2020, the Court of Justice of the European Union (“CJEU”) invalidated the Privacy Shield, the agreement between the EU and the United States that had allowed data to be transferred between the two jurisdictions.

The Privacy Shield was the EU’s and the United States’ replacement for an agreement called Safe Harbor. Safe Harbor allowed companies sending EU citizens’ data to the United States to subject themselves to EU-style privacy protections, enforced by the United States government. Revelations about the US NSA’s access to data led to greater scrutiny from the EU of American privacy practices. In particular, Austrian privacy activist Max Schrems challenged the Safe Harbor agreement, arguing that American surveillance rendered it invalid because it conflicted with EU law. The CJEU agreed with Schrems and ruled that Safe Harbor did not properly protect EU data.

Despite divergent regimes for protecting personal data, the United States and the EU had previously been able to come to terms to allow data transfers between them: first with the Safe Harbor, which existed from 2000 until 2015, and then with the Privacy Shield, which the CJEU invalidated in 2020. The CJEU’s repeated unwillingness to trust America’s privacy regime invites natural skepticism that a third deal would end differently, even if such a deal is pivotal to transatlantic relations. Max Schrems has likened deals between the two to the United States telling Europe that its citizens have no rights.

In the same decision that invalidated the Privacy Shield, the CJEU stated that Standard Contractual Clauses (“SCCs”) remain a legal means to transfer data from the EU to countries that have not been designated as “adequate” data protection jurisdictions by the European Commission. The CJEU did caveat that there would be instances, particularly where government surveillance creates risks for data subjects, in which additional risk mitigation measures must be put in place to supplement the SCCs.

On November 11, 2020, the European Data Protection Board (“EDPB”) evaluated the CJEU’s ruling and issued guidance. The EDPB requires businesses to evaluate whether foreign governments could access an EU data subject’s personal data, rather than relying on a specific entity’s history of freedom from such government access to conclude that the risk is low. Since then, the European Commission has also released a new draft of the SCCs, broadened to recognize the complexities of international business relationships. The draft set of clauses permits two novel processing relationships: EU-based processor to non-EU processor, and EU-based processor to non-EU controller. The existing version of the SCCs addressed two data flow scenarios: an EU-based controller exporting data outside of the EU to other controllers, or to processors. The feedback period on these proposed new SCCs ends today. Barring a new agreement between the EU and the United States, entities transferring EU personal data will be leaning heavily on SCCs using the aforementioned guidance.

Zoom Settles with the FTC on Video Surveillance and Encryption Overstatement

The Federal Trade Commission (“FTC”) conducted an investigation into Zoom Video Communications, Inc.’s (“Zoom”) privacy and security practices and announced a settlement agreement on November 9, 2020. As part of the agreement, Zoom agreed to establish and implement a comprehensive security program and is prohibited from making privacy and security misrepresentations.

Zoom’s popularity as a platform increased significantly as a result of the pandemic. The FTC stated that the company’s traffic grew from 10 million daily users in December 2019 to a peak of 300 million daily in April 2020.

The FTC alleged that Zoom’s claim that its video calls were protected by end-to-end encryption was among the “deceptive and unfair practices that undermined the security of its users.” According to its website and security white paper, Zoom represented that meetings using computer audio were secured with end-to-end encryption. However, The Intercept reported that the encryption Zoom used to protect meetings was actually transport encryption, which allowed the Zoom service itself to access the unencrypted video and audio content of Zoom meetings. In the complaint, the FTC also claimed that Zoom’s security practices were lacking, including for some data located on servers in China.
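The distinction matters because transport encryption protects data only between each participant and the service, leaving the service able to read the content. The Python sketch below is an illustration of that difference only, not Zoom's actual architecture, and it assumes the third-party cryptography package is installed:

```python
# Illustrative sketch only -- not Zoom's actual design. Contrasts transport
# encryption (the relaying service holds the key) with end-to-end encryption
# (only the participants hold the key). Requires: pip install cryptography
from cryptography.fernet import Fernet

# Transport-style encryption: the service generates and holds the key, so the
# traffic is protected in transit but readable by the service itself.
service_key = Fernet.generate_key()
service = Fernet(service_key)
in_transit = service.encrypt(b"meeting audio/video frame")
print(service.decrypt(in_transit))  # the service can recover the plaintext

# End-to-end encryption: a key shared only among participants; the service
# relays ciphertext it cannot decrypt because it never receives the key.
meeting_key = Fernet.generate_key()
alice, bob = Fernet(meeting_key), Fernet(meeting_key)
e2e_frame = alice.encrypt(b"meeting audio/video frame")
print(bob.decrypt(e2e_frame))  # only other key holders can read the frame
```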

Further, while Zoom claimed that meeting data was being safeguarded in secure cloud storage, the FTC found that recorded meetings were kept unencrypted on Zoom servers for up to 60 days before being transferred. The FTC also found that Zoom’s meeting launcher software left consumers vulnerable to video surveillance.

The commission itself is divided along partisan lines on the strength of this settlement. FTC Democratic Commissioner Rohit Chopra issued a dissenting statement which said, “The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom’s data protection claims. And it does not require Zoom to pay a dime. The Commission must change course.” Similarly, Democratic Commissioner Rebecca Kelly Slaughter weighed in on the inadequacy of the settlement, saying that “Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false.”

In terms of actual course correction, back in May, Zoom announced the acquisition of Keybase, which it believed would help Zoom build end-to-end encryption at scale. Zoom will also be required to delete all copies of data identified for deletion within 31 days. The comprehensive security program Zoom is required to develop and maintain will include a review for security risks in all software updates and third-party assessments of its security program every two years for 20 years.

Zoom issued a public response to the FTC settlement that characterized the agreement as part of a larger “commitment to innovating and enhancing” its product to “deliver a secure video communications experience.”

School Stakeholders Navigating Student Privacy

Parents, students, and educators are navigating a novel educational landscape. Some schools are relying on a virtual model that requires significant technological involvement, others have opened their facilities for in-school learning with significant testing and safety precautions, and others have created a fusion of the two. Each of these models comes with its own privacy risks.

The Center for Democracy and Technology commissioned research on student privacy during COVID-19 and found that 86% of teachers have expanded the technology they are using. The study also found that 1 in 5 teachers were using technology that had not been approved by either their school or the district to which the school belongs. About half of the teachers in the study had not received training at all; the other half received training focused on legal compliance.

The Future of Privacy Forum (FPF) and 23 other education, healthcare, disability rights, data protection, and civil liberties organizations released a report called “Education During a Pandemic: Principles for Student Data Privacy and Equity.” The report proposes privacy-forward ways of navigating student data privacy in the pandemic environment.

The FPF report focuses on student health data because certain COVID-related data gets reported to local and state health departments. Pertaining to student health data, the FPF report recommends that schools implement a data minimization principle. “Any COVID-19 related requests for or collection of the health information of students, their families, or school staff must be narrowly tailored to the information necessary to determine whether an individual has or does not have COVID-19 or whether a requested reasonable accommodation or modification related to COVID-19 is necessary.”

The proposal also recommends that the information only be allowed to be shared with the community or general public if de-identified or aggregated. The FPF did caveat that privacy concerns should not prevent the disclosure of de-identified information about COVID-19 cases that would allow the community to “adequately protect themselves and policymakers to make evidence-based decisions.”

The Family Educational Rights and Privacy Act (FERPA) is a Federal law that protects the privacy of student education records. FERPA applies to all educational agencies and institutions that receive funds under any program administered by the Secretary of Education. The term “educational agencies and institutions” under FERPA generally includes school districts and public schools at the elementary and secondary levels, as well as private and public institutions of postsecondary education.

FERPA permits educational institutions to disclose, without prior written consent, personally identifiable information (PII) from student education records to appropriate parties in connection with an emergency, if knowledge of that information is necessary to protect the health or safety of a student or other individuals. This is commonly known as the “health or safety emergency” exception to FERPA’s general consent requirement. The Department of Education’s FERPA & COVID-19 guidance provides that “law enforcement officials, public health officials, trained medical personnel, and parents (including parents of an eligible student) are the types of appropriate parties to whom PII from education records may be disclosed under this FERPA exception.” The health or safety emergency exception is not meant to apply to a “generalized or distant threat of a possible or eventual emergency for which the likelihood of occurrence is unknown, such as would be addressed in general emergency preparedness activities.” However, the guidance is clear that if an educational agency or institution determines that “an articulable and significant threat exists to the health or safety of a student in attendance at the agency or institution (or another individual at the agency or institution) as a result of the virus that causes COVID-19, it may disclose, without prior written consent, PII from student education records.”

When it comes to remote learning, the FPF proposes that schools have a data governance system, including policies and procedures, in place to monitor data access, sharing, and transfers. The FPF warns educational institutions not to make decisions based solely on results obtained from technology. It refers to the technologies adopted to combat the pandemic as “imperfect” and capable of producing false positives. Certain analytics tools, it claims, would be discriminatory if used as the sole measure of a student’s performance or abilities.

FERPA would not apply to a student’s participation in virtual learning in itself. However, where the setting requires recording and storing a student’s image, name, or voice, the recording may become a FERPA-protected education record. Educational institutions will need to evaluate the use of particular technologies on a case-by-case basis.

COVID-19 has presented school stakeholders with novel student privacy issues. Solutions are being proposed, and some existing legal guidance is available to help schools, districts, educators, parents, and teachers navigate these pandemic times.

Singapore Captures Citizens’ Faces to Manage Daily Life

Facial recognition technology has been controversial lately, as governments struggle with how to use this powerful tool. The city-state of Singapore has begun to build facial recognition into the very fabric of society – the system that facilitates interaction with both the government and the private sector.

Singapore currently has an advanced digital identity program called “SingPass” which allows users to verify their identities online and transact with various government agencies and particular private sector entities like banks and healthcare institutions. In total, residents can access more than 400 digital services that range from public housing to gaining access to tax returns. Access to SingPass has required a password.

Singapore’s Government Technology Agency (“GovTech”) was receiving over 150,000 password reset requests a month. A significant change will make passwords unnecessary: GovTech will be integrating a new facial verification feature into SingPass. Singapore made a $1.75 billion investment in smart technology six years ago in an effort to modernize public services, and it is already a world leader in wireless connectivity. SingPass Face Verification will function on home computers, tablets, cell phones, and public kiosks.

Facial recognition technology leverages biometrics to map facial features to verify identities, delving into the geometry of an individual’s face. For example, it measures the distance between a “person’s eyes and the distance from their forehead to their chin.” These metrics create a “facial signature” which is compared to a database of known facial signatures.
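As a rough illustration of the matching step described above, the sketch below (hypothetical feature values and threshold, not SingPass’s actual algorithm) represents each facial signature as a small vector of measurements and matches a probe against a database by Euclidean distance:

```python
# Minimal sketch of signature matching -- hypothetical values, not SingPass's
# actual algorithm. Each "facial signature" is a vector of measurements such
# as eye-to-eye distance and forehead-to-chin distance (arbitrary units).
import math

def distance(sig_a, sig_b):
    """Euclidean distance between two facial signatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

# Database of enrolled signatures (identity -> feature vector).
enrolled = {
    "resident_001": [6.2, 11.8, 4.1],
    "resident_002": [5.9, 12.5, 3.8],
}

def best_match(probe, database, threshold=0.5):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    identity, score = min(
        ((name, distance(probe, sig)) for name, sig in database.items()),
        key=lambda pair: pair[1],
    )
    return identity if score <= threshold else None

print(best_match([6.1, 11.9, 4.0], enrolled))  # -> resident_001
print(best_match([9.0, 15.0, 7.0], enrolled))  # -> None (no close match)
```

Real systems derive far richer feature vectors from face images and tune the match threshold to trade false accepts against false rejects, but the comparison step follows this basic pattern.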

SingPass also has a mobile application. Singapore’s housing board (“HDB”) processed more than 53 million transactions online in 2019, accounting for 99 percent of all its transactions. The change certainly reduces user friction, but it also provides insight into how encompassing this biometric data capture is.

GovTech posits that this use of facial recognition technology is being integrated in a way that reduces the potential for deepfakes, and insists that the data collected is “purpose-driven,” the purpose being the particular transaction being processed. GovTech further claims that the facial recognition data will be stored on secured government servers for only thirty days. In trying to comfort concerned residents, GovTech claims that this data will not be shared with the private sector. However, there are reports that banks have tested SingPass’ facial recognition technology as recently as this summer.

We have seen governments take different approaches to facial recognition technology. After seeing the Chinese government use facial recognition as a law enforcement tool to subject more citizens to the criminal justice system for petty crimes, American cities have become resistant to enabling government use of this technology. Many have expressly prohibited police departments from using it for fear of invading individuals’ privacy. We recently wrote about Portland even curtailing private use of the technology.

Consumers are used to devices that integrate facial recognition technology to access services quickly. Many mobile devices and mobile applications give users the option to bypass password inputs in favor of facial recognition. Giving a government universal access to residents’ biometrics, however, may test consumer sensibilities that balance technological friction and privacy. As with so many things that combine technological tracking with an allegedly free society, Singapore will be leading the way.

You Have Been Weighed in the Balance and Found Wanting (Or at Least, Inadequate)

Article 45 of the GDPR allows the transfer of personal data from the EU to a third country when the third country ensures an “adequate level of protection” (adequacy decision).  In determining “adequacy,” the GDPR provides specific factors to consider including the country’s respect for human rights, the effectiveness of its data protection authority, and its pre-existing obligations to other countries. Adequacy decisions are subject to periodic review (minimally, every four years) and require ongoing monitoring.

The European Commission adoption of an adequacy decision means that personal data can flow safely from the EU to the other country without being subject to any further safeguards or authorization. The adoption of an adequacy decision involves: (1) a proposal from the European Commission; (2) an opinion of the European Data Protection Board; (3) approval from representatives of EU countries; and (4) the adoption of the decision by the European Commission.

The European Commission has already recognized Andorra, Argentina, the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, Switzerland, and Uruguay as providing adequate data protection. In the wake of the Schrems cases and recent TikTok discussions, there has been significant scrutiny of government access to data around the world. The CJEU justified invalidating the Privacy Shield mechanism for transferring personal data from Europe to the U.S. by pointing to the nature of U.S. government access to private-sector data. India and the United States have ramped up pressure on TikTok to either divest its Beijing-based ownership or lose access to their respective markets, with each nation pointing to China’s cybersecurity law, which preserves government access to private data. Yet government demands for data held by the private sector are becoming commonplace.

In the United States, under provisions of the Foreign Intelligence Surveillance Act (FISA), a special court order can compel certain telecommunications service providers to disclose communications that may impact national security. Similarly, in Germany, telecommunication providers are mandated to collect particular data from their customers. These data elements include name, address, and telephone number, which German law refers to as “inventory information.” This inventory information is sent to the Federal Network Agency, and other agencies may request it as well.

The French surveillance law of 2015 goes even further, granting the French government express access to metadata in messaging, authorizing the production and use of algorithms to hunt for suspicious data that the government can capture and review, and allowing government access for multiple reasons, including economic espionage. The French law also authorizes the government to analyze digital information affecting the national defense, foreign policy interests, and major economic, industrial, and scientific interests of the French government, as well as to prevent terrorism, organized crime, and immediate threats to public order. Somehow the EU finds these to be adequate safeguards for individual privacy.

Under its cybersecurity law, the Chinese government has the right to obtain from any person or entity in China any information the government deems to have any impact on Chinese security. The Indian government has developed a central monitoring system with the means to intercept electronic communications and correspondence, including e-mails, text messages, and voice calls.

The Brazilian Communications Agency intended to build technology to connect directly into telecommunication companies’ systems in an effort to gain access to a very particular type of data, including which numbers were dialed, the time and date the calls took place, and the duration of the calls. Some states, like Russia, Thailand, and Malaysia, simply provide no practical protection from government capture and use of individual data.

With governments around the world enhancing their surveillance capabilities, perhaps we are heading to a perpetual state of inadequacy (at least for GDPR purposes).

The CPRA Will Bring New Rights, Responsibilities and Regulators to California Data Privacy Law

In less than a month, Californians will vote on a consumer privacy ballot initiative, the California Privacy Rights Act (“CPRA”). The California Consumer Privacy Act (“CCPA”) went into effect on January 1, 2020, and the state Attorney General (“AG”) began enforcing the law’s provisions on July 1, 2020. While the AG and others have touted the CCPA as “groundbreaking,” the activists behind the original CCPA initiative in 2018 maintained that California’s privacy law was a baseline and that consumers deserve additional rights. If the CPRA initiative is successful, most of its provisions will go into effect on January 1, 2023, and the CCPA will remain effective until then.

In 2018, a ballot initiative was proposed to create consumer privacy protections. Activists, business interests, and state legislators convinced the creators of the ballot initiative to drop the proposal in favor of allowing the CCPA to be passed. Because the CCPA was legislatively enacted, significant amendments were considered and some passed. However, if the CPRA is approved by California voters and becomes state law, it could not be readily amended without further voter action. Recent polling indicates that the CPRA is likely to pass.

Below are some of the more significant changes that the CPRA will bring to legal enforcement, consumer rights, and the obligations of the business community.

Enforcement Agency

The CPRA moves away from the existing American model of state Attorneys General enforcing privacy law, proposing instead a new state agency, the California Privacy Protection Agency. Like the “Supervisory Authorities” under the GDPR, this agency would be charged with enforcing only the California data privacy law, and it would have a dedicated funding stream to meet its enforcement tasks. The proposed agency would be composed of five members appointed by various governmental stakeholders, including the Governor, Attorney General, State Senate, and Speaker of the Assembly.

Sensitive Personal Information

The CPRA would create a new category of “sensitive personal information” that would require distinct treatment. Sensitive personal information is defined to include social security numbers, financial information, geolocation, genetic data, and other biometric information. The distinct treatment includes granting consumers the right to limit disclosure and use of sensitive personal information except as “necessary to perform the services or provide the goods reasonably expected by an average consumer who requests such goods and services.” Clear and conspicuous links would need to be provided so that consumers are able to exercise this right.

Disclosure Obligations

The CPRA would impose new disclosure requirements and require businesses to abide by the representations made under them. Businesses would be required to disclose how long they will retain personal information, the purposes for which they collect it, and the volume of personal information collected. Misrepresentations or breaches of those representations would constitute statutory violations.

GDPR Aspects

Like the GDPR, the CPRA would grant consumers the right to correct inaccurate personal information. Upon a verifiable consumer request, businesses would be required to use commercially reasonable efforts to correct inaccurate personal information about a California resident. The CPRA also follows the GDPR’s lead in introducing data minimization on a larger scale. Specifically, the CPRA would require that a business’s collection, use, retention, and sharing of a consumer’s personal information be “reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed.” The GDPR gives EU data subjects the right not to be subject to solely automated decision-making processes; the CPRA would similarly grant consumers the right to opt out of automated decision-making.

Violations

If the CPRA becomes law, the new enforcement agency could administer fines of $2,500 for each statutory violation, or up to $7,500 for each intentional violation or violation involving children’s personal information.

Senate Republicans Stitch Together Safe Data Ideas into New Bill

Last week, the Republican members of the Senate Commerce Committee, including the chair of the committee, Roger Wicker, introduced the Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act (“the Safe Data Act”). Last November, Senator Wicker had a working draft called the United States Consumer Data Privacy Act of 2019. The Safe Data Act resembles the 2019 proposal in most ways but includes a few significant changes.

Like the United States Consumer Data Privacy Act, the Safe Data Act provides the consumer rights that have been granted in the California Consumer Privacy Act (“CCPA”) and the GDPR, such as the rights to access, notice, deletion, opting out, and correction, as well as a right to data portability. The Safe Data Act also prohibits covered entities from discriminating against consumers who exercise some of the proposed rights: organizations would be prohibited from denying goods or services to an individual because the individual exercised any of the rights afforded by the bill.

The Safe Data Act also aligns with its predecessor proposal in requiring companies to obtain affirmative express consent before processing or transferring individuals’ sensitive data. The bill partially incorporates principles from the GDPR, such as applying data minimization requirements to large data-holding companies; this minimization would apply to all data collected, processed, and retained. Unlike Senator Wicker’s proposal last year, the non-discrimination provision applies only when an individual exercises the rights of access, correction, and portability. The bill also removes an exception provided in the previous proposal to retain and use data for internal purposes (research, service improvements, etc.).

The Safe Data Act finances the implementation of the bill through a $100 million appropriation to the Federal Trade Commission (“FTC”) to enforce the bill’s provisions. The FTC would gain the authority to impose injunctions and other equitable remedies for violations.

The Safe Data Act incorporates provisions from other bills as well. For example, it integrates the Filter Bubble Transparency Act’s notice requirement for a public-facing website or mobile application using algorithmic ranking systems. Further, the Safe Data Act includes provisions from the bipartisan Deceptive Experiences To Online Users Reduction (“DETOUR”) bill, which would make it unlawful for an online service with more than 100 million authenticated users to use a user interface to impair user autonomy. Like DETOUR, the Safe Data Act includes protections for children, such as banning user interfaces that purposefully target children to cultivate compulsive use.

Irish Data Case Against Facebook Could Complicate All Data Transfers to the US

Will the EU finally deny the right to transfer any personal data from its shores to the United States? Its privacy decisions have been inching closer to this determination for years, and an Irish case against Facebook may tip the balance.

For fifteen years, personal data sent from the European Union (“EU”) to the United States was accepted under “Safe Harbor” principles. The Safe Harbor emerged in part as a response to the implementation of the EU’s 1995 Data Protection Directive and to concerns that, with the emergence of the internet, the United States could not guarantee a sufficient level of protection for European citizens’ personal data.

In 2013, however, the Safe Harbor was challenged in the wake of Edward Snowden’s intelligence leaks, which revealed a significant American government surveillance program. The challenge to the Safe Harbor was rooted in the belief that the information of EU citizens stored in the US would be at risk of government surveillance. An Austrian citizen, Maximilian Schrems (“Schrems”), filed a complaint against Facebook with the Irish Data Protection Commission (“DPC”). The DPC declined to investigate the complaint because the data transfer at issue adhered to the Safe Harbor.

Schrems proceeded to challenge the Irish DPC’s refusal to investigate the complaint in court. The Irish High Court referred this challenge to the Court of Justice of the European Union (“CJEU”).  Facebook, like many companies, relied on Safe Harbor to process and transfer EU personal data. In October 2015, the CJEU declared the Safe Harbor invalid. In response, the United States and EU replaced the Safe Harbor with the U.S.-EU Privacy Shield, in order to allow companies to continue to transfer EU citizen’s personal data to the United States while still complying with the requirements outlined by the CJEU in the Schrems decision.

Recently, the CJEU invalidated the Privacy Shield mechanism for transferring data between the EU and the United States. The basis for the decision was once again governmental access to personal data. The recent decision (“Schrems II”) preserved an alternate legal mechanism for companies, Standard Contractual Clauses (“SCCs”), where the data exporter puts in place appropriate safeguards to ensure a high level of protection for data subjects. Some local European data authority decisions and recent actions by the DPC against Facebook have created concern around the use of SCCs as well.

In the DPC’s annual report last year, it disclosed that it had launched 8 investigations involving Facebook for GDPR violations.  A September 9, 2020 article in the Wall Street Journal reported that the DPC had issued Facebook a preliminary order to suspend transfers of EU personal data to the United States.

A spokesman for the DPC declined to comment on the report, but Facebook stated that the DPC had “commenced an inquiry into Facebook controlled EU-US data transfers, and has suggested that SCCs cannot in practice be used for EU-US data transfers.” Facebook is seeking judicial review of the DPC’s preliminary decision because SCCs are a widely accepted tool for transferring EU data to the United States absent the Safe Harbor or Privacy Shield. This legal challenge will be significant to monitor, as it has the potential to implicate every transfer of EU personal data to the United States going forward.

Portland Passes Unprecedented Restrictions on Facial Recognition Technology

Cities have taken the lead when it comes to regulating facial recognition technology. Currently, there is no federal regulation on this kind of technology, nor any policy in place to govern the use of the data obtained through facial recognition. Three cities in California (San Francisco, Oakland, and Berkeley) and Boston have banned the use of facial recognition altogether for their local law enforcement agencies. The state of Oregon has a law on the books banning police use of body cameras with facial recognition technology.

The city of Portland has drawn significant attention recently due to the city’s protests and civil unrest. The U.S. Marshals Service confirmed that it used surveillance technology near the heart of the protest at the Multnomah County Justice Center in downtown Portland.

A National Institute of Standards and Technology report released at the end of 2019 found that the majority of facial recognition technologies have systemic issues that produce varying levels of accuracy based on a person’s age, gender, or race. For example, the study showed Asian and African American people being up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities. These demographic differences extend to gender as well: African American women were falsely identified more often than others in the kind of database searches used in police investigations, women generally were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups. Meanwhile, middle-aged white men generally received the highest accuracy rates.

In an effort to curb some of these disparities, the Portland City Council unanimously passed what it purports to be the toughest facial recognition ban in the nation. One of the two ordinances introduces the legislation by saying that “Portland residents and visitors should enjoy access to public spaces with a reasonable assumption of anonymity and personal privacy. This is true particularly for those who have been historically over surveilled and experience surveillance technologies differently.” The bills define facial recognition to mean “automated searching for a reference image in an image repository by comparing the facial features of a probe image with the features of images contained in an image repository.”

The Portland ban is distinct because it prohibits, with some exceptions, both public and private use of facial recognition technology, whereas the three California cities and Boston prevent only public institutions from using facial recognition. Portland’s legislation bans the use of facial recognition technology by private entities in places of public accommodation. While the public and private prohibitions are considered part of the same agenda, they were passed through separate ordinances. Both will take effect in January 2021. The prohibition does not extend to the use of the technology to unlock personal devices.

The city council passed these two bills to serve as a model for other cities in the nation. Lawmakers introduced Congressional legislation this June to ban federal government use of facial recognition software, but the bills have not moved substantially. Without federal action, we may face piecemeal and varied responses to major technology and privacy issues.