States Gear Up to Limit Use of Biometrics and Biological Data

This may be the year when limits on biometric capture go national. Right now, companies using biometrics are driven by one state law, but other states could soon join.

As limits on biometrics cascade out of Illinois in private lawsuits based on the state’s Biometric Information Privacy Act (BIPA), other state legislatures have decided to place limits on the capture and use of biometric information. The private right of action and statutory damages offered by BIPA have made Illinois the experimental lab where U.S. companies learn what counts as a biometric program and what the limits on that program may be. Illinois may soon have company.

New York’s legislature is considering restrictions on consumer biometrics this term, and the proposed act looks like Illinois’ BIPA, requiring written notice that a biometric identifier is being taken, notice of how the identifier will be used and disposed of, and written permission from the subject. It also contains a broadly worded “thou shalt not profit from anyone’s biometric identifier” provision that could eviscerate the entire biometric technology industry if it is interpreted in an expansive fashion. The disclosure prohibitions are also surprisingly broad and could create liability for simply using a biometric tech processor. The legislation also contains a private right of action and statutory damages that seem lifted straight from BIPA.

New York has also proposed a less comprehensive bill that would restrict companies from using biometric information in marketing. I don’t understand the driver for this particular bill: whether the legislature is more concerned about a company marketing only to people grouped by biometric data, say, selling to people with brown eyes or with single whorls in their thumbprints, or whether it is concerned with the manipulation of serving ads that use your own voice or earlobes to sell material. But there must be some concern, because someone wrote an act for the legislature to consider.

Legislation proposed in Maryland regulates biometric identifiers and requires companies capturing such information to publish a written retention policy establishing “guidelines for permanently destroying biometric identifiers and biometric information on the earlier of” three years or the point when the initial purpose for obtaining the biometric identifiers was satisfied. The Maryland act, like the New York and Illinois acts, includes the same private-right-of-action and statutory-damages clauses.
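The “earlier of” retention rule reads like a formula, and it can be sketched as one. This is a toy illustration only – the function name and dates are hypothetical, and nothing here is legal advice:

```python
from datetime import date
from typing import Optional

def destruction_deadline(collected: date, purpose_satisfied: Optional[date]) -> date:
    """Earlier of three years after collection or the date the initial
    purpose was satisfied -- the structure of the proposed Maryland rule."""
    three_year_mark = collected.replace(year=collected.year + 3)
    if purpose_satisfied is None:  # purpose still ongoing: the 3-year cap governs
        return three_year_mark
    return min(three_year_mark, purpose_satisfied)

# A fingerprint captured Jan 15, 2021 whose purpose ended June 1, 2022
# must be destroyed by the 2022 date, not the 2024 three-year mark.
print(destruction_deadline(date(2021, 1, 15), date(2022, 6, 1)))
```

The point of the sketch is that satisfying the purpose can accelerate the deadline, but the three-year cap applies even if the purpose never formally ends.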

Virginia has proposed a bill directed primarily at employers who choose to use biometric tools with their employees. The bill requires written informed consent from an employee before biometric data is captured and stored. It would also restrict employers from profiting from the biometric data of their workers.

South Carolina’s entry into this race is a consumer protection act with very broad definitions of personal information and biometric information. The bill is almost a “CCPA for biometrics,” addressing consumer rights to prevent the sale of biometric data, protections for children, and a prohibition on discriminating against consumers who protect their biometric data. The act seems to anticipate a future in which companies use biometric data in more expansive ways than most of what I have seen to date, which is primarily biometric use for identification, authentication, or other security purposes. There is some voice stress analysis in use, but the bill seems anticipatory rather than reactionary.

California’s legislature passed one of the most thoughtful and constructive biometric-privacy laws last year, but it was vetoed by Governor Gavin Newsom, and a similar bill was introduced in this year’s legislative session. The Genetic Information Privacy Act (GIPA) placed limits on what companies could do with DNA information gathered from California residents, addressing a major privacy loophole that affects the DNA entertainment industry.

In the U.S., HIPAA protects biological information that a person gives to a doctor, hospital, or pharmacist to assist in medical treatment, so DNA provided for that purpose is covered by federal privacy protections. However, millions of people have decided to swab themselves and hand their DNA data – the core information describing a person’s physical being – to unregulated private companies that reserve the right to use it for all kinds of purposes. Some of these recreational DNA mills provide your data to law enforcement and some to the pharma industry, and at least one has recently been bought by big private equity firms looking to expand the range of what can be done with volunteered DNA. This is a significant privacy problem, in part because most people who swab themselves for the benefit of these private companies are unaware of the risks and likely exposure of their biological information.

The newly introduced California bill, like GIPA, would require direct-to-consumer genetic testing companies to honor a consumer’s revocation of consent to use a DNA sample and to destroy the biological sample within 30 days of the revocation. It would also give consumers access to their genetic data. The bill would not provide a private right of action but could be enforced by state or local officials. It may be written to overcome Gov. Newsom’s objections, which he said were related to restricting COVID-fighting efforts.

These legislative actions may or may not be passed into law. In any case, it is clear that the use of biometrics by businesses for consumers, marketing, and employees has sparked the imagination of state legislatures, and we are only likely to see more action in biometrics for years to come.

Should CDA Section 230 Be Changed?

In the current environment of reckoning with the societal power of Big Tech, one threat seems ever-present on the tongues of those who would cut these companies down to size. Carrying out this threat is likely to have the opposite of the effect many people intend, but it is still worth considering as an answer to problems in the way the digital world affects society.

Lawmakers and regulators have threatened to transform the way social media operates on the internet by revoking a law that protects internet hosting companies from liability for third-party content posted on those companies’ sites. The law, known as Section 230 of the Communications Decency Act (CDA), has shaped the way the internet treats people’s posts for a quarter century. Changing it would likely bring unintended consequences.

Both US political parties, when evincing concern about the size and power of digital social media companies, claim that the protections from lawsuits afforded by the CDA should be abolished. Both President Biden and former President Trump have advocated for its revocation. Some see this as a simple way to punish Facebook, Google, and Twitter for specific disfavored behavior. But doing so would be substantially more far-reaching than anticipated, with effects on every company allowing third-party comments or content on its website. That may not be a bad thing.

In its well-crafted guide to CDA Section 230, The Verge explains that the 1996 provision says an interactive computer service cannot be treated as the publisher of third-party content, thus protecting websites from many types of lawsuits, including defamation. “Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA) crafted Section 230 so website owners could moderate sites without worrying about legal liability. The law is particularly vital for social media networks, but it covers many sites and services, including news outlets with comment sections.” The Electronic Frontier Foundation calls it “the most important law protecting internet speech.”

Do you think Facebook is more like a telephone company or a newspaper? It has aspects of each. The U.S. has a long history of tightly regulating, for the good of the general public, industries it considers utilities. Internet companies have tended to fight the characterization of the internet, or services provided there, as utilities. The LA Times and academic sources have recently argued that the internet should be considered a utility, and the pandemic experience of working and learning at home has made the argument stronger. Revoking Section 230 would be a quick way to force more public responsiveness from internet companies.

The arguments in favor of eliminating Section 230 protections for social media contain substantial hyperbole and obvious false equivalencies. Some argue it must happen because social media companies exercised too much power in blocking Trump’s accounts, shrieking about tyranny and constitutional issues along the way.

Let’s not pretend this is a First Amendment question or that our country “has become China” when the exact opposite is what brought this crisis to a head. The First Amendment protects speech against state action. The state is not acting here, so the First Amendment is not at issue. The state – and the President in this instance – can say anything publicly, and those thoughts will be noted, published, and disseminated over dozens of channels and outlets. As the former president and still the presumptive head of his party, Trump will continue to receive press attention. Private companies can decide which content to allow on the platforms they pay for. Television news is not required to publish all of Joe Biden’s musings, and Twitter shouldn’t be required to publish all of Trump’s musings. It has been interesting to watch a group of Americans who insisted that U.S. businesses be allowed to operate with minimal regulation simultaneously call for deeper regulation of American companies in this particular circumstance.

It is also absurd to claim that private companies censoring the harmful lies and deadly provocations of a U.S. President is somehow authoritarian. In authoritarian regimes like Russia, North Korea, or China, the word of the Dear Leader is not allowed to be censored; dissenters are censored. In the present social media controversy, the most powerful person in the world is being de-platformed by some private companies for leading a terrorist movement that kills people while attempting to destroy proven democratic outcomes. This is the opposite of totalitarian state action – private actions taken in defiance of an attempted totalitarian takeover.[1] One of the world’s most powerful business executives, Jack Ma, may have just emerged from self-imposed exile or government censure related to the digital platforms he controls. The government, and its leaders, have a monopoly on physical and financial force. China and other totalitarian states threaten businesses that question them; businesses don’t censor the government.

Also keep in mind that any company hosting comments online, from MSNBC to Fox News, from TMZ to the New York Times, and everyone in between, would be affected if Section 230 were removed as a defense to lawsuits. WordPress and other companies hosting bloggers would be affected. Even companies like AT&T and Verizon, which may provide technical hosting to the entities offering online opinion spaces, could be affected and may need to change business models. This is not simply a direct attack on Facebook and Twitter with no collateral damage.

Now that we know how deeply some people can be manipulated and twisted by online content, maybe we have reached a point where we should require the hosts of that content to be more responsive to how they are affecting our society and more accountable for cleaning up the garbage. We worship free speech in this country, but the time may have arrived for us to be more aggressive about removing harmful lies and hate speech from our digital multilogue. We already moderate content, so this would be a change in degree, not a fundamental change in kind.

One way to encourage this is to remove Section 230 protections from internet service providers and social media companies. The people calling loudest for Section 230’s revocation may be the ones who scream loudest at the clear effects of that revocation – an internet where lies and irresponsible provocations are open to lawsuits and therefore policed much more severely than they have been. For example, we saw how quickly online sex offers dried up and disappeared from places like Craigslist when the possibility of host liability for the ads became an issue.

Section 230 of the CDA has performed its desired function – it showed us what an internet free market of ideas could be. But now that we know the downside of being swamped in toxic commercial and political manipulations, maybe we should open this market to the American legal system, encouraging it to be more carefully managed for the protection of the most vulnerable. Section 230 may have lived its useful life and be ready for retirement.

[1] This is also a textbook strategy for addressing the leaders of terrorist organizations like Al Qaeda. De-platforming the leadership can start to defuse the effect of inciting hate and lies.

The New Age of Content Moderation(?)

The huge search and social media platforms of the internet are reaching an inflection point. For decades they have been able to deflect attention from their role as content providers. The issue is now front and center in our national debate and the long-reigning status quo is likely to settle into a new mode of operation, if not consensus.

Current political polarization and the murderous result of an ocean of bald-faced, unsupported, easily refutable lies have turned an otherwise dry topic – where do big companies draw the line when deciding which third-party information to host on their systems – into a crisis for our democracy. We have finally reached the juncture where the promulgation and repetition of lies created such an obvious and attributable result that we can’t ignore the causes. Like the production of sauerkraut, failure to scrape all the scum off the top can render the entire concoction sickeningly poisonous.

Make no mistake, the large search and social media companies have always moderated content. The easiest place to see this is their censoring of heavily sexualized content. Google’s and Bing’s search algorithms and the Facebook/Instagram rules restrict pornography and other content they believe many people would find objectionable or inappropriate for children. If they didn’t, their systems would be swamped by sex advertisements and solicitations for the deeper debasements of the human id. How do I know this? For one thing, Google and Facebook both tell us so. For another, I was on the content restriction team at CompuServe – a digital media company that contracted with third parties for content and provided digital spaces for people to congregate. I saw firsthand that if the sex isn’t moderated or cut completely, demand for it will overwhelm the rest of the content. People’s desires may be uncomfortable to discuss, but they are predictable.

In fact, the kinds of statements inciting violence that were recently banned have violated Twitter rules for ages, and Twitter has previously dropped accounts that advocate hatred and violence. However, the stated community standards have not been followed consistently, and social media sites have financial incentives to prioritize controversial and incendiary content on their services – experience has demonstrated that people will spend more time and energy on the sites when those people are angry or upset.

Manipulating people’s emotions and polarizing their populations has been great business for Facebook and Twitter, and it was only in the aftermath of the obvious Russian manipulations leading up to the 2016 elections that a significant percentage of the general public in the U.S. considered calling social media companies into account for the results of their policies of incitement. If you remember the reactions of Zuckerberg at the time, he seemed equally surprised that his networking company, which had been started to let college students share thoughts with each other, could sway elections and be manipulated with serious social cost, though he later acknowledged the naivety.

So the coming changes in content moderation are a matter of prioritizing social responsibility over the platforms’ economic interest in polarization and emotional manipulation. There have always been rules here, and the social media companies have always given lip service to enforcing community standards, but now the community may coerce the companies into taking those standards – and their place in our society – seriously. Facebook has acknowledged that its platform has been used to incite and encourage violence in some instances.

Europe, with different laws and social priorities around freedom of speech, started this discussion with the big tech companies in earnest. When Google executives were criminally charged in Italy for not removing illegal content and a CompuServe executive was arrested in Germany for allowing illegal goods to be sold online, U.S. companies learned that they needed to consider the differing community standards of the countries where their customers reside. Instagram and Twitter operate with greater content limitations in majority-Islamic countries and other more restrictive societies. They made these adjustments overseas, so why can’t they make appropriate adjustments to their content moderation in the U.S. and Canada? Google has made content accommodations for the “Right to be Forgotten” in the European Union, so we know such moves are possible to meet the standards of important communities.

We now know that encouraging conflict and distress creates problems not only on a micro scale for individuals – bullied teens, victimized women, sufferers of depression and anxiety – but on a macro scale for our society as a whole. So we need a responsible discussion of how digital content management can be adjusted for the benefit of our communities, even if the adjustments harm the profits of big tech.

It is time for a reckoning with the power and incentives for digital content control, but this should not be driven by the grievance of one political party or the other.  It should be driven by a desire to promote the best in our society while reducing manipulation, division, and hate.

Why Big Tech Wants Your Body

Your body may be a wonderland or a wasteland, but it is a goldmine of data. Collectors of information have noticed.

In our midwinter exploration of the economic and legal foundations of data regulation, we next turn to a natural tool for personal identification, for ongoing transactions (like breathing, walking, and heartbeats), and for categorization – your body. Big tech wants your body, and apparently, we are willing to offer our bodily data upon the altar of big tech.

Regulators know this and are beginning to address biometrics – the measurements taken from our physical presence – in law. States like Illinois, Texas, and Washington have laws requiring the data subject’s permission for the capture and use of certain biometric indicators. The European Union classifies biometric information as sensitive data, and companies and governments can be fined for capturing it outside the strict rules.

We have all read about privacy concerns with fitness technology. Locations of secret military bases revealed by the public display of soldiers’ running/exercise routes on fitness tracking apps. Divorce lawyers and suspicious lovers using fitness tracking data to find cheating spouses. (NFL Network correspondent Jane Slater discovered from her ex-boyfriend’s fitness monitoring that there was too much sweating and heavy breathing away from home in the wee hours of the morning – Slater wrote, “His physical activity levels were spiking. Spoiler alert: He was not enrolled in an Orangetheory class at 4 a.m.”) I’m certain law enforcement officials this week are using fitness trackers and smartphone geolocation to confirm the locations of the U.S. Capitol rioters caught on camera (preserved on Bellingcat) and identified with facial recognition software. Criminal charges will follow.

But many people don’t know how a collection of physical information about their bodies can be used by large tech companies. Facial recognition software has become controversial, but there is no reason the government couldn’t create a database with body measurements as its foundation. Some security programs use gait recognition and matching. The way you move through space is as unique as a fingerprint, and computer software has been developed to compare the walk of a masked bank robber with the walks of police suspects. The Chinese government has developed gait recognition software as part of its population control measures.

Virtual tailoring can be important in a pandemic, when you may not want to visit someone who will get close enough to measure your neck and inseam. The MTailor app has used smartphone cameras to measure customers for custom clothes for the past six years.

Amazon has been especially keen to take your body measurements. From the fashion selfie app that Amazon shut down last year, to its Halo fitness app, which the Washington Post called the most intrusive tech it had ever tested, to the new clothes-sizing app that claims to customize shirts just for you, Amazon has been producing “consumer benefits” that require us to submit measurements to the company. The newest tech “uses your height, weight, and two photos to create a precise fit” for clothes that you would order from Amazon. According to the Washington Post, the Halo fitness tracker “tells you everything that’s wrong with you. You haven’t exercised or slept enough, reports Amazon’s $65 Halo Band. Your body has too much fat, the Halo’s app shows in a 3-D rendering of your near-naked body. And even: Your tone of voice is ‘overbearing’ or ‘irritated,’ the Halo determines, after listening through its tiny microphone on your wrist.” Too much truth may be frustrating for all but the most masochistic of consumers. But Amazon benefits from all of this data.

For the clothing app, Amazon deletes the pictures you send to assist in the sizing, but it keeps the data, including a virtual model of your body. This information is clearly helpful for the described function, but it serves other purposes as well. If it lands in your data aggregation file, held by and on behalf of information management companies everywhere, it can be used both to identify you by body type and specific body information and to keep tabs on the changes in your body shape over time. That could trigger sales contact based on assumptions about pregnancy, illness, or other health emergencies, or even age. How your body shape changes over time can lead to assumptions about fitness routines, consumption, and lifestyle. Companies like Amazon will make sales suggestions – books, vitamins, workout equipment, diapers – based on these assumptions. Target ran a 20,000-person 3-D body-scan survey in Australia and is likely to use that information for analytics of all kinds.

Some companies will identify you by your body movement. Some will use body information to sell you things or combine that data with other information that places you in helpful sales categories – for example, a fit, active 20-year-old will receive different sales pitches than a heavy, sedentary 50-year-old. And we will only ever see the tip of the iceberg. For example, Microsoft has applied for a patent that would use body movement and facial expressions to evaluate the success of business meetings. The possibilities are endless. And, as discussed in my blog posts from the past two weeks, body data provided to Amazon, Target, or other companies can be used by that company for almost any reason.

Your body may belong to you, but the story it tells belongs to big tech.

One Transaction Generates Data to Feed Multitudes

Last week I jumped from the starting point of the newest U.S. antitrust action against Google into a discussion of the legal and economic status of data. I would like to carry that discussion further.

To briefly recap: data is history (it describes someone at a given time, or it describes something that happened at a given time); history is not subject to ownership by anyone; and while we have some laws restricting the use or financial exploitation of information, generally anyone who can figure out how to use data legally will be allowed to use it productively. Data is considered a commodity by many, including the state AGs who just sued Google. Is this more like harnessing the wind, drilling for oil, issuing securities, or cultivating crops? We are still figuring out the appropriate analogies, and the correct analogy may depend on the type of data collected and how it is used.

What do I mean when I say anyone can use data productively? In the U.S., except for limited restrictions, many parties can claim control and use of the descriptions of a transaction. For example, if Anita purchased a pair of slippers from a local store to be shipped to her house, and she paid with the store’s online application accessed from her smartphone, a dozen parties have claims on some or all of that information. Under current law, all of these parties could use the information to effectuate the transaction, and likely for internal purposes, and some could transfer that data to others in most circumstances.

Anita thinks this data is hers because it describes her transaction, and California and the EU give her some rights to limit how the information is shared. But there are many more “first parties” who feel they were part of the transaction and will keep/use records of it. Who would collect data from this transaction?

The store she purchased the slippers from, of course, maintains this data as its own sales record. But so do that store’s merchant bank, its payment processing company, and probably the shipping company used to deliver the goods. By the way, the shipping company could actually be an entire series of primary shippers, fulfillment coordinators, warehouse operators, and trucking or delivery contractors, all of whom now have Anita’s name and address and probably know what was shipped to her house. They may work for one company or represent several separate entities. The store may have a special purchase-points discount program, with an outside marketing firm managing the program and keeping Anita’s information in its databases.

The store’s online presence is likely monitored by Google Analytics or a similar data company. If Anita came to the store through an online advertisement, then the site hosting the ad, the company managing the ad buy, and the ad placement network would likely have detailed information on Anita’s purchase and may receive funds from the clickthrough and/or the purchase. There are several variations of this setup, some of which involve still other parties receiving Anita’s information.

But Anita’s side of the transaction also creates interested parties. Since she found and purchased the slippers over her phone, the company that operates the application she used will capture all of the transaction data. So may the company that provides the core software for the phone and allowed the app to be downloaded – likely Apple, Google, or Samsung. The phone company that connected the transaction – Verizon, T-Mobile, or AT&T – may collect information about it too. All of these companies can include location data showing when and where the transaction was completed, and may charge to pass this data to the companies mentioned on the retailer’s side of the transaction, to third parties interested in the transaction, or to data aggregators.

And, of course, Anita needs to pay for the slippers, so her bank will keep the data, and so will the company sponsoring the payment application she used – Venmo, PayPal, MasterCard, Visa, AmEx. All of these financial companies think of themselves as data companies now and make significant money packaging up the data about all of our transactions, analyzing them with machine learning programs, and selling the information – aggregated or otherwise – to anyone who might be interested. Some of these companies will package up the names of everyone who bought slippers or footwear over the past month and sell contact with these people to other retailers who want to find live slipper-buyers.  Maybe a retailer’s analytics show that people who just purchased slippers will soon purchase sweatpants or a robe, or even cocoa mix and marshmallows, so they want to send out a coupon when they know a buyer is ready. And, as with the shipping companies, there are lots of business structures to serve these markets, with financial processors and marketing consultants and data analytic specialists, so the number of companies in the chain is likely higher than you might think.

Nearly all of the companies I just mentioned have a first-degree relationship to the transaction – the company performed a service that most people would recognize as part of that transaction. As you move further out along the chain to second and third-degree relationships, or companies that were not involved at all in making the introduction, the sale, the payment, or the delivery, you still find people making their living off of the data generated by Anita’s purchase of slippers.
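To make that fan-out concrete, here is a toy tally in Python of the first-degree parties sketched above. The groupings and labels are my own shorthand for the walkthrough, not an authoritative or exhaustive list:

```python
# Parties with a plausible claim on the record of Anita's slipper purchase,
# grouped by their role in the transaction (illustrative shorthand only).
claims = {
    "merchant side": ["store", "merchant bank", "payment processor",
                      "shipping network", "loyalty-program manager"],
    "advertising":   ["analytics provider", "ad host", "ad buyer",
                      "ad placement network"],
    "consumer side": ["shopping app", "phone OS vendor", "mobile carrier"],
    "payment":       ["consumer's bank", "payment-app sponsor"],
}

# Flatten the groups to count how many entities hold some or all of the record.
parties = [p for group in claims.values() for p in group]
print(f"{len(parties)} parties hold some or all of one transaction's data")
```

Even this simplified tally passes a dozen entities before reaching any second- or third-degree data buyers, which is the point: one purchase seeds many ledgers.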

I describe this transaction and the data it generates to help explain why the data economy is complex and why it is difficult for Anita to say “the information about my purchase belongs to me, and not to anyone else.” Not only is history not something that one person can own, but a dozen parties have a legitimate claim to that same sliver of history, and dozens more are likely making use of it in an intricate data-focused economy.

Technologies Lost in 2020

In 2020, a more tragic year than most, we lost giants of the technology world. We lost Larry Tesler, PARC’s magician, later at Apple, who helped develop the computer commands that run our lives, like ‘search and replace’ and ‘copy and paste,’ and other concepts crucial for user-friendly software. We lost Russell Kirsch, creator of the pixel and the first digital photo. We lost Gideon Gartner, who pioneered rigorous research for companies buying computing technologies and left his name on his legacy consultancy. But we also lost important technologies.

Technologies, like animals and plants, have life cycles. Tech innovation is born, if it can sustain investment and interest it lives, and eventually it either disappears or gives way to a better method of solving the problem. I keep a very old cash register, telephone, typewriter, and adding machine in my office to remind me about the fleeting life of tech.

Due to the pandemic, 2020 saw technologies that might otherwise have foundered find new life, with accelerated development and acceptance. Videoconferencing finally hit its stride, not just for meetings but for webinars, conferences, parties, and weddings. Grocery delivery, which failed so spectacularly in the 1990s and had failed to catch on in many markets since, has exploded and gone public. So have online wellness apps.

But many technologies and tech services died in 2020, and we stand here to remember them.

Adobe ends support for its Flash Player today and will block content from running in Flash Player in two weeks. If you wanted to do anything fun on the internet in the 1990s and early 2000s, you needed the Adobe Flash Player. It gave us a glimpse of what the medium could be. But Flash had technical problems and security issues, and it has been replaced by better, more efficient alternatives. According to PC World, “Flash actually held on far longer than anyone expected, considering Apple co-founder and CEO Steve Jobs fired the first shot at Flash way back in 2010 with his famous open letter. Its decline started officially in 2017 when Adobe said it would kill support for Flash by the end of 2020. Browser makers also started to restrict Flash, and eventually blocked it entirely.” But Flash nostalgia fans can take heart: the Wayback Machine still emulates Flash animations in its software collection.

“What’s a Quibi?” is now a historical question and not a hot new trend. How quickly things change: it was a hot new trend during last year’s Super Bowl. According to Finance & Commerce, “Quibi, short for ‘quick bites,’ raised $1.75 billion from investors including major Hollywood players Disney, NBCUniversal and Viacom. But the service struggled to reach viewers, as short videos abound on the internet and the coronavirus pandemic kept many people at home. It announced it was shutting down in October, just months after its April launch.” As companies and governments fight over TikTok, even the well-funded, well-researched big players could not break into the youth-driven short video market. Its owners expected Quibi to dominate the commute to work – which millions of us stopped doing just before its product release date. The timing could not have been worse.

If, as a tech company, you sell billions of dollars’ worth of clothes to consumers, then it might make sense to charge those consumers $200 for a selfie camera that gives fashion advice and proposes purchases to complete any outfit. However, the cost is relatively high, and apparently receiving personalized fashion advice from your clothing store both feels manipulative and gets old fast. For these reasons, among others, the Amazon Echo Look service ceased to work on July 24 of this year. The Amazon Shopping app still spins out fashion advice, however, and can be accessed by calling for Alexa on other devices. Amazon also abandoned its Dash Wand, a hand-held device with a built-in scanner that read barcodes for groceries you wanted to reorder.

Believe it or not, AT&T was still selling new DSL connections until October of this year. AT&T is not cutting off current subscribers, but it won’t be taking on new ones, which could mean a total lack of wired internet access in some rural areas. Google Fiber is still running strong, but its Google Fiber TV service was dropped in February, except for existing customers – with no word on how long those customers can keep the service. Google claims that its customers don’t need traditional television.

Other notable services leaving us this year include Windows 7, for which Microsoft has stopped sending security updates, and Google’s Daydream virtual reality platform for mobile phones, which is no longer supported. Slingbox decided to discontinue all products and services this year, and its products will gradually lose functionality as apps are phased out. Technology marches on, always crowning winners and casting losers into the ditch, leaving most to hover in between, hoping to turn a profit next year. Thinking about the tech that disappeared this year gives us context for the lives of our current favorite products and services. Nothing lives forever.


What Law, Economics, and the Newest Anti-Trust Lawsuit Ask About Data

Two weeks ago I collected the major recent anti-trust/competition lawsuits, by regulators and competitors alike, filed against U.S. big technology companies. My point was that, after a long fallow period where these giants received the benefit of the doubt for their successful competitive practices, the public trust has seemed to turn, supporting lawsuits on a wide variety of theories.

Although I wrote to mark the beginning of a trend likely to continue for decades, my article was premature, as a day after our publication Google was sued in an anti-trust action by 38 states. This lawsuit is the first action in which I have seen the term “attention economy” stated, defined, and used as the basis for claims. The states use the metaphor of data as a resource, like oil, that can be captured and refined into something worth selling.

The states claim that Google “uses its gargantuan collection of data to strengthen barriers of expansion and entry, which blunts and burdens firms that threaten its search-related monopolies (including general search services, general search text advertising, and general search advertising).” Setting aside the fact that Google has a significant direct competitor in Microsoft – a company powerful enough to have been the subject of its own set of anti-trust suits by regulators and competitors in the past couple of decades – the claims are similar at their core to the anti-trust cases made against AT&T starting in 1974. Google has built an enormous resource so valuable that everyone uses it – like the telephone network fifty years ago – and it is leveraging this resource to 1) enter other fields as a leader, and 2) keep competitors out of its own revenue streams.

There is much to unpack in this complaint and I intend to do so in a later post. Here, as we career toward the blessed end of our annus horribilis (and we hope, not into another), I want to revisit the metaphorical concepts underlying many of these lawsuits. What are data, really, as a legal concept?

First, we need to parse the term. What we call data is history – a description of what happened and who it happened to – and nobody owns history. Of course, only limited aspects of history are recorded for posterity, but the information captured in the modern world is growing exponentially with cameras and IoT devices at every bank and intersection. Fading memories can reduce the impact of history, but computers can keep their historic information for as long as their owners like.

The classification of information at the base of this and many other lawsuits includes two types of data: transactional data and descriptive data. The combination of the two is especially valuable. It helps to know that 100 people bought left-handed baseball gloves, but it can be much more valuable to know that Tommy bought a left-handed baseball glove.

I am using transactional data in its broadest interpretation right now, captured information about every move made in our world. I’m talking about any activity that can be noted and recorded. This includes online searches, browsing to particular websites, remaining at an internet page for ten minutes – or leaving within seconds, watching videos, requesting videos and not watching them, browsing books or cooking utensils, translating phrases. It includes attending church services, riding the bus, walking in the park, visiting friends, and learning to juggle. And of course, it includes financial transactions, both online and off, where you purchase diapers or stay in a hotel room.

Descriptive data is simply information that can help identify you, which can be as simple as a name, address, or email. But for sophisticated analysts like Google, two or three items of information like your birth date, your gender, or even particular search terms may be enough, in conjunction, to identify you. This is why legislators have such a difficult time defining “identifiable” information.  Lists of name, address, and social security number work well for laws concerned about restricting identity theft because this limited data is what the thieves need.  However, for laws restricting business use of personal data like the GDPR or CCPA, broad – in those two cases impossibly broad – definitions of personally identifiable information recognize that companies can identify a person from aggregations of data that legislators can’t predict ahead of time.

The concepts are not mutually exclusive, as transactional data can be descriptive – regular purchases of feline treats, food, and litter can describe a person as a cat owner – and descriptive data can have clear transactional implications – if we know where you live and work we are likely to know where you order coffee or buy groceries. But it helps to understand the differences between the two types of data if you are considering the legal implications of data ownership and use.

As a general rule, U.S. law does not recognize ownership of data. Neither transactional information nor descriptive information is copyrightable subject matter. There is a line of cases that protects the economic value of certain “hot news” transactional information like the play-by-play call of baseball games for the people who invested in creating those games in the first place, but only for a very limited time, maybe as short as a few minutes, and then the data is available to everyone.

So, no matter what you would like to believe, you don’t own data that describes you or data created by your own actions. It is not possible to own this information.  So, if this thing (information) that is no one’s property has value, who gets to exploit its value? As stated immediately above, not the person described or the person whose actions created the data. While the EU protects such information from certain kinds of exploitation and claims that people have a human right to keep certain parts of this information private, no one has seriously offered a regime where you could make money by selling your own data.

Why not? In part because no one has recognized that you might have an economic interest in data about you or your life, and in part because recognizing and accounting to you for the use of that data would be difficult and would involve policy decisions we haven’t seriously debated yet. Individuals would need to push Google and others to provide credits for using our data, and the information giants have no incentive to do so. It has been suggested that data subjects should form bargaining collectives to fight for the value of their data, but I haven’t seen any such data unionization catch on. The government would need to step in to make this idea gain serious traction. The market is unlikely to provide us economic management of our own descriptive or transactional data.

Google doesn’t own it either. But Google holds lots of it and can provide transactional data in a timely fashion. (That’s another issue with transactional data: it loses economic value quickly. If I know someone wants to buy a book now, I can sell it now. If I know someone wanted to buy the book last year, that information has different, and likely lesser, value to me.) The new lawsuit compares this data to oil. I don’t agree. I would argue that, if Google’s data is an economically viable resource, the kind of data used by Google is more like a crop that is harvested and milled into something valuable. Google doesn’t pick its data out of the ground or the air; instead, it creates and cultivates a place – its search engine – for transactions (searches) to be initiated by people, collecting the descriptive results of the transactions it facilitated. Placing a camera at an intersection and collecting information about passing pedestrians is more like drilling for oil – you take whatever you find. Google has cultivated an entire ecosystem where people express their needs and desires, and it harvests the information expressed there.

So does the fact that Google has created a place and method for people to voluntarily express their information mean that Google has more of a right to that data than anyone else does? Economically and legally, both oil and wheat are commodities that can be sold by whoever holds them, and sold first by the person who can collect them. The court will need to decide. The anti-trust laws can punish Google for the way it wields its market power, depending on how that power is defined. But the legal and economic thinking about how data functions in our society can change the way we live our lives, and who gets a financial benefit from the things we do.

Silicon Valley Anti-Trust Review: Scorecard and Coming Attractions

Tim Wu, the bard of big tech, has written multiple books about the rise and coming fall of technology monopolies, oligopolies, and empires. In The Master Switch, Wu tells the story of how, in the 19th Century, the existing telegraph empire tried to smother telephone technology in its cradle. He continues with how the ascendant, then established, telephone monopoly destroyed rising competition for decades, with the tacit support of the government.

Wu’s latest book, The Curse of Bigness, argues that, since the emergence of the digital economy, our government has abandoned a rich and socially beneficial history of trust-busting to promote the success of big companies dominating people’s lives. He advocates for the benefits of competition, especially among the digital industries that reach into our homes every minute.

Somebody was listening.

In the past two years, both public and private anti-trust actions have been initiated against the huge U.S. technology companies, and more will likely arrive soon. I will use the occasion of last week’s landmark state and federal anti-trust enforcement filings against Facebook to examine some of the most significant anti-trust fights aimed at cutting the technology goliaths down to size. Each of these battles affects different aspects of dominant digital tech, but they all arise from the argument that market size and position have been leveraged to stifle fair competition at a cost to consumers.

The major cases in digital anti-trust law include the case filed in 1974 that broke up the AT&T telephone monopoly more than a decade later and the Department of Justice case that led to the 2001 settlement agreement opening Microsoft Application Programming Interfaces to competitors. Prior to 2018, very few legal efforts had been initiated in D.C. to rein in the burgeoning power of companies like Google, Apple, Facebook, or Amazon.

Last week’s cases against Facebook filed by the federal government and by 48 states will likely roll through the courts for years, possibly more than a decade. They seek to force Facebook to spin off WhatsApp and Instagram into their own companies, claiming that Facebook has purchased or destroyed emerging competing social media technologies while those technologies were starting to gain traction in the market. The FTC lawsuit includes a 2008 email from Facebook CEO Mark Zuckerberg that states, “it is better to buy than compete,” and a 2012 email where he wrote that facing Instagram in competition would be “really scary.”

According to Business Insider, “In addition to the divestitures, the filings are also seeking to keep Facebook from engaging in anticompetitive conduct. Such conduct could include Facebook preventing competing services from gaining access to its customer base, David Dinielli — an antitrust lawyer and a former special counsel with the antitrust division of the Department of Justice — told Business Insider. The ultimate goal, he said, is to restore competition in the market.” The lawsuits also ask the court to restrain Facebook from making further acquisitions of more than $10 million without notifying the plaintiffs in advance, and the scrutiny associated with these legal actions is expected to limit Facebook’s bolder acquisitions and anti-competitive behavior into the future.

But the Facebook government suits are far from the only anti-trust trouble for U.S. big tech companies. Less than two months ago the U.S. Justice Department, joined by 11 states, sued Google for “unlawfully maintaining monopolies through anticompetitive and exclusionary practices in the search and search advertising markets.” Google processes close to 90% of all online searches in the U.S. The government’s press release noted Google’s market value of a trillion dollars and highlighted Google’s use of exclusivity agreements that forbid pre-installation of competing search services on hardware, its use of tying agreements forcing pre-installation of its own applications as prominent and un-deletable features on hardware, and its use of monopoly profits to create a self-reinforcing cycle of monopolization.

Google’s deals to place its search functions on Apple devices are especially sensitive and lucrative. According to a CNET article, “Last year, almost half of Google’s search traffic came from Apple devices, according to the DOJ’s complaint. The agreement is so important that Google views losing it as a ‘Code Red’ scenario, the lawsuit says.” The New York Times notes, “The lawsuit, which may stretch on for years, could set off a cascade of other antitrust lawsuits from state attorneys general. About four dozen states and jurisdictions, including New York and Texas, have conducted parallel investigations and some of them are expected to bring separate complaints against the company’s grip on technology for online advertising. Eleven state attorneys general, all Republicans, signed on to support the federal lawsuit.”

Apple and Google are also defendants in anti-trust based lawsuits filed by Epic Games, covered by this blog here and here. Among other things, Epic accuses the tech giants of leveraging their dominant positions in electronic hardware 1) to exclude competitive app stores – overcharging application developers for the privilege of being available on the hardware – and 2) to exclude online payment competitors from offering alternate options to pay for those apps. The court, as a matter of law, has already thrown out two of Apple’s counterclaims based on the addition of an Epic direct payment option for the sale of its game apps to consumers using Apple hardware, rejecting Apple’s lawyer’s contention that the lesser fees consumers paid directly to Epic “should be in Apple’s hands.”

In a case that has rolled up and down the federal courts twice and is now before the U.S. Supreme Court (which refused to hear the case the first time), Oracle is trying to protect the Java Application Programming Interfaces – technology developed by a company Oracle purchased – from being used by Google in the Android Operating System. This case is based on copyright, not anti-trust law, but it addresses one of the most significant issues for companies who want to limit who can access and interact with their code – from database developers to automobile manufacturers – and its resolution will help determine which tech companies can create their own technological sandboxes and keep others from offering customer benefits within those closed systems. So this case will affect tech competition as much as or more than some of the cases filed under U.S. anti-trust law.

Of course, the Europeans, frustrated with their own inability to create and nurture successful digital companies, have been quicker to claim antitrust violations by huge U.S. tech businesses. In June of 2017, the EU levied its largest antitrust fine in history – 2.4 billion euros – against Google to punish it for favoring its own shopping product in searches. In 2018 the EC fined Google more than 5 billion euros (a new record) for charges based on alleged misuse of Android to impede the development of the market for mobile devices. But wait, there’s more: in March 2019, the European Commission fined Google nearly 1.5 billion euros for misuse of its dominant position in the market for brokering online search ads.

And while the EU may be resolving its financial crises on the back of Google, it is also attending to other American tech giants. Just last month, the EU filed antitrust charges against Amazon, accusing it of using sales data to gain an unfair advantage over other merchants. In June of 2020, the European Commission opened an antitrust investigation into Apple’s App Store rules as anticompetitive behavior. And, to bring our discussion full circle, the same commission has been pursuing Facebook on antitrust grounds for years on a number of different claims.

Are Google, Apple, Facebook, and Amazon too big? Will consumers benefit from restricting the power and reach of these companies? For years these questions were pondered, but not acted upon, by U.S. governments. That era has ended and a new one has begun. Watch this blog for updates as courts, legislatures, and regulators consider whether and how to burden these beasts.

EU Data Localization Would Hurt U.S. Businesses

Stung by Brexit and set adrift by a neglectful U.S. foreign policy, the European Union has started to explore new ways of breaking away from the rest of the world, including taking steps to cordon EU data into locally managed systems. While this kind of protectionist move is short-sighted for the EU, it would also cause significant problems for U.S. businesses.

Americans have assumed the benefits of an open internet, where companies located nearly anywhere in the world can store and manage the information they receive in any way that makes sense to the business without undue government intervention in company choices or expenditures. We have built our technological infrastructures based on these rules since the beginning of the connectivity era thirty years ago.  Countries with closed political systems like Iran, China, and Russia have sliced their national internets off from the rest of us to maintain political and financial control, but we understood this might happen and have approached their markets differently.

But we have all assumed that free societies would be participants in an open and free internet – for information as well as business. Maybe this was naïve. The U.S. First Amendment protections of free speech and association have no direct counterpart in Europe, and the EU/UK limits on free speech are troubling for the prospect of free expression online. It now seems that allowing businesses to manage their data from servers in their home countries, a fundamental tenet of electronic commerce, may have reached its end.

The dominance of U.S. tech firms over the major data-collecting activities on the non-Chinese portion of the Internet – Google/Microsoft over search, Facebook/Microsoft over social media, Amazon over e-commerce – with no equivalent players from Europe, South America, or the rest of the English/Spanish speaking world, has placed pressure on the EU to clip the wings of huge entities largely beyond their control. It has also caused intellectual and governmental concern in Europe about how to create European versions of these successful companies.

Earlier this year, the New York Times reported on a “generational effort” in Europe to develop European solutions to the digital age. The plan included investing heavily in A.I., encouraging the development of EU-based data companies, and slamming the U.S. and Chinese data companies with restrictions, especially in the anti-trust and data privacy spaces. The Times wrote, “as Europe has created a reputation as the world’s most aggressive watchdog of Silicon Valley, it has failed to nurture its own tech ecosystem. That has left countries in the region increasingly dependent on companies that many leaders distrust.” Leading the world in tech business innovation is one thing; leading the world in tech business regulation is another.

I have already written twice about the growing enthusiasm for data localization in the EU, here and here, but I have not discussed why it matters for U.S. business.  All companies based in America should be concerned, not just tech firms, if one of our largest trading partners decides it needs to dictate how foreign businesses organize their databases, maintain their infrastructure, and spend their money. It is clear that EU regulators and Euro-crats want to limit Facebook, Google, Amazon, Microsoft, and Apple as much as they possibly can, but in doing so their rules will likely harm every U.S. manufacturer with plants and customers in Europe, every consulting company with European clients, and every retailer that sells online worldwide.

For any business wanting to avail itself of the EU marketplace, data localization will act like another tax – and there may be specific data-focused taxes as well – an extra set of costs in organizing technology infrastructure and meeting new regulations that will drain profitability from any such venture. In addition, EU Internal Market Commissioner Thierry Breton has pushed forward a plan for companies collecting information in the EU to share it with European governments and with competitors. The EU rules already stand for the proposition that the data you collect on your own transactions does not belong to you, and may soon stand for the proposition that your valuable business data should be shared with people who want to hurt your company.

Importantly, if the EU moves toward data localization, other countries and regions would feel empowered to do the same. The U.S. and the EU have been discouraging trading partners from closing off their internets, and the concept that a free and fair internet helps everyone is one of their best arguments for openness. Closing down significant data movement from the EU would undermine this argument, and others would react. At the moment, only those countries aspiring to iron political control over all information are localizing their data. But if Brazil, Japan, or even Australia thought that it could localize its internet to protect its own local companies, then the business internet would quickly be closed off into discrete rooms favoring local business. U.S. companies looking to expand into other markets would suffer through additional regulation, costs, and in some cases, partial or complete restriction from competing in those markets.

This is not an academic discussion. If the EU moves to localize its data and restrict movement out of a “fortress Europe,” then companies around the world will suffer. We need to dissuade the EU from taking this course.

ALERT: EU Actively Supports Protectionist Data Localization Policies

Meet the Euro-crats who think that the European Union needs to behave more like Russia and China.

More like Nigeria, Kazakhstan, and Indonesia.

These leaders are pushing not just to punish U.S. firms for successfully building data-focused businesses, but pushing to actively pull data away into localized pods so their governments can protect local companies from competition and can access the data at any time.  Like China.

EU internal market commissioner Thierry Breton claims he wants to make Europe “the most data-empowered continent in the world” in part by cutting its data off from the rest of the world. Breton has said EU rules need to state “European highly sensitive data should be able to be stored and processed in the EU.” Breton told a French newspaper that the EU should use privacy regulation as a weapon against U.S. tech companies, requiring data to be physically stored and processed in Europe.  He called an open internet “naïve.”

In this interview Breton said, “We must go further and demand that European data be stored and processed in Europe, in accordance with procedures that Europe will have set. In other words: it is necessary to structure the information space, as we have organized in the past the territorial space, the maritime space, and the air space. The Gafa [Google, Amazon, Facebook, and Apple] tried to make digital a “no man’s land” whose law they would write. It’s over. It is time to relocate this information space by opting for processing our data on European soil.” So he has re-characterized an open internet – which has been an aspiration for democracies and free societies around the world – as a digital no man’s land that must be divided into protectionist chunks.

Breton, France’s former Finance Minister, wants laws to help European businesses resist subpoenas from the U.S. and elsewhere. According to TechCrunch his governance proposals “will include a shielding provision — meaning data actors will be required to take steps to avoid having to comply with what he called ‘abusive and unlawful’ data access requests for data held in Europe from third countries.”

This sounds suspiciously like the industrial policy France has practiced for centuries, using the direct power and tools of government to coddle and enhance French companies and industries. Since 1712, when the French sent Jesuit priest François Xavier d’Entrecolles to China’s imperial kilns in Jingdezhen, Jiangxi province, to steal the secret of hard-paste porcelain – laying the foundations for the French porcelain industry – the French have happily applied government direction and assistance to steal industrial secrets and manufacturing methods for local companies. Entire treatises have been written about French government-sponsored industrial espionage against British manufacturing in the eighteenth century. The French government features prominently in Foreign Policy Magazine’s timeline of industrial spying, including: “The FBI confirms that French intelligence targeted U.S. electronics companies including IBM and Texas Instruments between 1987 and 1989 in an attempt to bolster the failing Compagnie des Machines Bull, a state-owned French computer firm. The efforts mixed electronic surveillance with attempted recruitment of disgruntled personnel.” Don’t forget the incidents in the early 1990s when the French security service was caught bugging airplane seats assigned to U.S. tech executives to prop up failing French tech.

As reported in a different Foreign Policy article, “If you’ve been paying attention, you know that France is a proficient, notorious and unrepentant economic spy. ‘In economics, we are competitors, not allies,’ Pierre Marion, the former director of France’s equivalent of the CIA, once said. ‘America has the most technical information of relevance. It is easily accessible. So naturally, your country will receive the most attention from the intelligence services.’” Unlike his counterparts in the U.S. and most of its other allies, Mr. Marion clearly sees French government intelligence as an arm of France’s allegedly private industry. The article continued, “The spying continues even today, according to a recent U.S. National Intelligence Estimate. The NIE declared France, alongside Russia and Israel, to be in a distant but respectable second place behind China in using cyberespionage for economic gain.” No wonder that Breton admires the Chinese methods of industrial and tech protection.

Germany wants in on the protectionist data localization scheme too, as its Economy Minister Peter Altmaier advocates launching a European cloud storage system called Gaia-X to pull EU data away from Google, Amazon, Microsoft, and friends. As soon as the recent Schrems II decision was released, striking down some EU/US data transfer options, data protection offices in Germany issued interpretations of the ruling that would make it effectively impossible to transfer personal data to the U.S. See our discussion of the decision and local reactions.

According to Politico, “leaked documents outlining Europe’s grand digital strategy include talk about fostering an environment that will ‘lead to more data being stored and processed in the EU,’ as well as an ‘open, but assertive approach to international data flows.’ Not only would [EU data localization] undermine the EU’s own insistence on free data flows in negotiations with trade partners, it would also put the bloc in a league with authoritarian regimes in Russia and China, which use localization rules to clamp down on the circulation of information — splintering the notional worldwide web into country-sized shards.”

The article quoted Alex Roure of the Computer & Communications Industry Association (CCIA), a lobbying group, as saying that he has not seen a “single case” where data localization benefits privacy, security, or the economy. “If it’s to protect local incumbents, that would be problematic.”

To this end, the EU just last week proposed new rules on data governance to benefit EU companies. The new rules create nine “data spaces,” including industry, energy, and health care. The official press release from the EU makes clear that the EU plans to use these rules to cripple American tech companies by forcing EU data into government-operated data pools to benefit European businesses. They are finally saying the quiet part out loud.

This kind of protectionism may be what happens when our allies are left on their own, unsupported, and unchecked by a U.S. government that has withdrawn as a positive player on the world stage. Is data localization the future of EU policy, dividing the internet down into fortress zones? For now, the direction seems clear. Maybe a new U.S. administration can convince our allies that an open internet is in everyone’s best interest.