Silicon Valley Anti-Trust Review: Scorecard and Coming Attractions

Tim Wu, the bard of big tech, has written multiple books about the rise and coming fall of technology monopolies, oligopolies, and empires. In The Master Switch, Wu tells the story of how, in the 19th Century, the existing telegraph empire tried to smother telephone technology in its cradle. He continues with how the ascendant, then established, telephone monopoly destroyed rising competition for decades, with the tacit support of the government.

Wu’s latest book, The Curse of Bigness, argues that, since the emergence of the digital economy, our government has abandoned a rich and socially beneficial history of trust-busting in favor of promoting the success of big companies dominating people’s lives. He advocates for the benefits of competition, especially among the digital industries that reach into our homes every minute.

Somebody was listening.

In the past two years, both public and private anti-trust actions have been initiated against the huge U.S. technology companies, and more are likely to arrive soon. I will use the occasion of last week’s landmark state and federal anti-trust enforcement filings against Facebook to examine some of the most significant anti-trust fights aimed at cutting the technology goliaths down to size. Each of these battles affects a different aspect of dominant digital tech, but they all arise from the argument that market size and position have been leveraged to stifle fair competition at a cost to consumers.

The major cases in digital anti-trust law to date include the suit filed in 1974 that broke up the AT&T telephone monopoly more than a decade later and the Department of Justice case that led to the 2001 settlement agreement opening Microsoft Application Programming Interfaces to competitors. Prior to 2018, very few legal efforts had been initiated in D.C. to rein in the burgeoning power of companies like Google, Apple, Facebook, or Amazon.

Last week’s cases against Facebook filed by the federal government and by 48 states will likely roll through the courts for years, possibly more than a decade. They seek to force Facebook to spin off WhatsApp and Instagram into their own companies, claiming that Facebook has purchased or destroyed emerging competing social media technologies while those technologies were starting to gain traction in the market. The FTC lawsuit includes a 2008 email from Facebook CEO Mark Zuckerberg that states, “it is better to buy than compete,” and a 2012 email where he wrote that facing Instagram in competition would be “really scary.”

According to Business Insider, “In addition to the divestitures, the filings are also seeking to keep Facebook from engaging in anticompetitive conduct. Such conduct could include Facebook preventing competing services from gaining access to its customer base, David Dinielli — an antitrust lawyer and a former special counsel with the antitrust division of the Department of Justice — told Business Insider. The ultimate goal, he said, is to restore competition in the market.” The scrutiny associated with these legal actions is also expected to limit Facebook’s bolder acquisitions and anti-competitive behavior into the future; the complaints ask the court to restrain Facebook from making further acquisitions of more than $10 million without notifying the plaintiffs in advance.

But the Facebook government suits are far from the only anti-trust trouble for U.S. big tech companies. Less than two months ago, the U.S. Justice Department, joined by 11 states, sued Google for “unlawfully maintaining monopolies through anticompetitive and exclusionary practices in the search and search advertising markets.” Google processes close to 90% of all online searches in the U.S. The government’s press release noted Google’s market value of a trillion dollars and highlighted Google’s use of exclusivity agreements that forbid pre-installation of competing search services on hardware, its use of tying agreements forcing pre-installation of its own applications as prominent and un-deletable features on hardware, and its use of monopoly profits to create a self-reinforcing cycle of monopolization.

Google’s deals to place its search functions on Apple devices are especially sensitive and lucrative. According to a CNET article, “Last year, almost half of Google’s search traffic came from Apple devices, according to the DOJ’s complaint. The agreement is so important that Google views losing it as a ‘Code Red’ scenario, the lawsuit says.” The New York Times notes, “The lawsuit, which may stretch on for years, could set off a cascade of other antitrust lawsuits from state attorneys general. About four dozen states and jurisdictions, including New York and Texas, have conducted parallel investigations and some of them are expected to bring separate complaints against the company’s grip on technology for online advertising. Eleven state attorneys general, all Republicans, signed on to support the federal lawsuit.”

Apple and Google are also defendants in anti-trust based lawsuits filed by Epic Games, covered by this blog here and here. Among other things, Epic accuses the tech giants of leveraging their dominant positions in electronic hardware 1) to exclude competitive app stores – overcharging application developers for the privilege of being available on the hardware – and 2) to exclude online payment competitors from offering alternate options to pay for those apps. The court, as a matter of law, has already thrown out two of Apple’s counterclaims based on the addition of an Epic direct payment option for the sale of its game apps to consumers using Apple hardware, rejecting Apple’s lawyer’s contention that the lesser fees consumers paid directly to Epic “should be in Apple’s hands.”

In a case that has rolled up and down the federal courts twice and is now before the U.S. Supreme Court (which refused to hear the case the first time), Oracle is trying to protect the Java Application Programming Interfaces – technology developed by a company Oracle purchased – from being used by Google in the Android Operating System. This case is based on copyright, not anti-trust law, but it addresses one of the most significant issues for companies who want to limit who can access and interact with their code – from database developers to automobile manufacturers – and its resolution will help determine which tech companies can create their own technological sandboxes and keep others from offering customer benefits within those closed systems. So this case will affect tech competition as much as, or more than, some of the cases filed under U.S. anti-trust law.

Of course, the Europeans, frustrated with their own inability to create and nurture successful digital companies, have been quicker to claim antitrust violations by huge U.S. tech businesses. In June of 2017, the EU levied its largest antitrust fine in history – 2.4 billion euros – against Google to punish it for favoring its own shopping product in searches. In 2018 the EC fined Google more than 5 billion euros (a new record) for charges based on alleged misuse of Android to impede the development of the market for mobile devices. But wait, there’s more: in March 2019, the European Commission fined Google nearly 1.5 billion euros for misuse of its dominant position in the market for brokering online search ads.

And while the EU may be resolving its financial crises on the back of Google, it is also attending to other American tech giants. Just last month, the EU filed antitrust charges against Amazon, accusing it of using sales data to gain an unfair advantage over other merchants. In June of 2020, the European Commission opened an antitrust investigation into Apple store rules as anticompetitive behavior. And, to bring our discussion full circle, the same commission has been suing Facebook on antitrust grounds for years on a number of different claims.

Are Google, Apple, Facebook, and Amazon too big? Will consumers benefit from restricting the power and reach of these companies? For years these questions were pondered, but not acted upon, by the U.S. government. That era has ended and a new one has begun. Watch this blog for updates as courts, legislatures, and regulators consider whether and how to burden these beasts.

Newly Formulated Contract Terms Are the Key to EU-US Data Transfers

On December 2, 2020, the European Commission and the European Union (“EU”) foreign affairs service issued a joint statement with goals for the EU’s relationship with the United States. The statement highlighted areas of shared interest including cooperation on “cybersecurity capacity building, situational awareness, and information sharing,” and suggested the two could coordinate to combat attackers attributed to third countries. The EU bodies also welcomed greater parallel action in dealing with artificial intelligence, seeing it as an opportunity to express their common “humancentric approach.” However, on privacy and data governance, the statement was clear in calling EU and American views divergent.

A recent Politico article suggests that getting a new data protection agreement between the EU and the United States is critical to repairing the transatlantic relationship. Without a legal mechanism for American entities to transfer EU personal data to the United States, companies will have to store data on their European customers in Europe, which is very costly and may be unaffordable for small and medium-sized enterprises. On July 16, 2020, the Court of Justice of the European Union (“CJEU”) invalidated the Privacy Shield, the agreement between the EU and the United States that allowed personal data to be transferred between the two jurisdictions.

The Privacy Shield was the EU’s and the United States’ replacement for an agreement called Safe Harbor. Safe Harbor allowed companies sending EU citizens’ data to the United States to be subject to the EU’s privacy regulations, which were enforced by the United States government. Revelations about the U.S. NSA’s access to data led to greater scrutiny from the EU of American privacy practices. In particular, Austrian privacy activist Max Schrems challenged the Safe Harbor agreement, arguing that American surveillance rendered it invalid because it conflicted with EU law. The CJEU agreed with Schrems and ruled that Safe Harbor did not properly protect EU data.

Despite divergent regimes for protecting personal data, the United States and the EU had previously been able to come to terms to allow data transfers between the two: first with Safe Harbor, which existed from 2000 until 2015, and then with the Privacy Shield, which was invalidated by the CJEU in 2020. The CJEU’s repeated unwillingness to trust America’s privacy regime leads to a natural skepticism that a third deal would provide a different ending, even if it is pivotal to transatlantic relations. Max Schrems has likened deals between the two governments to the United States telling Europe that its citizens have no rights.

In the same decision that invalidated the Privacy Shield, the CJEU stated that Standard Contractual Clauses (“SCCs”) remained a legal means to transfer data from the EU to countries that had not been designated as “adequate” data protection jurisdictions by the European Commission. The CJEU did caveat that in some instances, particularly where government surveillance creates risks for data subjects, additional risk mitigation measures must be put in place to supplement the SCCs.

On November 11, 2020, the European Data Protection Board (“EDPB”) evaluated the CJEU’s ruling and issued guidance. The EDPB requires businesses to evaluate whether foreign governments could access an EU data subject’s personal data, rather than relying on a specific entity’s history with such government access requests to conclude that the risk is low. Since then, the European Commission has also released a new draft of the SCCs, broadened to recognize the complexities of international business relationships. The existing version of the SCCs addressed two data flow scenarios: an EU-based controller exporting data outside of the EU to other controllers, or to processors. The draft set of clauses adds two novel processing relationships, namely: EU-based processor to ex-EU processor, and EU-based processor to ex-EU controller. The feedback period on these proposed new SCCs ends today. Barring a new agreement being executed by the EU and the United States, entities transferring EU personal data will be leaning heavily on SCCs under the aforementioned guidance.

Pardon My Drone

If we think about drones, we probably think about remote-controlled assassination machines manned by the Mossad or “fly-through” tours of the homes of the rich and famous. What we (or at least I) didn’t think about were artificially intelligent police drones that can be sent out by 911 dispatchers to the scene of a crime and follow the bad guys around until they do something they can be arrested for. At least four U.S. cities currently use these remotely-controlled – and self-controlled – investigation tools. No more out-of-shape cops trying to climb chain link fences in hot pursuit of more fit criminals! Hill Street Drones.

Drone use is now exploding in creativity. “Dehogifier” drones with heat sensors will tell you when wild hogs are destroying your crops. The Spotify Party Drone hovers over you in line at festivals to play your favorite songs. Russia and China are using drones disguised as birds.

Which started me thinking.  Now that smart drones have utterly transformed warfare and policing, not to mention real estate, what’s next? I have ideas:

  • Gecko Cam: GEICO Insurance customers are astounded to see their rates increase after the insurance carrier famous for its British spokeslizard deploys smart drones to watch your driving habits.  No word whether they will be disguised as pterodactyls or flying dragon lizards.  GEICO’s got you covered.
  • The Daddy Drone: Helicopter parenting is so 2000.  Just program the Daddy Drone with your daughter’s favorite haunts and voila! No need to prowl the neighborhood with your lights off or to wake up her BFF’s parents to cross-check her alibi. Integrate with Alexa or Siri and you can ground your kid from the comfort of your bed in a variety of celebrity voices!
  • Poli-Sci Fi: Did your favorite candidate just narrowly lose an election?  Are you a civic-minded soul who just wants every legal vote counted (as long as it was for your candidate)?  Well, no need to stand around all day in costume and argue with your neighbors; let your drone do the dirty work.  Available in red, white, and blue.
  • Karen Camera: Are you tired of enforcing the homeowner association rules from your minivan?  Have you been assaulted by threatening bird watchers and need the proof before calling 911?  Smile, you’re on Karen Camera!
  • The Gym Rat: Who didn’t wipe down the elliptical?  Who left those wet towels all over the locker room?  You did and we can prove it.  Your gym membership just became a little more expensive.  Feel the burn.

I could go on, but why should I when I have an audience of smart folks like you?  Send ‘em in and we’ll publish the best of them on HeyDataData.  In the meantime, might want to carry an umbrella the next time you want to do something a little shady.

EU Data Localization Would Hurt U.S. Businesses

Stung by Brexit and set adrift by a neglectful U.S. foreign policy, the European Union has started to explore new ways of breaking away from the rest of the world, including taking steps to cordon EU data into locally managed systems. While this kind of protectionist move is short-sighted for the EU, it would also cause significant problems for U.S. businesses.

Americans have assumed the benefits of an open internet, where companies located nearly anywhere in the world can store and manage the information they receive in any way that makes sense to the business without undue government intervention in company choices or expenditures. We have built our technological infrastructures based on these rules since the beginning of the connectivity era thirty years ago.  Countries with closed political systems like Iran, China, and Russia have sliced their national internets off from the rest of us to maintain political and financial control, but we understood this might happen and have approached their markets differently.

But we have all assumed that free societies would be participants in an open and free internet – for information as well as business. Maybe this was naïve. The U.S. First Amendment protections of free speech and association have no direct counterpart in Europe, and the EU/UK limits on free speech are troubling for the prospect of free expression online. And it now seems that allowing businesses to manage their data from servers in their home countries, a fundamental tenet of electronic commerce, may have reached its end.

The dominance of U.S. tech firms over the major data-collecting activities on the non-Chinese portion of the Internet – Google/Microsoft over search, Facebook/Microsoft over social media, Amazon over e-commerce – with no equivalent players from Europe, South America, or the rest of the English/Spanish speaking world, has placed pressure on the EU to clip the wings of huge entities largely beyond their control. It has also caused intellectual and governmental concern in Europe about how to create European versions of these successful companies.

Earlier this year, the New York Times reported on a “generational effort” in Europe to develop European solutions to the digital age. The plan included investing heavily in A.I., encouraging the development of EU-based data companies, and slamming the U.S. and Chinese data companies with restrictions, especially in the anti-trust and data privacy spaces. The Times wrote, “as Europe has created a reputation as the world’s most aggressive watchdog of Silicon Valley, it has failed to nurture its own tech ecosystem. That has left countries in the region increasingly dependent on companies that many leaders distrust.” Leading the world in tech business innovation is one thing; leading the world in tech business regulation is another.

I have already written twice about the growing enthusiasm for data localization in the EU, here and here, but I have not discussed why it matters for U.S. business.  All companies based in America should be concerned, not just tech firms, if one of our largest trading partners decides it needs to dictate how foreign businesses organize their databases, maintain their infrastructure, and spend their money. It is clear that EU regulators and Euro-crats want to limit Facebook, Google, Amazon, Microsoft, and Apple as much as they possibly can, but in doing so their rules will likely harm every U.S. manufacturer with plants and customers in Europe, every consulting company with European clients, and every retailer that sells online worldwide.

For any business wanting to avail itself of the EU marketplace, data localization will act like another tax (and specific data-focused taxes may follow as well): an extra set of costs in organizing technology infrastructure and meeting new regulations that will drain profitability from any such venture. In addition, EU Internal Market Commissioner Thierry Breton has pushed forward a plan requiring companies collecting information in the EU to share it with European governments and with competitors. The EU rules already stand for the proposition that the data you collect on your own transactions does not belong to you, and they may soon stand for the proposition that your valuable business data should be shared with people who want to hurt your company.

Importantly, if the EU moves toward data localization, other countries and regions would feel empowered to do the same. The U.S. and the EU have been discouraging trading partners from closing off their internets, and the concept that a free and fair internet helps everyone is one of their best arguments for openness. Closing down significant data movement from the EU would undercut this argument, and others would react. At the moment, only those countries aspiring to iron political control over all information are localizing their data. But if Brazil, Japan, or even Australia thought that it could localize its internet to protect its own local companies, then the business internet would quickly be closed off into discrete rooms favoring local business. U.S. companies looking to expand into other markets would suffer through additional regulation, costs, and in some cases, partial or complete exclusion from these markets.

This is not an academic discussion.  If the EU moves to localize its data and restrict movement out of a “fortress Europe” then companies around the world will suffer. We need to dissuade the EU from taking this course.

ALERT: EU Actively Supports Protectionist Data Localization Policies

Meet the Euro-crats who think that the European Union needs to behave more like Russia and China.

More like Nigeria, Kazakhstan, and Indonesia.

These leaders are pushing not just to punish U.S. firms for successfully building data-focused businesses, but to actively pull data away into localized pods so their governments can protect local companies from competition and can access the data at any time. Like China.

EU internal market commissioner Thierry Breton claims he wants to make Europe “the most data-empowered continent in the world” in part by cutting its data off from the rest of the world. Breton has said EU rules need to state “European highly sensitive data should be able to be stored and processed in the EU.” Breton told a French newspaper that the EU should use privacy regulation as a weapon against U.S. tech companies, requiring data to be physically stored and processed in Europe.  He called an open internet “naïve.”

In this interview Breton said, “We must go further and demand that European data be stored and processed in Europe, in accordance with procedures that Europe will have set. In other words: it is necessary to structure the information space, as we have organized in the past the territorial space, the maritime space, and the air space. The Gafa [Google, Amazon, Facebook, and Apple] tried to make digital a “no man’s land” whose law they would write. It’s over. It is time to relocate this information space by opting for processing our data on European soil.” So he has re-characterized an open internet – which has been an aspiration for democracies and free societies around the world – as a digital no man’s land that must be divided into protectionist chunks.

Breton, France’s former Finance Minister, wants laws to help European businesses resist subpoenas from the U.S. and elsewhere. According to TechCrunch his governance proposals “will include a shielding provision — meaning data actors will be required to take steps to avoid having to comply with what he called ‘abusive and unlawful’ data access requests for data held in Europe from third countries.”

This sounds suspiciously like the industrial policy France has practiced for centuries, using the direct power and tools of government to coddle and enhance French companies and industries. Since 1712, when the French sent Jesuit priest François Xavier d’Entrecolles to China’s imperial kilns in Jingdezhen, Jiangxi province, to steal the secret of hard-paste porcelain, laying the foundations for the French porcelain industry, the French have happily applied government direction and assistance to steal industrial secrets and manufacturing methods for local companies. Entire treatises have been written about French government-sponsored industrial espionage against British manufacturing in the Eighteenth Century. The French government features prominently in Foreign Policy Magazine’s timeline of industrial spying including, “The FBI confirms that French intelligence targeted U.S. electronics companies including IBM and Texas Instruments between 1987 and 1989 in an attempt to bolster the failing Compagnie des Machines Bull, a state-owned French computer firm. The efforts mixed electronic surveillance with attempted recruitment of disgruntled personnel.” Don’t forget the incidents in the early 1990s when the French security service was caught bugging airplane seats assigned to U.S. tech executives to prop up failing French tech.

As reported in a different Foreign Policy article, “If you’ve been paying attention, you know that France is a proficient, notorious and unrepentant economic spy. ‘In economics, we are competitors, not allies,’ Pierre Marion, the former director of France’s equivalent of the CIA, once said. ‘America has the most technical information of relevance. It is easily accessible. So naturally, your country will receive the most attention from the intelligence services.’” Unlike the U.S. and most of its other allies, Mr. Marion clearly sees French government intelligence as an arm of France’s allegedly private industry. The article continued, “The spying continues even today, according to a recent U.S. National Intelligence Estimate. The NIE declared France, alongside Russia and Israel, to be in a distant but respectable second place behind China in using cyberespionage for economic gain.” No wonder that Breton admires the Chinese methods of industrial and tech protection.

Germany wants in on the protectionist data localization scheme too, as its Economy Minister Peter Altmaier advocates for launching a European cloud storage system called Gaia-X to pull EU data away from Google, Amazon, Microsoft, and friends. As soon as the recent Schrems II decision was released, striking down some EU/US data transfer options, Data Protection offices in Germany issued interpretations of the ruling that would make it nearly impossible for companies to transfer personal data to the U.S. See our discussion of the decision and the local reactions.

According to Politico, “leaked documents outlining Europe’s grand digital strategy include talk about fostering an environment that will ‘lead to more data being stored and processed in the EU,’ as well as an ‘open, but assertive’ approach to international data flows. Not only would [EU data localization] undermine the EU’s own insistence on free data flows in negotiations with trade partners, it would also put the bloc in a league with authoritarian regimes in Russia and China, which use localization rules to clamp down on the circulation of information — splintering the notional worldwide web into country-sized shards.”

The article quoted Alex Roure, of the Computer & Communications Industry Association (CCIA) lobbying group, to say that he has not seen a “single case” where data localization benefits privacy, security, or the economy. “If it’s to protect local incumbents, that would be problematic.”

To this end, the EU just last week proposed new rules on data governance to benefit EU companies. The new rules create nine “data spaces” including industry, energy, and health care. The official press release from the EU makes clear that the EU plans to use these rules to cripple American tech companies by forcing EU data into government-operated data pools to benefit European businesses. They are finally saying the quiet part out loud.

This kind of protectionism may be what happens when our allies are left on their own, unsupported, and unchecked by a U.S. government that has withdrawn as a positive player on the world stage. Is data localization the future of EU policy, dividing the internet down into fortress zones? For now, the direction seems clear. Maybe a new U.S. administration can convince our allies that an open internet is in everyone’s best interest.

Is Google Really the Borg?

“We are the Borg. Your biological and technological distinctiveness will be added to our own. Resistance is futile.”

Now, I would never be mistaken for a Trekker, but there are some lines from the series that everyone of a certain age knows, and this is one of them.

As a veteran of the mobile payment wars, I quickly learned that the bête noire of merchants and banks is a clunky term better suited for the classroom than the boardroom: disintermediation. In the case of mobile payments, that term describes the case where a competitor cuts you off from your valued customers using a shiny object as bait. And no one cranks out shinier objects than Google. Now, Google is rolling out Plex, a digital bank account in Google Pay that will be offered by a variety of banks and credit unions.

American Banker reports that both Citigroup and Seattle Bank have partnered with Google Pay in an attempt to capture Gen Z and Millennial customers.   (Google has said it’s partnering with 11 financial institutions.)  So, is this a deal with the devil or a match made in heaven?  Like so many things, this is a case of “you pays your money and you takes your choice.”

On the one hand, banks like Citi and Seattle Bank see the partnership with Google as an opportunity to create scale, find new customers and grow into other products.  Or, as the CEO of Seattle Bank put it, it’s a chance “[t]o meet digital consumers where they are (on their smartphone), to reach a new market segment of digital-first consumers, and to move fast and at low cost with strong security.”  Banks like Seattle and Citi don’t fear disintermediation because they believe that their brands are strong enough to remain primary with their customers and that there is enough room on the field for a number of competitors to play.

Others are not so sure, seeing such partnerships as capitulation and a digital coup de grâce delivered by Google in the competition for data. According to Todd H. Baker, a senior fellow at Columbia University’s Richman Center for Business, Law & Public Policy: “What Google really wants to do is capture your information for everything, and this is the one piece they don’t have . . . [Google] get[s] to see payment, spending and savings behavior. Google gets what it wants and maybe it’s OK financially for the banks, but in the long term it’s disintermediating them from the experience. It feels a little bit like surrender.”

Will banks live long and prosper with Google Pay? The answer is written in the stars.

IoT Security Reaches Center Stage in U.S.

How is a refrigerator like a stoplight camera and a delivery drone?

Each of these devices and hundreds of millions of others are part of the internet of things (IoT), meaning that manufacturers are building them with sensors for their environment and connectivity to send information elsewhere.  The places that information is sent can be as varied as the devices themselves. The refrigerator will show its data to its owners and likely send maintenance information to its manufacturer and retailer. The stoplight camera will send photos or videos to the city traffic control office. The delivery drone will likely send data to the delivery recipient and the drone owner or retailer who sent it out.

As more of these devices are added into circulation every day, the risks increase that someone can hack into them and capture the data for nefarious purposes, corrupt the data’s integrity, or even use the connectivity to modify the functioning of the device itself. Many of the sensors and connective tools on these devices are small and have little room for extra functionality. They are often rushed to market to beat the competition. Manufacturers can easily skimp by building little or no security protection into them.

Forecasts suggest that the global market for Internet of things end-user solutions is expected to grow to around 1.6 trillion dollars by 2025, with more than 75 billion devices in the field receiving and sending data. These devices will control the buildings we live and work in, the equipment running our factories and warehouses, and the cars and trucks we drive.

With this knowledge in hand, an otherwise largely dysfunctional U.S. Congress found, in an election year, that IoT security was a bipartisan issue ripe for legislation. Both houses of Congress have now passed the Internet of Things Cybersecurity Improvement Act, which has been sent to the White House for the President’s signature, recognizing that developing a secure IoT is a matter of national security.

The Act instructs the National Institute of Standards and Technology (NIST) to oversee the creation of IoT security standards, requires federal agencies and their contractors to use only devices that meet the cybersecurity standards prescribed by NIST, and requires them to notify specified agencies of known vulnerabilities affecting IoT devices they use.

According to Forbes, “The bill was written in response to major distributed denial of service (DDoS) attacks, including one in 2016 in which the Mirai malware variant was used to compromise tens of thousands of IoT devices, orchestrating their use in overwhelming and disrupting commercial web services. The threat hit closer to home for the federal government in 2017 when it was discovered that Chinese-made internet-connected security cameras were using previously undetected communications backdoors to ‘call home’ to their manufacturers, presenting a risk that what was visible to a camera’s lens was also visible to our geopolitical rivals.” Last year, Congress prohibited the use of Chinese cameras in Department of Defense facilities.

Commentators expect that the IoT standards published by NIST pursuant to this Act will also influence the purchase of IoT devices in the private sector. Manufacturers wanting to address both markets will raise the bar on security for everyone, and lawsuits over security lapses can use the NIST standards as a baseline for corporate negligence. The requirements in the Act are also likely to ultimately reduce the costs of IoT security as more manufacturers develop their own standards and supply chains supporting this goal.

As the many devices in our lives become deeply interconnected, it is good to see a serious push for security in this space.

Maddening Online Complaints: Saving You From Yourself

We work hard for our businesses, and those of you who started your own enterprises care about them even more deeply than others do. That is why criticism of your business can be so frustrating.

Nobody likes to be torn apart in public, especially when a bad review can cost you money. Customers, patients, and clients read online opinions and take them into account when deciding whom to hire. Your enterprise is no exception.

And it generally seems unfair – and often is. Some people are never satisfied.  Some fixate on the problem they had with your business and blow it way out of proportion.  Some are right to be angry but can’t let it go. Some are nuts.

But ignore it and move on.

I know this is difficult to do. We always feel better standing up for ourselves.  And if we don’t point out the unfairness of a complaint, who will?  Nobody.

But, in this age of the internet comment system, we need to adopt a more passive approach. A passive approach feels wrong and is not as emotionally satisfying as standing up and defending yourself. Let it go.

Except in very rare and narrow circumstances (discussed below), our best and most productive move will be to ignore the criticism or take it to heart, but in either case to move on without returning fire. In other words, contrary to American custom, “Don’t just do something. Stand there!”

Many solid reasons support this uncomfortable position:

  • Your opponent has the right to complain. In the U.S., people have the right to express their opinion. They do not have the right to outright lie about you, and such a clearly provable falsehood may move a negative review into the “actively oppose” category, at least in part because provable defamation gives you a claim to successfully extract retractions and damages from the complainer. But there is no law against being a whiny little toad. There is no law against being a jerk (who isn’t lying). The baseline position in the U.S. is that people can complain if they want. Even if you don’t like it. And overcoming this basic legal presumption makes it hard to act against mere grumbling.
  • When push comes to shove, you can’t win. If the complaint is an opinion (“Dr. Whanker is a moron”), a whine that oversimplifies matters (“They kept me waiting for more than an hour”), or a generalized statement – fair or otherwise (“Everybody says his breath smells terrible”), you have no legal basis to remove it from the general public discussion. If you try, you will lose. If you threaten legal action that you can’t support, you will look impotent and foolish, and you will lose. If you undertake legal action that you can’t win, you lose AND you may pay the complainer’s attorney fees.
  • Internet hosting companies abide by these rules that protect complainers. It is not easy to force Yelp, Google, Facebook, Yahoo! or any well-known public platform to remove a post you don’t like. First, their sites are premised on accumulating billions of comments – the more comments, the more valuable the site – so they are not inclined to pull things down easily. Second, these companies don’t want to get sued, so they take the most conservative legal position available to them – everything stays up unless you can either provide a legal document, like a court order, to take it down, or show that the offending post clearly violates their terms of use – like obscenity or certain kinds of hate speech. In other words, if you don’t make it a no-brain decision for the hosting companies to take the complaint down, they won’t take it down. Finally, many sites on the internet exist to create friction between people or between people and businesses. A company like TMZ thrives on conflict, and they WANT you to be angry about what they posted. You would be surprised how many of these sites hover on the web, sucking up controversy. Some specialize in publishing complaints against doctors, restaurants, banks, or other types of businesses, and there is no way short of a court order to force these sites to take down a complaint against you.
  • Many states have anti-SLAPP laws that penalize companies for suing to silence critics. With the rise of the internet has come the recognition that powerful companies and people who do bad things can silence dissent by suing complainers into poverty. To address this problem, 28 states and the District of Columbia have passed what are known as anti-SLAPP laws – with SLAPP standing for Strategic Lawsuit Against Public Participation. These laws are grounded in protecting free-speech rights and often allow a person who has made a public complaint to quickly terminate a lawsuit against them and to charge the subject of the complaint with everyone’s attorney fees and legal costs. These laws raise the stakes for anyone looking to defend a business from troublesome online complaints, tilting the risks further in favor of the complainer. If you do not have a clearly watertight case of defamation (including a provable and important lie made against you), then you may suffer under one of these laws, which can cut your case short and force you to pay your adversary’s costs and fees. A non-profit advocacy group called the Public Participation Project posts a scorecard rating the strength of each state’s anti-SLAPP protection, while a First Amendment organization called the Reporters Committee for Freedom of the Press details how the law works in these cases.
  • By engaging with the complainer, you are almost always giving her what she wants. Ignored internet complaints drift away into obscurity. Internet fights, by contrast, are interesting and have a recognizable rhythm that keeps people’s attention. If you respond to the complaint, the complainer now knows that she has gotten under your skin. She has your attention, and your attention to her grievance usually energizes her to make more noise against you. Often a lawyer’s response to a complaint will be posted alongside the criticism. Now the complainer can play the victim. There is an internet phenomenon known as the Streisand effect, where an attempt to censor embarrassing information or criticism has the unintended consequence of drawing many more people’s attention to it. I can cite dozens of instances where the attempt to shut someone up online only made the complaint more interesting, drawing many more people to the fight than ever saw the original complaint or problem. In other words, you can make the problem you want solved much worse by making a fuss about it. When you wrestle with a pig, you both get muddy and the pig enjoys it. Don’t give the complainer what she wants. Don’t jump down into her pigpen. Stay above the fray. Better yet, withhold the fray from her entirely.
  • You save time and money by ignoring the complaints. Keep your powder dry for real problems that you can fix. Don’t throw away resources on what is likely to be a losing cause. You will be even angrier when the complaint still stands and you have spent hours and thousands trying to bring it down. Which leads to what may be the most important reason to simply ignore the criticism . . .
  • Fighting complainers takes attention and energy away from your business. No enterprise succeeds by looking backward rather than forward. Running your company is a full-time job and dealing with complainers, for the reasons discussed above, is a distraction – not a good use of your attention. Every minute spent planning your revenge or plotting to remove a grievance from the web is a minute not focused on growing a successful business.

But in unusual and isolated instances, I have supported attempts to bring down internet business complaints. The deciding element is whether the company has an underlying claim against the online complainer that will win in court. The two circumstances most likely to support a successful claim are 1) the online complainer tells lies that can be easily proven to be both false and materially harmful to the company, and/or 2) the complainer goes overboard with multiple posts, phone calls, or personal actions that will support an allegation of stalking or a violation of a similar state law prohibiting obsessive and harmful behavior.

Admittedly, there are times when a protective strategy involves standing up for oneself, online and otherwise. But think hard before you decide to do it. The super-majority of the cases I have seen over the past 30 years would have ended best for the business if the complaint had simply been ignored.

Feeding the trolls only brings them back for more.

Evolution of Personal Data in U.S. Law

Definitions are important.

How we define words sets the context for how we regulate them. In the U.S., the definitions of legally protected private information are changing, and with them the entire scope of information protection. The change in definitions reflects a desire to protect more of the data that describes how we live.

Early digital-age protections of data in the U.S. tended to apply very specific definitions. First, the government began protecting the particular types of data that concerned legislators, regulators, and the general public – financial/banking information, descriptions of health care, and information relating to children. This was the data that people felt was most private and most likely to be abused. It was the data that many people would have been concerned about sharing with strangers.

The definitions around these laws reflected the specificity of their intent.

Then came official recognition of identity theft as a growing societal problem. As information was digitized, connected to the web, and accessed remotely, Americans saw how this data could be used to impersonate the people it was supposed to describe and identify.  Then came the passage of state laws, soon to encompass all 50 states, requiring notification of affected data subjects when their data had been exposed to unauthorized people.

The terms defined in this first wave of data breach notice laws were based on lists.  Each law listed a set of information categories likely to facilitate the theft of a citizen’s identity. The data breach notice law definitions of personally identifiable data tended to match a piece of identifying information – name or address – with a piece of data that would allow a criminal to access accounts.  This last category included account numbers, credit card numbers, social security numbers, driver’s license numbers, and even birth dates and mother’s maiden name. If it wasn’t on the list, it did not trigger the statute.  Different states added or subtracted pieces of information from the standard list, but the concept of listed trigger data remained the same.
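The list-based trigger concept can be sketched in a few lines of code. The category names below are an illustrative composite, not drawn from any single state’s statute:

```python
# Sketch of a first-wave breach-notice trigger: an identifier exposed
# together with a listed data element. Categories are illustrative only.

IDENTIFIERS = {"name", "address"}
TRIGGER_ELEMENTS = {
    "account_number", "credit_card_number", "social_security_number",
    "drivers_license_number", "birth_date", "mothers_maiden_name",
}

def notice_required(exposed_fields):
    """Notice is triggered only when an identifier is paired with at
    least one listed trigger element; anything off the list is ignored."""
    fields = set(exposed_fields)
    return bool(fields & IDENTIFIERS) and bool(fields & TRIGGER_ELEMENTS)
```

An exposed name plus Social Security number triggers notice; a name plus some unlisted fact, however revealing, does not. That is the essence of the list model.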

The CCPA shattered this concept. As the first omnibus privacy act in the U.S., the California Consumer Privacy Act brought European thinking to privacy protection law. Rather than addressing a limited vertical market like finance or health care, or a narrow legal goal like stopping identity theft, the CCPA sought to create new rights that individuals would have to protect data collected about them, and to impose those rights on businesses that had previously considered themselves the owners of that data. The CCPA never defined anything as fundamental or nebulous as “ownership” of the data, but it did offer a new, breathtakingly broad definition of the personal information at the heart of the statute.

The CCPA definition was not a list. For years, demographics experts have known that 85% of the U.S. population could be identified by name if you had just three pieces of information about them: gender, zip code, and birth date. The more information about a person in your file, the easier it is to identify her and know many more things about her. So it has been clear to privacy professionals for a long time that relevant personally identifiable information is not a list of names or addresses, but a mathematical calculation. If your company had seven, eight, or nine facts about a person – even seemingly disparate facts like where they were at a given time and what they bought – with the right math your company could probably identify that person. This mathematical accretion concept better encompasses a useful definition of personally identifiable information to enforce a broader set of rights than the standard lists do.
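The accretion idea can be illustrated with a toy uniqueness measure; the data and function below are hypothetical, not a real demographic study:

```python
# Toy illustration of quasi-identifier accretion: how often a given
# combination of fields singles out exactly one record. Fake data.
from collections import Counter

def unique_fraction(records, keys):
    """Fraction of records whose combination of `keys` appears only once."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return sum(
        1 for r in records if combos[tuple(r[k] for k in keys)] == 1
    ) / len(records)

people = [
    {"gender": "F", "zip": "10027", "dob": "1970-01-02"},
    {"gender": "F", "zip": "10027", "dob": "1970-01-02"},
    {"gender": "M", "zip": "94103", "dob": "1985-06-15"},
    {"gender": "F", "zip": "60614", "dob": "1992-11-30"},
]
```

Gender alone singles out almost nobody in this toy set; add zip code and birth date and half the records become unique. The more fields accrete, the closer seemingly anonymous data gets to a name.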

The European Union had already built this concept into law when it passed the GDPR. The GDPR includes protections for personal data, which is broadly defined, and then a tighter set for sensitive data, which is defined by category. I expect to discuss definitions and protection of sensitive data in this space next week. The GDPR defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

While “any information relating to an identifiable person” is broad, the California definition is both broad and vague. The CCPA defines personal information as “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” It may be years before this definition is tested and clarified in the courts. Until that time, we will need to operate under the knowledge that any information reasonably capable of being associated with a person is regulated data. What about a slice of data that can’t, by itself, be associated with a person, but might help describe someone when linked with other data? That seems to fall within this definition. What falls outside? Given the state of today’s machine learning and analytics, almost nothing.

If California chooses to interpret and enforce this definition broadly, hardly any behavioral action or descriptive fact about a person will escape its purview. Businesses that market to consumers are not ready to meet this standard for preserving, protecting, and restricting the use of data. We have jumped from one extreme to the other in defining personal information.

Tiny Personal Assistant Poses Big Risk and Privacy Concerns

What if your personal digital assistant were tiny, yet encompassed your entire home?

If you are wondering how this would be possible, look at the new generation of smart assistants designed to be placed into your walls, allowing Amazon Echo-style interaction directly with your house. A half-dollar-sized device in key rooms lets homebuilders offer voice control over current electronic features like security and music, plus whatever applications come next.

A startup company called Josh.ai is offering a niche product designed to be professionally installed in a home to manage interaction between homeowners and digital house services. The tiny device is embedded in the wall and controlled by a central unit. TechCrunch reports, “The device bundles a set of four microphones eschewing any onboard speaker, instead opting to integrate directly with a user’s at-home sound system. Josh boasts compatibility with most major AV receiver manufacturers in addition to partnerships with companies like Sonos. There isn’t much else to the device; a light for visual feedback, a multi-purpose touch sensor and a physical switch to cut power to the onboard microphones in case users want extra peace of mind.”

Installing one of these systems lets homeowners accustomed to addressing Siri and Alexa issue voice commands to all domestic systems. In addition, it replaces button or screen wall interfaces with tiny microphones that promote health through touch-less design and are unlikely to look ugly and outdated when the home is sold years later. Josh’s “nearly invisible” footprint can be an advantage.

The initial Josh business model is interesting because it licenses its installed services to the homeowner, with the hardware included as part of the package. Josh offers licenses to its technology on an annual, five-year, or lifetime basis. But would the longest license run for the lifetime of the home or the lifetime of the owner? Those of us stuck with old wiring and outmoded wall units know the cost and frustration of making changes, and yet, if the technology were still useful when the house is sold, a seller would want to include the right to use it as a fixture. This is especially true if the tech controls all the other tech in the house.

If you are building the tech into your home, I would expect that longer licenses would be more desirable. But the licensing model still leaves open the risk that you build this system into your home and are unable to pass it along to the next owner. Further, buying this technology from a start-up involves a very real possibility that the company will no longer exist by the time you sell your home, or that its business model will have changed and it will no longer support your hardware.

It also acclimates us to a microphones-everywhere lifestyle where the listening devices are built right into the walls, turning all of our future homes into versions of the U.S. embassy in Moscow. Yes, the Josh wall unit includes an “off” button, but will we always remember to use it, and will it always work in the manner expected? And will Josh make arrangements with police departments, as some famous in-home security companies have, allowing the police to turn on these microphones and listen in to your home? If so, will the police need a warrant to do so, or just an interest in knowing what is happening inside your home?

The Josh system is a natural evolution of the personal digital assistant, but legal, privacy, and risk concerns cast a shadow on wide adoption.