IoT Security Reaches Center Stage in U.S.

How is a refrigerator like a stoplight camera and a delivery drone?

Each of these devices, along with hundreds of millions of others, is part of the internet of things (IoT), meaning that manufacturers build them with sensors to monitor their environment and with connectivity to send information elsewhere. The places that information is sent can be as varied as the devices themselves. The refrigerator will show its data to its owners and likely send maintenance information to its manufacturer and retailer. The stoplight camera will send photos or videos to the city traffic control office. The delivery drone will likely send data to the delivery recipient and to the drone owner or retailer who sent it out.

As more of these devices enter circulation every day, the risk increases that someone will hack into them and capture their data for nefarious purposes, ruin the data's integrity, or even use the connectivity to modify the functioning of the device itself. Many of the sensors and connective tools on these devices are small, with little room for extra functionality. They are often rushed to market to beat the competition. Manufacturers can easily skimp, building little or no security protection into them.

Forecasts suggest that the global market for internet of things end-user solutions will grow to around $1.6 trillion by 2025, with more than 75 billion devices in the field receiving and sending data. These devices will control the buildings we live and work in, the equipment running our factories and warehouses, and the cars and trucks we drive.

With this knowledge, an otherwise largely dysfunctional U.S. Congress in an election year found that IoT security was a bipartisan issue ripe for legislation. Both houses of Congress have now passed the Internet of Things Cybersecurity Improvement Act, which has been sent to the White House for the President's signature, recognizing that developing a secure IoT is a matter of national security.

The Act instructs the National Institute of Standards and Technology (NIST) to oversee the creation of IoT security standards, requires federal agencies and their contractors to use only devices that meet the cybersecurity standards prescribed by NIST, and requires them to notify specified agencies of known vulnerabilities affecting the IoT devices they use.

According to Forbes, “The bill was written in response to major distributed denial of service (DDoS) attacks, including one in 2016 in which the Mirai malware variant was used to compromise tens of thousands of IoT devices, orchestrating their use in overwhelming and disrupting commercial web services. The threat hit closer to home for the federal government in 2017 when it was discovered that Chinese-made internet-connected security cameras were using previously undetected communications backdoors to ‘call home’ to their manufacturers, presenting a risk that what was visible to a camera’s lens was also visible to our geopolitical rivals.” Last year, Congress prohibited the use of Chinese cameras in Department of Defense facilities.

Commentators expect that the IoT standards published by NIST pursuant to this Act will also influence the purchase of IoT devices in the private sector. Manufacturers wanting to address both markets will raise the bar on security for everyone, and lawsuits over security lapses can use the NIST standards as a baseline for corporate negligence. The requirements in the Act are also likely to ultimately reduce the costs of IoT security as more manufacturers develop their own standards and supply chains supporting this goal.

As the many devices in our lives become deeply interconnected, it is good to see a serious push for security in this space.


Maddening Online Complaints: Saving You From Yourself

We work hard for our businesses, and those of you who started your own enterprise are even more deeply concerned about it than others. Which is why criticism of your business can be so frustrating.

Nobody likes to be torn apart in public, especially when a bad review can cost you money.  Customers, patients, and clients will read online opinions and take them into account when hiring somebody. Your enterprise is no exception.

And it generally seems unfair – and often is. Some people are never satisfied.  Some fixate on the problem they had with your business and blow it way out of proportion.  Some are right to be angry but can’t let it go. Some are nuts.

But ignore it and move on.

I know this is difficult to do. We always feel better standing up for ourselves.  And if we don’t point out the unfairness of a complaint, who will?  Nobody.

But in this age of the internet comment system, we need to adopt a more passive approach. It feels wrong, and it is not as emotionally satisfying as standing and defending yourself. Let it go.

Except in very rare and narrow circumstances (discussed below), our best and most productive move will be to ignore the criticism or take it to heart, but in either case to move on without returning fire. In other words, contrary to American custom, “Don’t just do something. Stand there!”

Many solid reasons support this uncomfortable position:

  • Your opponent has the right to complain. In the U.S., people have the right to express their opinion. They do not have the right to outright lie about you, and such a clearly provable falsehood may move a negative review into the “actively oppose” category, at least in part because provable defamation gives you a claim to extract retractions and damages from the complainer. But there is no law against being a whiny little toad. There is no law against being a jerk (who isn’t lying). The baseline position in the U.S. is that people can complain if they want. Even if you don’t like it. And that baseline legal presumption makes threats against mere grumbling ring hollow.
  • When push comes to shove, you can’t win. If the complaint is an opinion (“Dr. Whanker is a moron”), a whine that oversimplifies matters (“They kept me waiting for more than an hour”), or a generalized statement – fair or otherwise (“Everybody says his breath smells terrible”), you have no legal basis to remove it from the general public discussion. If you try, you will lose. If you threaten legal action that you can’t support, you will look impotent and foolish, and you will lose. If you undertake legal action that you can’t win, you lose AND you may pay the complainer’s attorney fees.
  • Internet hosting companies abide by these rules that protect complainers. It is not easy to force Yelp, Google, Facebook, Yahoo! or any other well-known public platform to remove a post you don’t like. First, their sites are premised on accumulating billions of comments – the more comments, the more valuable the site – so they are not inclined to pull things down easily. Second, these companies don’t want to get sued, so they take the most conservative legal position available to them – everything stays up unless you can either provide a legal document, like a court order, to take it down or show that the offending post clearly violates their terms of use – like obscenity or certain kinds of hate speech. In other words, if you don’t make it a no-brainer for the hosting companies to take the complaint down, they won’t take it down. Finally, many sites on the internet exist to create friction between people or between people and businesses. A company like TMZ thrives on conflict, and they WANT you to be angry about what they posted. You would be surprised how many of these sites hover on the web, sucking up controversy. Some specialize in publishing complaints against doctors, restaurants, banks, or other types of businesses, and there is no way short of a court order to force these sites to take down a complaint against you.
  • Many states have anti-SLAPP laws that penalize companies for suing to silence critics. With the rise of the internet, there has been a recognition that powerful companies and people who do bad things can silence dissent by suing complainers into poverty. To address this problem, 28 states and the District of Columbia have passed what are known as anti-SLAPP laws – with SLAPP standing for Strategic Lawsuit Against Public Participation. These laws are grounded in protecting free-speech rights and often allow a person who has made a public complaint to quickly terminate a lawsuit against them and to require the party that sued to pay everyone’s attorney fees and legal costs. These laws raise the stakes for anyone looking to defend a business from troublesome online complaints, tilting the risks further in favor of a complainer. If you do not have a clearly watertight case of defamation (including a provable and important lie made against you), then you may suffer under one of these laws, having your case cut short and being forced to pay your adversary’s costs and fees. A non-profit advocacy group called the Public Participation Project posts a scorecard rating the strength of each state’s anti-SLAPP protection, while a First Amendment organization called the Reporters Committee for Freedom of the Press details how the law works in these cases.
  • By engaging with the complainer, you are almost always giving her what she wants. Left alone, internet complaints drift away into obscurity. Internet fights, by contrast, are interesting and have a recognizable rhythm that keeps people’s attention. If you respond to the complaint, the complainer now knows that she has gotten under your skin. She has your attention, and your attention to her grievance usually energizes her to make more noise against you. Often a lawyer’s response to a complaint will be posted alongside the criticism. Now the complainer can play the victim. There is an internet phenomenon, often called the Streisand effect, where your attempt to censor embarrassing information or criticism has the unintended consequence of drawing many more people’s attention to the item. I can cite dozens of instances where the attempt to shut someone up online only made the complaint more interesting and drew many more people to the fight than the original complaint or problem ever attracted. In other words, you can make the problem you want solved much worse by making a fuss about it. When you wrestle with a pig, you both get muddy and the pig enjoys it. Don’t give the complainer what she wants. Don’t jump down into her pigpen. Stay above the fray. Better yet, withhold the fray from her entirely.
  • You save time and money by ignoring the complaints. Keep your powder dry for real problems that you can fix. Don’t throw away resources on what is likely to be a losing cause. You will be even angrier when the complaint still stands after you have spent hours and thousands of dollars trying to bring it down. Which leads to what may be the most important reason to simply ignore the criticism . . .
  • Fighting complainers takes attention and energy away from your business. No enterprise succeeds by looking backward rather than forward. Running your company is a full-time job and dealing with complainers, for the reasons discussed above, is a distraction – not a good use of your attention. Every minute spent planning your revenge or plotting to remove a grievance from the web is a minute not focused on growing a successful business.

But in unusual and isolated instances, I have supported attempts to bring down internet business complaints. The deciding element is whether the company has an underlying claim against the online complainer that will win in court. The two likely grounds for a potentially successful claim are 1) the online complainer tells lies that can be easily proven to be both false and impactful on the company, and/or 2) the complainer goes overboard with multiple posts, phone calls, or personal action that will support an allegation of stalking or a violation of a similar state law prohibiting obsessive and harmful behavior.

Admittedly, there are times when a protective strategy involves standing up for oneself, online and otherwise. But think hard before you decide to do it. The super-majority of these cases I have seen over the past 30 years would have ended best for the business if the complaint had simply been ignored.

Feeding the trolls only brings them back for more.

Evolution of Personal Data in U.S. Law

Definitions are important.

How we define words sets the context for how we regulate them. In the U.S., the definitions of legally protected private information are changing, affecting the entire scope of information protection. The change in definitions reflects a desire to protect more of the data that describes how we live.

Early digital-age protections of data in the U.S. tended to apply very specific definitions. First, the government began protecting the particular types of data that concerned legislators, regulators, and the general public – financial/banking information, descriptions of health care, and information relating to children. This was the data that people felt was most private and most likely to be abused. It was the data that many people would have been concerned about sharing with strangers.

The definitions around these laws reflected the specificity of their intent.

Then came official recognition of identity theft as a growing societal problem. As information was digitized, connected to the web, and accessed remotely, Americans saw how this data could be used to impersonate the people it was supposed to describe and identify.  Then came the passage of state laws, soon to encompass all 50 states, requiring notification of affected data subjects when their data had been exposed to unauthorized people.

The terms defined in this first wave of data breach notice laws were based on lists.  Each law listed a set of information categories likely to facilitate the theft of a citizen’s identity. The data breach notice law definitions of personally identifiable data tended to match a piece of identifying information – name or address – with a piece of data that would allow a criminal to access accounts.  This last category included account numbers, credit card numbers, social security numbers, driver’s license numbers, and even birth dates and mother’s maiden name. If it wasn’t on the list, it did not trigger the statute.  Different states added or subtracted pieces of information from the standard list, but the concept of listed trigger data remained the same.
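As an illustration of the list concept, here is a toy sketch. The field names and categories are invented for this example and do not track any particular state's statute:

```python
# Toy sketch of a list-based breach-notice trigger. The categories below are
# illustrative only and do not track any particular state's statute.

IDENTIFIERS = {"name", "address"}
ACCESS_DATA = {"ssn", "account_number", "credit_card", "drivers_license"}

def triggers_notice(exposed_fields):
    """Notice is triggered only when an identifier pairs with listed access data."""
    exposed = set(exposed_fields)
    return bool(exposed & IDENTIFIERS) and bool(exposed & ACCESS_DATA)

print(triggers_notice({"name", "ssn"}))        # True: a listed pairing was exposed
print(triggers_notice({"name", "shoe_size"}))  # False: not on the list, no trigger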

The CCPA shattered this concept. As the first omnibus privacy act in the U.S., the California Consumer Privacy Act brought European thinking to privacy protection law. Rather than addressing a limited vertical market like finance or health care, or a narrow legal goal like stopping identity theft, the CCPA sought to create new rights for individuals to protect data collected about them, and it sought to impose those rights on businesses that had previously considered themselves owners of the data. The CCPA never defined anything as fundamental or nebulous as “ownership” of the data, but it did offer a new, breathtakingly broad definition of the personal information at the heart of the statute.

The CCPA definition was not a list. For years, demographics experts have known that 85% of the U.S. population could be identified by name if you had just three pieces of information about them: gender, zip code, and birth date. The more information about a person in your file, the easier it is to identify her and know many more things about her. So it has been clear to privacy professionals for a long time that relevant personally identifiable information is not a list of names or addresses, but a mathematical calculation. If your company had seven, eight, or nine facts about a person – even seemingly disparate facts like where they were at a given time and what they bought – with the right math your company could probably identify that person. This mathematical accretion concept yields a more useful definition of personally identifiable information, capable of supporting a broader set of rights than the standard lists ever could.
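As a minimal illustration of that accretion math, here is a toy sketch with entirely made-up records; each added fact shrinks the pool of people a profile could describe:

```python
# Toy illustration of the accretion idea: every record below is made up.

people = [
    {"name": "A. Smith", "gender": "F", "zip": "30305", "birth": "1980-04-02"},
    {"name": "B. Jones", "gender": "F", "zip": "30305", "birth": "1975-11-19"},
    {"name": "C. Davis", "gender": "M", "zip": "30305", "birth": "1980-04-02"},
    {"name": "D. Patel", "gender": "F", "zip": "30309", "birth": "1980-04-02"},
    {"name": "E. Chen",  "gender": "F", "zip": "30305", "birth": "1980-04-02"},
]

def candidates(records, **known_facts):
    """Return the records consistent with everything known about a target."""
    return [r for r in records
            if all(r[key] == value for key, value in known_facts.items())]

# Each added fact shrinks the pool of people the profile could describe.
print(len(candidates(people, gender="F")))                                   # 4
print(len(candidates(people, gender="F", zip="30305")))                      # 3
print(len(candidates(people, gender="F", zip="30305", birth="1980-04-02")))  # 2
```

In a real dataset with many columns, a handful of such facts routinely narrows the pool to a single person, which is the point the accretion concept captures.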

The European Union had already built this concept into law when it passed the GDPR. The GDPR includes protections for personal data, which is broadly defined, and then a tighter set for sensitive data, which is defined by category. I expect to discuss definitions and protection of sensitive data in this space next week. The GDPR defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

While “any information relating to an identifiable person” is broad, the California definition is both broad and vague. The CCPA defines personal information as “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” It may be years before this definition is tested and clarified in the courts. Until that time, we will need to operate under the assumption that any information reasonably capable of being associated with a person is regulated data. What about a slice of data that can’t, by itself, be associated with a person, but might help describe someone when linked with other data? That seems to fall within this definition. What falls outside? Given the state of today’s machine learning and analytics, almost nothing.

If California chooses to interpret and enforce this definition broadly, hardly a behavioral action or descriptive fact about a person will escape its purview.  Businesses that market to consumers are not ready to meet this standard for preserving, protecting, and restricting the use of data. We have jumped from one extreme to the other on defining personal information.

Tiny Personal Assistant Poses Big Risk and Privacy Concerns

What if your personal digital assistant were nearly invisible, yet encompassed your entire home?

If you are wondering how this would be possible, consider the new generation of smart assistants designed to be placed into your wall, allowing Amazon Echo-style interaction directly with your house. A half-dollar-sized device in key rooms allows homebuilders to offer voice control over current electronic features like security and music, plus whatever applications come next.

A startup company called Josh.ai is offering a niche product designed to be professionally installed in a home to manage interaction between homeowners and digital house services. The tiny device is embedded in the wall and controlled by a central unit. TechCrunch reports, “The device bundles a set of four microphones eschewing any onboard speaker, instead opting to integrate directly with a user’s at-home sound system. Josh boasts compatibility with most major AV receiver manufacturers in addition to partnerships with companies like Sonos. There isn’t much else to the device; a light for visual feedback, a multi-purpose touch sensor and a physical switch to cut power to the onboard microphones in case users want extra peace of mind.”

Installing one of these systems gives homeowners accustomed to addressing Siri and Alexa a voice-command interface to all domestic systems. In addition, it replaces button or screen wall interfaces with tiny microphones that promote health through touch-less design and are unlikely to look ugly and outdated when the home is sold years later. Josh’s “nearly invisible” footprint can be an advantage.

The initial Josh business model is interesting because it licenses its installed services to the homeowner and the hardware comes as part of the package. Josh offers licenses to its technology on an annual, five-year, or lifetime basis.  And would the longest license be for the lifetime of the home or the lifetime of the owner? Those of us stuck with old wiring and outmoded wall units know the cost and frustration of making changes, and yet, if the technology was still useful when the house is sold, a seller would want to include the right to use it as a fixture.  This is especially true if the tech controls all the other tech in the house.

If you are building the tech into your home, I would expect that longer licenses would be more desirable. But the licensing model still leaves open the risk that you build this system into your home and are unable to pass it along to the next owner. Further, buying this technology from a start-up involves a very real possibility that the company will not exist by the time you sell your home, or that its business model will have changed so that it no longer supports your hardware.

It also acclimates us to a microphones-everywhere lifestyle where the listening devices are built right into the walls, turning all of our future homes into versions of the U.S. embassy in Moscow. Yes, the Josh wall unit includes an “off” button, but will we always remember to use it, and will it always work in the manner expected? And will Josh make arrangements with police departments, as some famous in-home security systems have, allowing the police to turn on these microphones and listen in to your home? If so, will the police need a warrant to do so, or just an interest in knowing what is happening inside your home?

The Josh system is a natural evolution of the personal digital assistant, but legal, privacy, and risk concerns cast a shadow on wide adoption.

2020 Ballot Initiatives Affect Tech Industry

Privacy, the gig economy, and access to digital information in our cars were all on ballots Tuesday for direct decision by the voters. U.S. technology companies will be affected by the results of these propositions.

November 3, 2020, was a substantial election for direct ballot initiatives affecting technology, the internet, and digital information. Several states offered tech-minded direct voter propositions, and the reverberations of their passage will affect us for many years.

Most notable is the passage of Proposition 24 in California, known as the California Privacy Rights Act, which supplements the omnibus consumer privacy bill California's legislature passed in 2018. That legislation, the CCPA, went into force at the beginning of 2020, with additional regulations issued throughout the year and attorney general enforcement beginning in July 2020. Now, about 120 days later, the entire privacy landscape changes again for businesses managing consumer information in California.

Along with granting consumers new privacy rights, California Proposition 24 provides for the creation of the first privacy enforcement bureaucracy in the United States. Europe has its data protection authorities and Canada has the Office of the Privacy Commissioner, but until now, no U.S. authority has existed solely to enforce privacy protection laws. Most states leave that work to their attorneys general, and the federal government offers a cross-section of enforcement from agencies with broader mandates like the Federal Trade Commission, the Office of the Comptroller of the Currency, and the Department of Health and Human Services. So the California-specific enforcement dynamic will be watched closely by other states and likely the executive branch of the federal government.

The L.A. Times reported “The new privacy law brings California more closely in line with the European Union’s General Data Protection Regulation, and as the strongest law in the U.S. is likely to serve as the standard for companies across the nation. The next step in its implementation is assembling the new agency’s five-seat board, which will be appointed by the governor, state attorney general, state Senate rules committee, and speaker of the state Assembly.” The article also reported, “Despite the ballot measure itself clocking in at 52 pages of dense technical language, many of the practical details of compliance and enforcement remain to be hashed out.”

Proposition 24 will likely have lasting effects on how websites interact with their users, shaping the administrative choices consumers make as they first click on a site. As CNET wrote, “Proposition 24 will also likely set in motion an effort to create a system to let internet users automatically tell websites not to collect their data. Many websites show a pop-up to new visitors letting them either accept their tracking policies or go into a settings section and uncheck several boxes to opt out.

Other sites make it even more complicated, offering a list of several third-party web tracking services that users can visit individually to opt out of their data tracking.”
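One emerging example of such an automatic signal is the Global Privacy Control proposal announced this fall. Here is a minimal sketch of how a site might honor it; the Sec-GPC header comes from that draft specification, while the function and its handling logic are hypothetical:

```python
# Minimal sketch: honoring a browser-level "do not sell or track" signal.
# The Sec-GPC header comes from the draft Global Privacy Control proposal;
# everything else here (function name, handler logic) is hypothetical.

def tracking_allowed(request_headers):
    """Treat Sec-GPC: 1 as an opt-out of data sale and cross-site tracking."""
    return request_headers.get("Sec-GPC") != "1"

# A site might use the result to decide whether to load third-party trackers.
print(tracking_allowed({"Sec-GPC": "1"}))           # False: visitor opted out
print(tracking_allowed({"User-Agent": "example"}))  # True: no signal present
```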

The result of another California ballot initiative also made tech news. Proposition 22, which passed with roughly 58 percent of the vote, allows gig economy companies like Uber and Lyft to continue treating their drivers as independent contractors rather than full employees with benefits, preserving incentives to bring on more workers and flexibility for those who want to drive part-time. ABC News reported that Uber, Lyft and other app-based ride-hailing and delivery services spent some $200 million to support the measure. Labor unions had pushed hard for employee status and lost.

In Massachusetts, over 75% of the electorate voted in favor of the “right to repair” law forcing automakers to open up their wireless APIs so that independent repair shops can access the telematics information stored in the vehicle. We covered this ballot initiative in the blog on election day, and its importance in the ongoing fight to allow consumers full options to repair the things that they buy. This referendum was one of the most expensive in state history. As reported in a Massachusetts-focused website, “Both sides spent a combined $42.8 million, exceeding the 2016 record of $41.2 million over a ballot measure on charter schools that failed. The prior record was set in 2014 when $15.5 million was spent on an unsuccessful ballot measure to repeal the gaming law. . . the “yes” side spent roughly $6.50 per voter, while opponents of the ballot measure spent $6.69 per voter. The high spending on the ballot measure, especially from out-of-state donors isn’t uncommon in a proposition that could have ripple effects in other states, a campaign finance expert said last month.”

The voters in Michigan passed Proposal 2, which amends the state constitution to give electronic data and communications the same protections from unreasonable search and seizure that the homes and papers of Michigan residents receive. The Proposal also requires a search warrant to access a person’s electronic data or electronic communications, under the same conditions required for the government to obtain a search warrant for a physical home and property. As a first-of-its-kind constitutional limitation on police powers, it is expected to prompt the ACLU to push for similar initiatives in other U.S. states over the coming years. Advocating for the passage of Proposal 2, Merissa Kovach of the ACLU wrote in the Detroit Free Press, “Because electronic communications can reveal the most intimate details of our lives, this information must be covered by the same procedural safeguards that already exist for the search and seizure of a person’s physical property. If government agents need a search warrant to enter our homes and search through our closets and the papers locked in desk drawers, the same sort of due process should be required if they want to go rifling through electronic data that can be every bit as revealing.”

One of the most underhanded and sinister power plays in technological history is the twenty-year fight by cable and telephone companies to persuade state and local lawmakers to pass laws prohibiting the formation or provision of public-access internet systems. Such laws force a virtual monopoly or duopoly of cable-oriented access on cities and rural areas, forcing all of us to pay the cable and Wi-Fi bill to huge private companies and leaving the poor without viable alternatives. Denver voters decided to opt out of such a law prohibiting the use of tax dollars to build a municipal internet. In Chicago, an advisory referendum asking the city to ensure all communities have access to broadband passed with about 90% of the vote, signaling that Chicago voters believe internet access should be a public utility.

When tech is on the ballot, the voters can answer directly which business models should be protected and which run against the public interest.  As our electronic lives become more important, tech companies are likely to see more interest in the coming years in direct voting on issues affecting the digital world.

The Sandbox Dilemma: Massachusetts Votes

Who owns the stuff you buy?

This used to seem so easy.  Of course, you own the house, car, refrigerator, books, watch, shoes, pants, and everything else you bought – we always thought so. But not anymore. The companies that sold you these things claim that they have rights to what you bought and can dictate how you can use it.

I am not talking about bank lending here, where a finance company has a security interest in your items because you borrowed the money to buy them. I am talking about the things you buy outright.

We thought the intellectual property laws had settled this matter with something called the “First Sale Doctrine.” Also known as the exhaustion doctrine, this patent and trademark principle states that once a manufacturer sells an item that is subject to a patent and/or trademark, the manufacturer no longer has rights to enforce its intellectual property over the person or company who bought the item. The manufacturer can’t sue a legitimate buyer for infringing use of the patent or trademark by using the product, even if the patent/trademark holder doesn’t approve of the use, like buying something in order to sell it into a secondary market.

In other words, a patent or trademark holder has an exclusive right to sell products based on its invention or carrying its mark. But once those items are sold into commerce, those rights end (are “exhausted”), and the purchaser of the product may do what she likes with it. In reviewing this doctrine, the U.S. Supreme Court explained, “the authorized sale of an article which is capable of use only in practicing the patent is a relinquishment of the patent monopoly with respect to the article sold.”

Despite these restrictions in U.S. intellectual property law, manufacturers of products and platforms will always be driven by a desire to control the items they release out into the world. The motivations for this inclination are many, from the customer-centric drive for quality control and the reputation management nestled therein, to the simple desire to create a mini-monopoly in all aspects of the product cycle and the huge profits this can deliver. I have written in this space about how Apple fights to control all aspects of apps allowed on its products, at least in part because of the 30% revenue skim Apple takes from each sale of apps to Apple hardware customers – billions of dollars every year. Nice work if you can get it.

The “sell a razor and profit from ongoing blade sales” business model encourages companies to find ways to lock out other producers of supporting products. While hard-goods companies have been trying for years to force the rest of us to only use company-approved materials in their company-developed sandboxes – see HP/Xerox with copier toner and Keurig with coffee pods – the recent rise in software-driven products has made the strategy easier for producers.

Everything has software in it these days, from wearables to SUVs, which are essentially computers on wheels. And software is licensed to you, not sold. While many of these IoT items supplement the core functionality of the original product – car, refrigerator, cash register – some, like tablets, are little more than software parking garages. So companies can use license restrictions on software to limit how third parties can interact with your equipment. This expansion of restricted sandboxes has raised legal issues recently.

On the ballot today in Massachusetts is Question 1, which would expand the state's 2013 right-to-repair law to force auto manufacturers to further open up the car's wireless telematics – the intricate computer code monitoring modern vehicles – so that car owners can take those vehicles to independent repair shops. Right now, auto manufacturers are using the complexities of the proprietary telematics sandbox to force car owners to seek repairs from authorized dealers. There is lots of money in car repairs.

According to Wired, “If a majority of Massachusetts residents vote Yes on Question 1 this fall, carmakers would have to install standardized, open data-sharing platforms on any cars with telematics systems starting with the model year 2022. ‘Owners of motor vehicles with telematics systems would get access to mechanical data through a mobile device application,’ the ballot summary reads.”

With typical 2020 election hyperbole, auto companies are fighting against Massachusetts Question 1 in commercials claiming the open auto standards will be used by stalkers, sexual predators, and perpetrators of domestic violence. More realistic arguments by auto manufacturer and dealer groups claim that forcing open the manufacturers' telematics would affect their ability to comply with best practices in cybersecurity. Those supporting passage of the question say it is only an issue of whether automakers can force their own customers into spending repair money in the automakers' business ecosystems, or whether car owners have a right to find and support their own trusted repair shops.

This type of law could likely pass only by direct ballot initiative, as car dealers have the most to lose if the initiative passes, and the dealer lobby is one of the most powerful political forces in most state legislatures. Your state legislators are unlikely to consider passing a law that harms the dealers. Further, as the New York Times observed, changes affecting the economics of the vehicle industry are difficult to pass: “that’s because major manufacturers sit on the panels that set guidelines for things like environmental impact. As a result … tougher standards can be difficult to achieve.”

We pay significant money to own a car, and voters in Massachusetts will have an opportunity to decide whether owning a vehicle means that it belongs to you, or whether manufacturers can build it in such a way as to dictate your choices about your own property. I will be writing more about this topic soon, but stay tuned to see how the voters in Massachusetts will determine the fate of their own vehicles.

Easy Hacking Tools Facilitate Bad Behavior

A few years ago, if you wanted to wreak havoc online, you needed some skill. You needed to understand coding and how to break into other computers.  You needed to develop attack bots and probe for vulnerabilities.

Now you just need to point and click.

Aside from the rise of social media, which seems to have unleashed the nastiness in many people who might not otherwise express it, the most troubling recent aspect of internet culture is the democratization of hacking tools and destructive software. Dangerous, damaging, and life-altering tools are now usable by nearly anyone who can find them online.

As observed last year on Security Boulevard, “This easy access to all sorts of hacking tools may be responsible for the significant spike in cyberattacks of all kinds in recent years. To hack a system, you don’t need the professional programming skills you once did; it’s enough to download an appropriate tool from the malware repository and follow the instructions on one of the myriad “how to hack” websites. The threat pool just grew by thousands of percent.”

The author describes a social engineering toolkit, a preprogrammed Linux application that automatically steals user credentials from a sign-in site. “The application has various modules designed to fool users into sharing their credentials and/or get them to click on links that will install credential-stealing malware on their systems.

For example, a hacker can choose to use the Web-Jacking Attack module, which provides a legitimate-looking URL (i.e., not connected to a malware site) that, when clicked, opens a pop-up window that contains a different URL, one that leads to a malware site where a keylogger or other malware can be installed. All the hacker has to do is choose the malware, choose the site they want to forge, and create a web link. It’s all free, and, as the site notes, ‘for educational purposes only.’”

In recent weeks, other proto-hackers have been “educating” themselves with easy tools. According to Ars Technica, Trickbot is a for-hire botnet that has infected more than a million devices since 2016, selling access to the illegal network to anyone who wants to commit crimes online. The botnet has been so harmful that an industry task force led by Microsoft has been working to bring it down, initially managing to take down 62 of the 69 servers Trickbot used, forcing it to rely on the servers of a competing criminal group to distribute its software. Ars Technica reports that Microsoft Corporate VP for Security & Trust Tom Burt, who has overseen several global botnet takedowns in the past, said the industry is getting better at them: after identifying new Trickbot servers, Microsoft and its partners have been able to locate their respective hosting providers, initiate the required legal actions, and take down the new infrastructure in as little as three hours.

Apparently, some of the world’s most skilled hackers are sharing the most sophisticated hacking tools as “prizes” for poker tournaments, poetry competitions, and rap battles during the worldwide COVID crisis. Prizes include not only access to stolen credit cards and personal information but also scripts to automate the creation of cloned websites and e-shops used to harvest user credentials and e-wallets.

Even more troubling this week is a report from Wired that a new pornbot can be used by nearly anyone to create deepfakes targeting women online. The article claims that the tool, operating on the messaging app Telegram since July, has targeted more than 100,000 women and can be used to create nude images of ordinary people known to the person operating the tool. Apparently, the resulting images could pass for genuine.

According to Wired, “The still images of nude women are generated by an AI that ‘removes’ items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.”

The Washington Post covered this terrifying tool also, writing, “Ten years ago, creating a similarly convincing fake would have taken advanced photo-editing tools and considerable skill. Even a few years ago, creating a lifelike fake nude using AI technology — such as the “deepfake” porn videos in which female celebrities, journalists, and other women have been superimposed into sex scenes — required large amounts of image data and computing resources. But with the chatbot, creating a nude rendering of someone’s body is as easy as sending an image from your phone. The service also assembles all of those newly generated fake nudes into photo galleries that are updated daily; more than 25,000 accounts have already subscribed for daily updates.” As anyone who follows the recent history of technology knows, once a tech tool is good enough to use and gains popularity, its makers keep improving its effectiveness. So expect better customizable pornbots in the future.

As these tools proliferate and become easier to use, anyone can destroy lives or businesses just by following a few simple instructions. If we are concerned about fake news and conspiracy theories now, in the blink of an eye our fears will be justified by deepfakes showing real people in fake situations, making comments on video that they never made. Laws exist in some states to address revenge porn like that being created with the pornbot, but they are not consistent or available in every state. And they don’t address other kinds of revenge deepfakes.

Renting entire botnets, winning hacking tools in poker games, easily creating deepfakes, and effortlessly stealing web credentials are just the start. If this trend continues to expand, none of us will be safe.

To Encourage Autonomous Vehicles in Your State, Create a No-Fault Insurance Pool

Legislators can become heroes.

With one act of non-partisan legislation, your representatives could save thousands of lives, could boost the U.S. manufacturing economy, and could make all of our lives easier, safer, and less expensive.

Oh, and the legislation would pay for itself.

Why would you not do this?

The act of legislation would be to create a no-fault insurance fund for autonomous vehicles. It would be best to operate a nationwide fund, but that seems unlikely in the near term for Congress. However, any state that organized such a fund would immediately become the hub for the autonomous vehicle industry.

Why would a state want to encourage a large percentage of the vehicles on its roads to be autonomous? Because thousands of lives would be saved every year. Of course, there will still be accidents, injuries, and deaths on the road. That is why we need insurance to protect the people harmed. Any time thousands of machines weighing tons move at 30 miles an hour or more, the laws of physics will occasionally intrude, and our current economy relies on moving people and goods across distances. We need motorized vehicles to operate our society as it now exists.

Human-operated motorized vehicles are a menace. According to the National Safety Council, there were nearly 40,000 deaths in car crashes in the U.S. last year, which is relatively consistent with the previous two years. Some studies have shown that as much as 95% of these crashes involved some kind of driver error, from disobeying traffic signs and signals to substance abuse to narcolepsy to distraction to simple mistakes like pressing the gas pedal instead of the brake. U.S. Department of Transportation researchers estimate that 94% of fatal crashes could be eliminated if all the vehicles on the road were autonomous – by that arithmetic, something like 37,000 of last year’s deaths. (For anyone interested, this report also contains a section called Best Practices for Legislatures, discussing safety-related components that states should consider incorporating into legislation.)

A recent ZDNet article stated,

“Elderly drivers and teenagers are particularly likely to benefit from autonomous vehicles because the cars can monitor a situation that a driver might not be able to themselves, said Wayne Powell, vice president of electrical engineering and connected technologies for Toyota Motors North America.

‘Teen drivers are classically a high risk category of people. If you put a teen driver in a car that was looking out for that person, it won’t let them make bad choices. That could also have an immediate benefit,’ Powell said.”

Autonomous driving control systems remain vigilant.

But autonomous vehicles do not necessarily fit our current insurance structures. Five percent of current fatal accidents still represents many deaths, and there will be a transition period before we can approach this level of safety. With no driver to hold at fault, who should be held responsible for a victim's injuries: the vehicle owner, its manufacturer, its software designer, or someone else? I would suggest that a government-managed insurance fund is the best option.

The fund could operate on a “no-fault” basis, compensating injured people regardless of who is found to be at fault for the accident. Our current system punishes mistakes in driving, but anyone who has been hit by an underinsured motorist or a city-run vehicle operating under sovereign immunity can attest to the holes in the system. We should take care of people injured in accidents and their families, and the current system fails to provide help in many cases.

No-fault insurance is not only more beneficial to society, but it also makes more sense in a world of autonomous vehicles. Accidents involving these vehicles will be substantially fewer, so the costs will be greatly reduced. Plus, the logic of blaming the driver in our current system is not as resonant for an AI driver. If the software is flawed, then everyone has a product defect suit against the manufacturer; but if the accident was essentially unavoidable, a no-fault insurance pool will make sure that the injured are compensated.

And if the state offers the pool, it can fund the pool through a payment from the sale and licensing of each autonomous car on the road. Removing liability for accidents will greatly increase manufacturers’ willingness to sell, lease, or offer autonomous taxi service in a state, removing one of the largest risk hurdles for companies looking to put these safer vehicles into fleets and out on the road.

And the self-driving cars are ready to go right now. As reported by Ars Technica, General Motors subsidiary Cruise has received permission to operate its modified Chevrolet Bolts without drivers in San Francisco by the end of 2020, and Alphabet’s self-driving vehicle developer Waymo is expanding its pay-to-ride service in autonomous vehicles around Phoenix. Until recently, autonomous vehicles had been permitted to drive only with a safety driver available, but that is already changing just as the next generation of AI-driven vehicles is ready to go.

The Ars Technica article tells us, “Four other companies—Waymo, Amazon-owned Zoox, delivery-robot company Nuro, and AutoX—have received permits to test totally driverless vehicles in the state. But none is testing its driverless cars in areas as hectic as San Francisco. The [Cruise] permit is a sign that companies like Cruise “are transitioning out of the development phase of the technology,” says Kyle Vogt, the company’s CTO.”

So now is the time for legislative encouragement of this vital industry. I will leave for later a deeper analysis of the policy reasons for not laying vehicle liability on owners or manufacturers, but it should suffice to say that economically penalizing either of them for driving incidents in which they are not direct participants is illogical, and it will discourage the making and use of these much safer vehicles. A government-run no-fault system removes these disincentives while protecting people on the roads.

The system would work by barring claims against autonomous vehicles or those associated with them, in exchange for claims that could be raised against a state insurance fund. The fund trustees would evaluate and pay claims much as a private insurer does in the present system. Rules for resolving claims could be set by public policy, rather than by the best interests of private insurers. Vehicle manufacturers or fleet services with the worst safety records over time could be expected to pay more into the fund than those with better records, as the sketch below illustrates.
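To make those funding mechanics concrete, here is a toy sketch of a risk-weighted pay-in formula. Every name, rate, and threshold in it is hypothetical; no pending legislation specifies any such formula:

```python
# Toy sketch of a risk-weighted pay-in for a state no-fault fund. All names,
# rates, and thresholds are hypothetical, chosen only to show the mechanics.

BASE_FEE_PER_VEHICLE = 250.00     # flat annual pay-in per vehicle, in dollars
BASELINE_INCIDENT_RATE = 0.5      # assumed incidents per million fleet miles

def annual_contribution(vehicles_on_road, incidents_last_year, miles_driven):
    """Pay-in scales with fleet size, weighted by the fleet's incident rate."""
    rate = incidents_last_year / (miles_driven / 1_000_000)
    # Fleets at the baseline rate pay the base fee; riskier fleets pay more.
    # The floor keeps even a spotless fleet contributing something.
    multiplier = max(rate / BASELINE_INCIDENT_RATE, 0.25)
    return vehicles_on_road * BASE_FEE_PER_VEHICLE * multiplier

# Two hypothetical fleets of equal size with different safety records:
print(annual_contribution(10_000, 25, 50_000_000))   # baseline rate -> $2,500,000
print(annual_contribution(10_000, 100, 50_000_000))  # 4x the rate -> $10,000,000
```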

A legislature thinking far enough ahead to draft and pass a law organizing a no-fault insurance fund for self-driving vehicles will immediately place itself at the forefront of autonomous vehicle adoption. This is an issue that both protects businesses and protects people, and it should be seriously considered.

How Risky Is Tossing Your Old Servers? Maybe a $60,000,000 Fine

We all have them. Old computers sitting around in storage, never to be used again. Broken servers that have passed their prime. Laptops abandoned for their newer, shinier versions.

And what do you do with them? If these are business computers and you were considering tossing them into the trash can or hauling them to the landfill, you could be courting serious risk for your company. Improper disposal of data-bearing hardware can lead to embarrassment, lawsuits, and fines.

There are environmental issues, of course.  The FTC publishes a notice on disposal of computers that states: “Most computers contain hazardous materials like heavy metals that can contaminate the earth and don’t belong in a landfill. So what are your options? You can recycle or donate your computer. Computer manufacturers, electronics stores, and other organizations have computer recycling or donation programs. Check out the Environmental Protection Agency’s Electronics Donation and Recycling page to learn about recycling or donating your computer.”

But the data exposure is another ballgame entirely. We were reminded of this fact last week when the Office of the Comptroller of the Currency, a lead regulator for national banks, fined Morgan Stanley Bank and its Private Bank $60 million for risk management issues related to the closing of two wealth management data centers.

The American Banker reported, “The OCC found that the bank did not take proper precautions in dismantling and disposing of outgoing hardware that contained sensitive customer data and failed to properly supervise the vendors that Morgan Stanley tasked with wiping customer data from the old equipment before it was resold.” The OCC reported in its press release on the fine, “Among other things, the banks failed to effectively assess or address risks associated with decommissioning its hardware; failed to adequately assess the risk of subcontracting the decommissioning work, including exercising adequate due diligence in selecting a vendor and monitoring its performance; and failed to maintain appropriate inventory of customer data stored on the decommissioned hardware devices.”

But the OCC investigation was not the only attack on Morgan Stanley’s computer disposal procedures arising from the decommissioning of these data centers. Two lawsuits have also been filed by Morgan Stanley clients and former clients who were notified that the data center closing placed their information at risk. The lawsuits claim that unencrypted private financial data remained on the decommissioned computers after they left the bank’s possession and that a software flaw left previously deleted data on the computer hard drives. These putative class action suits have not yet specified damages.

As Morgan Stanley can now attest, termination/destruction hygiene is a crucial part of any information technology program. And like many aspects of modern computing and e-commerce, if safe computer destruction is not part of your company’s core competence, then you are likely best served by hiring professionals to perform the task for you.  But make sure you know what you are getting.

Computer recycling, destruction, and refurbishment involve the full removal of unencrypted data from drives and storage units. We all know that simply deleting an item does not necessarily remove the item itself, just the ease of access to it – like boarding up the door of a house. The house is still there, just harder to enter. Your vendor handling destruction should be able to attest to writing over the important drives or otherwise destroying the data or the drives themselves.

As stated on the U.S. Homeland Security website, “Do not rely solely on the deletion method you routinely use, such as moving a file to the trash or recycle bin or selecting “delete” from the menu. Even if you empty the trash, the deleted files are still on the device and can be retrieved. Permanent data deletion requires several steps.” Homeland Security promotes full physical destruction of the device to prevent others from retrieving sensitive information off of a decommissioned computer.

It also promotes overwriting, in which strings of ones and zeros are written over the data to completely obliterate it.  The site suggests using either of the following:

  • Cipher.exe is a built-in command-line tool in Microsoft Windows operating systems that can be used to encrypt or decrypt data on New Technology File System drives. This tool also securely deletes data by overwriting it.
  • Clearing is a level of media sanitization that does not allow information to be retrieved by data, disk, or file recovery utilities. The National Institute of Standards and Technology (NIST) notes that devices must be resistant to keystroke recovery attempts from standard input devices (e.g., a keyboard or mouse) and from data scavenging tools.

Either of these options can help assure that your company meets its obligations for the proper disposal of outdated computers. For teams scripting their own decommissioning steps, a minimal sketch of the overwriting idea follows.
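Below is a toy illustration of that overwriting concept, assuming a plain file on a conventional magnetic drive. It is not a certified sanitization procedure and is no substitute for vetted tooling or physical destruction:

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with zeros, ones, then random bytes, and delete it.

    Toy illustration only: SSDs, wear leveling, and journaling filesystems can
    keep copies elsewhere, which is why vendors attest to device-level wiping.
    (On Windows, the built-in `cipher /w:<dir>` command similarly overwrites
    deallocated space on a volume.)
    """
    size = os.path.getsize(path)
    patterns = [b"\x00", b"\xff", None]  # None means cryptographically random bytes
    with open(path, "r+b") as f:
        for i in range(passes):
            pattern = patterns[i % len(patterns)]
            f.seek(0)
            f.write(secrets.token_bytes(size) if pattern is None else pattern * size)
            f.flush()
            os.fsync(f.fileno())  # force this pass out to the device
    os.remove(path)
```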

The end of a computer’s life can be just as dangerous as its active use for exposing sensitive data. Your company needs a set of written policies and programs to establish that computers are disposed of in a legally compliant manner. Fines, lawsuits, and significant customer conflict may follow if you don’t get this right.

GDPR and CCPA Uncertainty: What Should a Company Do?

Some companies don’t seem to care about privacy compliance.

They may not have the money to build a compliance regime. They may not believe in the laws or believe that the laws would ever be applied to them. They may just not have thought much about it.

However, many other companies care deeply about the privacy of their customers, their data protection regimes, and meeting legal and regulatory requirements in this space. And the data leaders at these companies are troubled right now.

If they had a wide geographic footprint, the data teams at these companies spent much – maybe all – of 2017-2018 preparing to comply with the GDPR and 2019-2020 assuring compliance with the CCPA. Various changes in the regulations, case law, and enforcement of these laws, as well as changes in the laws of Canada, Brazil, and other countries, have garnered compliance attention.

But now our do-your-homework, 10,000-steps-a-day, be-prepared companies are thrown into a tizzy. How can you meet your obligations on the most significant and most dangerous (to companies) privacy laws if the requirements from those governments are not clear? What do you do when certainty melts away, but your bosses still count on you to protect the company by assuring legal requirements are met?

How does a conscientious Chief Privacy Officer protect her company when the upcoming California privacy ballot initiative threatens to change all of the rules internalized from the recent implementation of the CCPA, and the Schrems II decision not only chucks out the EU-US Privacy Shield but may have made all data transfers from the EU to the US illegal? No amount of planning can assure that your U.S.-based company is correctly following the EU or California privacy laws on January 1, 2021. What should your company do?

The two jurisdictions provide different types of uncertainty.

EU privacy laws are longstanding and relatively consistent. However, the recent Schrems II decision specifically invalidated a regime that some 5,300 conscientious companies were using to establish compliance with those laws, and the reasoning behind the decision casts doubt on all of the other formerly approved methods for establishing legal compliance when transferring data from the EU to the US. We have already discussed in this space some of the reasons for the uncertainty following the Schrems II decision. Official guidance in Europe runs the gamut from the UK’s privacy regulator, the ICO, which essentially told companies to keep calm and carry on with direct permissions and the statutorily prescribed contract clauses, to some of the German state privacy regulators, who said that no private data should pass from the EU to the US and who seemed to believe the result was long overdue.

As a US Commerce Department official wrote after the decision, “The [Schrems II] ruling has generated significant legal and operational challenges for organizations around the world at a time when the ability to move, store, and process data seamlessly across borders has never been more crucial. Cross-border data flows have become indispensable to how citizens on both sides of the Atlantic live, work, and communicate. They power the international operations and growth of American and European businesses of every size and in every industry, and underpin the $7.1 trillion transatlantic economic relationship.”

So US companies with strong retail and commercial interests in Europe can no longer point to an approved method of transferring information in a protected fashion outside of Europe, even for their own employees’ data on their own servers. It may cost tens of millions of euros for many of these companies to localize the storage and processing of their European personal data – if it is even possible for them to do so. Localization is not a great option for most, especially when based on a court decision that is being interpreted in so many different ways.

So if your company does not intend to localize its European data, and it is unwilling or unable to simply stop collecting or moving EU personal data, it should place itself in the best possible light for the EU data protection authorities. This means 1) protecting EU personal data in a manner that comports with EU data protection laws, and 2) finding cover in binding corporate rules, approved contract clauses, or documented permissions from the EU-resident data subjects.

Will this give an assurance of protection from the data regulators’ wrath? No, but then again, nothing will at this point. Will this action provide the best possible protection in this unfortunate predicament? Yes. Short of strict localization of storage and processing, or withdrawal from Europe, this is about as much assurance as your U.S. company can get. Even companies not named Facebook will be evaluated for their treatment of data, and those that care to comply should try to comport with the rules that seem to apply.

California is a different animal. Its omnibus privacy act is less than a year old, and enforcement started at the beginning of last quarter. So this is new for all of us; no such law existed before in the US, and the diligent corporate privacy officer can be excused for not knowing exactly how it will be enforced and what its effect will be on her company. All we know for sure is that the California AG will be watching, likely targeting scofflaws, and that the CCPA’s statutory damage provision has ushered in an era of non-stop class action suits against victims of hacks and ransomware attacks.

And yet, even this level of certainty will likely be defenestrated in a month. A new ballot initiative, called the CPRA, is expected to pass overwhelmingly in California this fall, and it will add more rights and requirements to the privacy landscape, in addition to being nearly impossible to revise because it is a ballot initiative rather than an act of the legislature. So all of its overbreadth, vagueness, and ambiguity will be puzzled over by businesses and possibly ironed out through regulations in the future. In the meantime, affected companies will need to continue complying with the CCPA as we know it and simply keep an eye on changes in the law, knowing that those changes will force behavioral adjustments in the near future. Once again, where certainty is impossible, coping under the current regime is the best we can muster.

This is a deeply uncertain time for international companies who care about data protection and legal compliance. The way forward is clear, but so are the risks of proceeding in any direction at all.