Emerging Tech Companies: It’s Not Your Uncle’s Dot.Com Regulatory Environment Anymore for Privacy and Data Security
Tuesday, July 24, 2018

The Silicon Valley motto, “Move fast and break things,” is alive and well in the era of Big Data, artificial intelligence and machine learning. But moving fast can also lead to stubbed toes and worse, as recent privacy troubles at major tech leaders such as Facebook and Uber demonstrate.[1] While certain economic sectors may be enjoying some relaxation of regulations in the current political climate, the technology sector is facing new regulations and increased calls for further regulation, in particular with respect to data privacy and security.[2] Regulators are issuing new rules that are likely to make breaking things more costly for data-driven businesses that like to move fast. Emerging technology companies that plan to succeed in what some have called the “algorithmic society”[3] and others the “surveillance economy”[4] should pay attention to this tightening regulatory environment and be prepared for more questions about privacy and security from investors, strategic partners, consumers and, possibly, regulators.

Three recently introduced laws in the U.S. and the EU are bellwethers of what may be a shift in public sentiment about data privacy in the digital economy. In March 2017, New York State put into effect the country’s most detailed and stringent state cybersecurity regulation,[5] directed at banks, insurers and other entities licensed by the state’s Department of Financial Services (the “New York Cybersecurity Regulation”). Although few technology companies are directly covered by this new regulation, companies that provide services to covered entities must now deal with extensive vetting and stricter contracting regimes.

In May of this year, the European Union scrapped its 23-year-old data protection Directive and upgraded to an even stronger set of data protection rules in the new General Data Protection Regulation, or “GDPR”.[6] Of the many new features of the watershed GDPR, at least two stand out as being of particular interest to U.S.-based tech startups: (1) the GDPR’s jurisdictional provisions reach non-EU-based companies that target data subjects in the EU (whether the goods or services are paid or free) or that profile or monitor data subjects located in the EU;[7] and (2) the GDPR prohibits certain types of automated processing, including data subject profiling, unless strict conditions are met (one of which is a data subject’s “explicit consent”).[8] In the parlance of the GDPR, “automated processing” encompasses technologies such as AI and machine learning. Emerging companies with aggressive strategies for collecting and processing personal data in European markets will need to understand whether their U.S.-based operations can bring them within the GDPR’s jurisdictional net and, if so, what they must do to comply.

Finally, this past June, California enacted the most sweeping U.S. state consumer data privacy law to date, the California Consumer Privacy Act of 2018 (or “CCPA”).[9] Although the CCPA will not go into effect until January 1, 2020, it creates several new rights protecting consumer control over the sale and disclosure of “personal information” (defined more broadly than in previous state privacy law), as well as new enforcement provisions with teeth, including a statutory damages scheme for covered entities’ failure to maintain reasonable security measures to prevent data breaches. Nor does the CCPA neatly track the GDPR, meaning that businesses hoping to collect consumer data in both the EU and California will need separate compliance strategies.

This wave of new data privacy and security laws will challenge affected startups and emerging companies, which usually have limited compliance budgets or none at all. Ignoring these developments in data privacy regulation is not an option for companies that plan to take advantage of Big Data analytics and machine learning, but panic isn’t an appropriate response either. Startups that are just sketching out or refining their business models and preparing for investor diligence can benefit by asking a handful of basic questions about how personal data will figure in the company’s operations. The answers may pay off in the form of fewer data privacy compliance headaches down the road.

  1. Does the business model include a data strategy that takes account of privacy and security? 

In the era of Big Data, AI and machine learning, businesses have strong incentives to treat every bit of data collected from or about consumers, no matter how trivial or incidental, as a potential asset for future analytics projects, marketing initiatives, service improvements, profiling, etc. From this perspective, no data is ever completely useless or without value. For startups whose business model is essentially based on data mining, data aggregation, analytics, predictive services, profiling, ad targeting, or data brokerage, for example, this perspective makes sense because data is a core product or service.

The data-hungry approach describes many startup models, but it may not be appropriate for many others.  Truly data-driven startups that plan to amass or tap into large amounts of personal data on consumers are committing themselves to dealing with significant and likely growing privacy regulatory frameworks, of which the new GDPR and CCPA are only the latest iterations.  But companies that are not particularly data intensive can spare themselves some regulatory headaches and limit their exposure to personal or sensitive consumer data by restricting the collection and storage of such data only to what’s needed to operate the business. 

Simply put, startups should determine early on just how central personal data is to the company’s mission and value, and develop risk-mitigation policies accordingly. Personal data that is not used or needed in the business, but is stored in personally identifying form, is not an asset; it is a liability waiting to happen.

  2. What is the privacy compliance and liability risk profile of the company’s data strategy?

By taking time to understand how significant personal information is to the business model, early stage companies can assess where they sit on the spectrum of regulatory privacy risk. Will the company be a high-volume user of personal information, whether collected directly from consumers or from third parties? Will the company operate in a significantly regulated sector for data privacy, such as healthcare, financial services, insurance, education, child-oriented services or telemarketing? Will the data collected and processed include sensitive personal characteristics, such as race, ethnicity, sexual orientation, personal health records, precise geo-location, religious beliefs, or criminal history? Will the company’s footprint potentially place it within the reach of international jurisdictions that have stronger privacy legislation than the U.S., such as the European Union? Or does the company’s business model largely avoid these potential minefields of privacy regulation?

Answers to these questions will help management:

  • Understand just how much time and expense may need to be directed at mitigating privacy and security risk in business operations and at transferring risk through liability insurance;
  • Formulate the company’s compliance policies and procedures for both personnel and third party service providers and business partners; and
  • Clarify for investors the type and extent of privacy risk that attaches to the company’s business model.
  3. Is there a baseline set of agreed principles and practices (the company’s ground rules) for addressing consumer privacy?

Startups with data-intensive business models can build out their data privacy practices by assessing where they stand regarding FIPPs (“Fair Information Practice Principles”). The FIPPs describe certain basic data subject rights that, by consensus, protect privacy. There are many versions of FIPPs recognized by different governmental agencies and industry groups,[10] but, for the most part, FIPPs are not mandated wholesale under law in the U.S. Still, FIPPs have long been viewed in the privacy community as a rough baseline for consumer rights of transparency and control over the use of personal information.[11] Core principles of consumer privacy protection that run through most versions of FIPPs include:

  • Transparency in notices of privacy practices, including the basis, purpose and uses for collecting personal information (see item 4 below);

  • Rights of access, accuracy, correction and/or deletion of personal information;

  • Limiting the collection and use of personal information to what is reasonably needed to provide a product or service (so-called ‘data minimization’);

  • Rights to opt out of (or opt into) uses of personal information and related commercial uses;

  • Accountability of the entity collecting and using personal information (i.e., the entity monitors and enforces its privacy practices to ensure that they are properly carried out, can be audited, and are enforced with consequences for infractions).

By determining where the company stands, as a matter of policy, on the various component rights set out in the FIPPs, the company can set a baseline for its public-facing privacy disclosures and help set the tone for its internal privacy ‘culture.’

  4. Are the privacy principles and related data practices communicated simply, accurately and transparently?

Of the many ways that startups can commit unforced errors in digital privacy, the most common and avoidable is publishing a misleading privacy policy. The Federal Trade Commission has punished many companies over the years for alleged “deceptive practices” in privacy policies that omitted material information, were vague or ambiguous, made false or misleading promises about data practices, or buried key disclosures at the back of lengthy and densely written policies.[12] In most of these instances the offending companies were not being singled out for failing to make a legally required disclosure but for making voluntary disclosures that were deemed duplicitous and therefore a “deceptive business practice.” The resulting pain, costs and embarrassment to these companies were wholly self-inflicted.

Although U.S. privacy law is a notorious thicket of non-uniform federal, state and local laws and regulations, there is at least one consistent mantra cutting through all of it, at least in digital commerce: keep privacy disclosures transparent, accurate and simple. For digital startups, there is no shortage of guidance and resources on best practices in privacy disclosures. These range from the developer rules published by the large app platforms (e.g., iOS, Android)[13] and digital advertising industry guidelines,[14] to detailed guidance from federal and state regulators.[15]

In creating privacy statements, startups should consider avoiding some common mistakes:

  • Be wary of ‘cut and paste’ syndrome. Rule one for data privacy disclosures is accuracy, i.e., the words in the policy should align with the data practices of the publisher of the policy. This means the policy content should be based on a full understanding of what the company actually does with personal data of consumers, from initial collection to final disposal. While it is tempting to skip the conversation with a privacy lawyer, hop onto a search engine and ‘borrow’ a similar company’s privacy policy, it’s risky to conclude that one size fits all. It would be awkward for a company to have to explain to a regulator (or to an investor) that a misleading statement in its privacy disclosure was simply copied from another website or app.
  • Avoid lengthy documents dense with legalese. Use layered policies that include a baseline statement of privacy principles and practices, coupled with conspicuous, real-time notices (e.g., pop-ups) whenever user consent is required by law or by the company’s policy, or whenever a reasonable user might be surprised that the data would be collected. Examples include pop-up requests for permission to collect location data, to access data in other apps, or to access recording or camera functions on a device (see the sketch after this list).
  • Use simple, short statements and place material terms about data collection, use and sharing prominently and up front, not several pages into the policy. 
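
As a concrete illustration of a real-time, in-context notice, here is a minimal sketch (in TypeScript, for a browser-based app) of gating the collection of precise location data behind an explicit consent prompt. The dialog wording and the function name collectLocationWithConsent are placeholders for this sketch, not any platform’s required flow; a real product would use a properly designed consent dialog and keep a record of the consent given.

```typescript
// Minimal illustration: ask for affirmative, in-context consent at the moment of
// collection, rather than relying only on a buried privacy-policy disclosure.
// Dialog text and function names are placeholders, not a legal standard.

async function collectLocationWithConsent(): Promise<GeolocationPosition | null> {
  // Conspicuous, real-time notice shown right before the data is collected.
  const consented = window.confirm(
    "We use your precise location to show nearby results. Allow location access?"
  );
  if (!consented) {
    return null; // Respect the refusal; do not collect or silently retry.
  }

  // The browser will also show its own geolocation permission prompt.
  return new Promise<GeolocationPosition | null>((resolve) =>
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position),
      () => resolve(null) // Treat denial or error as "no data collected."
    )
  );
}
```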

  5. Has the company assessed its particular security risk profile and implemented a written security policy based on that profile?

One of the consistent themes in U.S. data privacy law is that companies handling personal information of consumers should apply “reasonable” data security measures based on risk assessments. Federal Trade Commission consent decrees in enforcement actions over the last 10 years, including actions directed at early stage companies, have included requirements to maintain reasonable data security policies and practices, in writing, that are informed by periodic risk assessments. What constitutes “reasonable” security measures is, of course, not defined in the FTC Act or FTC regulations. “Reasonable” security in reality is a moving target based on relevant circumstances, a key one of which is the actual data security risk profile of a particular company. If a business should ever need to defend its security practices as “reasonable,” it will need to be able to explain how those practices were reasonable in light of the data security risks of its business and its data practices in particular. That means being able to point to steps the business took to (i) gauge its exposure to known or reasonably anticipated attacks on, or vulnerabilities in, its systems and processes for handling personal and confidential information, and (ii) implement policies that address those risks.

Early stage companies generally do not have the time or expertise to conduct sophisticated security risk assessments or the money to pay for expensive cybersecurity consultants. The good news is that neither the FTC nor the handful of states with minimum data security laws require perfect security or audits by high-ticket security experts to achieve reasonable security. Indeed, most FTC and state enforcement actions involving data security seize on evidence of systemic failures to prepare for known or reasonably anticipated security weaknesses or to respond to clear indicators that a company’s systems or data have been compromised.[16] Startups can mitigate their chances of being deemed lax on security by regulators (and investors) by documenting certain basic steps, including:

  • Monitoring widely-distributed sources of industry knowledge about security risks relevant to company systems and deploying commercially available or freely-distributed tools to address such risks (e.g., guidelines and bulletins published by the SANS Institute and the Open Web Application Security Project);

  • Addressing known security vulnerabilities in the company’s operations and documenting such efforts; 

  • Creating and updating as needed a written information security policy that, at a minimum, provides a high-level statement of the security measures the company applies to its handling of personal information, or which are required of third parties who are given access to company personal information or company systems. The policy should identify applicable information security standards that the company and its service providers follow (e.g., ISO 27001, PCI DSS, HIPAA);

  • Unless there are compelling reasons not to, applying widely adopted security methods in the company’s industry sector, such as securing personal information in transit and at rest (e.g., encryption) and employing multi-factor authentication for remote access to personal information and sensitive company systems (a minimal sketch of field-level encryption at rest follows this list).
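
By way of illustration only, the sketch below shows one way a small engineering team might encrypt a single personal-data field at rest before writing it to storage, using Node’s built-in crypto module with AES-256-GCM. The key handling (a hex-encoded key read from a hypothetical FIELD_ENCRYPTION_KEY environment variable) is deliberately simplified for the sketch; production systems would normally rely on a managed key service and key rotation.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Illustrative only: encrypt one personal-data field before persisting it.
// Key management is simplified; real deployments typically use a KMS/HSM.
const key = Buffer.from(process.env.FIELD_ENCRYPTION_KEY ?? "", "hex"); // 32-byte key expected

export function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // unique nonce for each encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV, auth tag and ciphertext together so the field can be decrypted later.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

export function decryptField(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```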

  6. Has the company committed resources and documented the procedures that enable it to enforce and be accountable for these privacy and security policies?

Even the best written policies do not enforce themselves. This is why the FIPPs, FTC security action consent decrees and, increasingly, data privacy and security laws emphasize the principle of accountability: an organization should institutionalize and enforce its own policies. In practical terms, accountability means at a minimum: (i) having designated persons charged with overseeing, maintaining and enforcing privacy and security policies (i.e., “owning” the responsibility on behalf of the organization) and (ii) having resources and procedures that support and document these accountability functions. The GDPR, for instance, in some cases mandates that organizations designate and support an in-house Data Protection Officer.[17] New York’s Cybersecurity Regulation requires covered entities to designate a Chief Information Security Officer.[18] Most FTC consent orders relating to alleged deficient privacy or data security practices include the requirement of a designated person or office in the organization to oversee its privacy or security programs.[19]

Most startups cannot afford these formalities and titles in order to embed accountability into their nascent privacy and security programs. At the same time, startups do well not to neglect this principle entirely or treat it simply as one line item in an IT person’s job description. Someone, preferably a person with senior management status and relevant subject matter expertise, should “own” the company’s privacy and security function. This person should be able to explain (and defend) the company’s privacy and security practices to investors and strategic partners, advise on privacy and security issues affecting product/services development, and take lead responsibility in building out the company’s overall privacy and security strategy and written policies. Ideally, this role should be reasonably independent and supported by an appropriate budget.

  7. Are the company’s privacy and security policies applied to its third party relationships?

Recent revelations that the now-defunct firm Cambridge Analytica accessed and profiled the Facebook account data of millions of users, without their knowledge or consent, have thrown a harsh spotlight on the privacy risks in how businesses manage the sharing and secondary uses of data collected from or about their customers and users. The data access by Cambridge Analytica wasn’t a classic data breach in which hackers outwit a firm’s security measures and steal data; instead, it was a direct by-product of data-sharing policies (since revised by Facebook) between Facebook and its third party developers and researchers. As a result of this incident, future privacy regulation and enforcement in the U.S. is likely to focus on all manner of ‘third party risk’ to personal information. The new CCPA is a prime example of this trend: the Act imposes strict new requirements on the sale or disclosure of personal information of California residents, coupled with potentially severe fines for violations.

The enhanced regulatory focus on third party risk is not limited to data sharing with developers or other data licensees. Much of today’s data processing, even for startups, is outsourced to third party service providers. Many startups rely heavily on multiple backend cloud processors for services such as data storage, application hosting, payment services, human resources, ad generation, and analytics. All such third party relationships have the potential to introduce risk to the privacy and security of a company’s data and systems.

And the reality is that many of these startup arrangements with third parties are entered into with little due diligence and even less negotiation of terms.

The new crop of privacy and security regulations, both in the U.S. and in Europe, zeroes in on third party risk. The GDPR, for example, includes detailed specifications for mandatory content in contracts for the processing of personal data, as well as exposure to direct liability under the GDPR for data processors.[20]  New York’s recent Cybersecurity Regulation for the financial services industry requires that covered entities have a distinct security program governing their use of third party service providers, including due diligence and contracting requirements.[21]  Emerging companies that aren’t directly covered by either GDPR or the New York Cybersecurity Regulation but expect to do business with customers who are covered should be prepared for the type of due diligence scrutiny and data security contract addenda that such customers are now required to include in their contracts.

Some questions that early stage companies should consider in preparing for scrutiny by corporate customers include:

  • What is the company’s policy toward sharing personal data of customers with developers, advertisers, social media platforms and other third parties? What conditions will be imposed on recipients of the company’s personal data? If the company relies on de-identification or anonymization of personal data when sharing with third parties, what technical standard will be followed and what contractual commitments are obtained from the recipients not to re-identify the data (one common pseudonymization approach is sketched after this list)? Will contracts with recipients include the company’s right to audit the recipient’s compliance with the restrictions on data use?

  • If the company will be using third party service providers to host or process personal data, what due diligence will be done to confirm that the providers can comply with the company’s privacy and security requirements?  If the provider is a large cloud platform (e.g., AWS, Azure, Google Cloud) with ‘take it or leave it’ contracts, what version of the platform’s offerings will best support the company’s particular security requirements?

  • When working with smaller service providers in the cloud and outsourcing markets, what minimum contract terms will the company require to assure that its personal data is secure and will not be misused?
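
For illustration, the sketch below applies one commonly used pseudonymization technique, a keyed hash (HMAC-SHA-256) of direct identifiers, before records leave the company for an analytics or research partner. The field names and the hypothetical PSEUDONYM_SECRET setting are assumptions of this sketch, not a standard prescribed by the CCPA or GDPR, and a keyed hash alone generally does not render data anonymous under those laws, which is why the contractual commitments discussed above still matter.

```typescript
import { createHmac } from "crypto";

// Illustrative pseudonymization: replace direct identifiers with a keyed hash
// before sharing records with a third party. The secret stays with the company,
// so the recipient cannot reverse the mapping; this is pseudonymization,
// not anonymization, and contractual limits on re-identification are still needed.
const PSEUDONYM_SECRET = process.env.PSEUDONYM_SECRET ?? "replace-me"; // hypothetical setting

interface UserEvent {
  email: string;     // direct identifier that should not leave the company
  action: string;
  occurredAt: string;
}

function pseudonymize(event: UserEvent): { userRef: string; action: string; occurredAt: string } {
  const userRef = createHmac("sha256", PSEUDONYM_SECRET)
    .update(event.email.trim().toLowerCase())
    .digest("hex");
  // Only the keyed hash and non-identifying fields are included in the shared record.
  return { userRef, action: event.action, occurredAt: event.occurredAt };
}
```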

 

*****************

The regulatory web for privacy is likely to be denser in the future than it’s been for the previous generation of tech startups. In keeping with the old proverb “a stitch in time saves nine”, the time spent today clarifying a new company’s strategy on personal data will probably be much shorter than the time needed to rework that strategy tomorrow as a public company with a billion users.

Read more about Legal Issues for High-Growth Technology Companies: The Series.

 


[1] “Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens”, The New York Times, March 19, 2018; here; “Uber’s Massive Hack: What We Know”, CNN Money, November 2017; here.

[2] See, e.g., “There’s a New Bill to Regulate Facebook and Google’s Data Collection”, Slate, April 24, 2018; here

[3] Balkin, Jack, Free Speech in the Algorithmic Society: Big Data, Private Governance and New School Speech Regulation, Yale Law School Legal Scholarship Repository, 2018; here.   

[4] “Facebook under fire, but it’s just part of surveillance economy”, The Christian Science Monitor, March 28, 2018; here

[5] New York State Department of Financial Services, 23 NYCRR 500, “Cybersecurity Requirements for Financial Services Companies”; here

[6] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); here.  

[7] GDPR, Article 3.

[8] GDPR, Article 22.

[9] California Consumer Privacy Act of 2018 (AB 375); here

[10] Gellman, Robert, “Fair Information Practices: A Basic History”, April 10, 2017; available at SSRN: here.

[11] Id.

[12] “Enforcing Privacy Promises”, FTC webpage here

[13] “Protecting the User’s Privacy”, Apple Developer, available here;  “Privacy Policy Guidance”, Actions on Google, available here

[14] See, e.g., “DAA Self-Regulatory Principles”, Digital Advertising Alliance, available here.  

[15] See, e.g., “Mobile Privacy Disclosures: Building Trust Through Transparency”, Federal Trade Commission Staff Report, February 2013; here;  “.com Disclosures: How to Make Effective Disclosures in Digital Advertising”, Federal Trade Commission, March 2013; here

[16] “Start With Security: Lessons Learned from FTC Cases”, Federal Trade Commission, June 2015; here

[17] GDPR, Articles 37 – 38.

[18] 23 NYCRR 500.04.

[19] See, e.g., “Privacy and Data Security: Update 2017”, Federal Trade Commission; here.  

[20] GDPR, Article 28.

[21] 23 NYCRR 500.11.
