Follow the Leader: Will Congressional and Corporate Push for Federal Privacy Regulations Leave Some Technology Giants in the Dust?

Patrick Law Group, LLC

On October 24, 2018, Apple CEO Tim Cook, one of the keynote speakers at the International Conference of Data Protection and Privacy Commissioners, threw down the gauntlet when he assured an audience of data protection professionals that Apple fully supports a “GDPR-like” federal data privacy law in the United States.  Just one week later, Senator Ron Wyden of Oregon introduced a discussion draft of a privacy bill, the Consumer Data Protection Act, that would impose steep fines, and possible incarceration, on top executives of companies that violate the law.  Cook’s position and Wyden’s proposed bill stand in stark contrast to the positions of several of Apple’s technology industry competitors.  Indeed, those competitors are reportedly already seeking to unravel California’s recent data protection legislation.  In the wake of the congressional and corporate push for more restrictive privacy regulations in an innovative industry, will other giants on the technology landscape be left in the dust?

Tim Cook’s proclamation came as no surprise to close followers of Apple’s culture and approach to data protection; yet, Cook’s message at this conference was more direct than ever, as he cited his competitors’ efforts to create “platforms and algorithms” to “weaponize personal data.”  Cook’s argument for increased data protection regulation then took an apocalyptic turn when he warned, “Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm…we shouldn’t sugarcoat the consequences.  This is surveillance.”

Cook’s proposed regulation included several of the key hallmarks of GDPR and its progeny:

  1. Minimization of personal data collection from technology users;
  2. Communication to technology users as to the “what and why”: what data is being collected and why;
  3. Users’ rights to obtain their data, as well as to correct and delete their data; and
  4. Security of user data.

Cook argued that not only were such rights fundamental to the user experience and adoption of emerging technologies, but that such rights also engendered trust between technology organizations and the consumers of such technologies.

Cook’s (and Apple’s) approach to increased data protection has garnered praise from those members of Congress who routinely champion consumer rights in this space, including Senator Wyden.  Wyden’s discussion draft identified the federal government’s failures to protect data:

“(1) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities;

(2) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data;

(3) Consumers have no effective way to control companies’ use and sharing of their data.”

As consumer protections of the nature contemplated in data protection laws and regulations typically fall within the province of the Federal Trade Commission (FTC), Senator Wyden went on to argue that the FTC lacks the power and ammunition to effectively combat threats to consumer data privacy:

“(1) The FTC cannot fine first-time corporate offenders.  Fines for subsequent violators of the laws are tiny and not a credible deterrent.

(2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money.

(3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator.

(4) The FTC does not have enough staff, especially skilled technology experts…”

Senator Wyden posited that Congress could empower the FTC to:

“(1) Establish minimum privacy and cybersecurity standards.

(2) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives.

(3) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information.  It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized.

(4) Give consumers a way to review their personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it.

(5) Hire 175 more staff to police the largely unregulated market for private data.

(6) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy and security.”

Any combination of Cook’s proposal and Senator Wyden’s bill would change the landscape of privacy in the United States, especially among the technology giants of Silicon Valley.  The push for increased data protection at the federal level from technology companies other than Apple has largely been an effort to avoid and supersede laws passed in states such as California and Massachusetts.  In the absence of a superseding federal law, those states’ laws become the de facto law of the land for companies doing business nationally.  Yet, technology companies must walk a fine line if they are to avoid a publicity and consumer nightmare: if they succeed in their efforts to have a federal law implemented in place of state laws, consumer sentiment would dictate that the federal law be something akin to those proposed by Cook and Senator Wyden, and not merely lip service.  Otherwise, they could find themselves in the headlines for all the wrong reasons.

With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that AI’s myriad uses across their platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams: Who are the members?  What standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) likewise seeks to balance the innovative potential of AI against the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse group drawn from the related fields of computer science and engineering, privacy and data protection, and civil liberties.  A sampling of its members includes the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from the law enforcement officials who employ such technologies to the experts who help create and shape the legislation governing their use.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is its tendency toward racial and gender bias, in the form of higher error rates for both women and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.

Core Values

In addition to its own commitment to diversity, DeepMind has articulated key principles that reflect its owner’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.” 
  • Collaboration and inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI spans an even broader canvas than Axon’s.  In furtherance of its key principles, DeepMind seeks to answer several key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Six pillars form the basis of this partnership:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems, themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry has yet to provide particulars on how the guidelines will be put into practice.  Consumers are likely to maintain a healthy skepticism until more concrete guardrails are in place that offer compelling evidence of the good, rather than the bad and the ugly.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Patrick Law Group, LLC | Attorney Advertising
