This Week in eDiscovery: Discovery Requests for Private Email and Social Media Accounts, Plus Microsoft Unveils “Air-Gapped” LLM


Every week, the Array team reviews the latest news and analysis about the evolving field of eDiscovery to bring you the topics and trends you need to know. This week’s post covers the week of May 13-19. Here’s what’s happening.

Courts Rule for Private Email and Social Media Discovery

In two separate recent rulings, courts granted plaintiffs’ requests for discovery related to private email accounts and private social media account usernames. In both cases, while issuing limited rulings, the courts rejected the defendants’ arguments that private email or social media account information was not in their possession or control.

In the first case, Clark v. Council of Unit Owners of the 100 Harborview Drive Condominium Association, covered by Michael Berman on the EDRM Blog, the plaintiffs alleged retaliation by the condo association. After the plaintiffs said the defendant hadn’t produced any responsive email, and the court replied that the plaintiffs’ requests were “plainly overbroad,” plaintiffs asked the defendant to search the private email account of the former condo board president.

While the former board president stated, “I hereby confirm that I did not send or receive emails on my personal email account relating to activities of the Board,” plaintiffs pointed to emails found on the condo association’s servers that included messages sent to board members at their private email addresses. The court rejected the defendant’s argument that private email servers were outside its control and wrote that a company officer cannot avoid discovery by using personal email for work purposes.

The court wrote that the combination of the emails directed to the private accounts of several board members provided cause for a limited search, but only directed at current board members: “Defendant, at a minimum, has an obligation to ask its current Board Members for emails that may be relevant to the case, as the Court ordered.”

The second case, covered by eDiscovery Today, is In re OpenAI ChatGPT Litigation, in which authors allege ChatGPT was trained on their copyrighted works without permission. Plaintiffs requested that defendants ask current employees or board members listed in response to any interrogatory “as to whether any such person has used any of their personal social media accounts to discuss anything relevant to this litigation, and, if so, Defendants will produce those individuals’ social media usernames.”

The defendants responded that they do not have the requested information in their “possession or custody” because “the company does not systematically collect from its employees and Board members information about personal social media accounts, or monitor those accounts in the ordinary course of business.”

The court, however, ordered defendants to “promptly investigate and inquire from current directors and employees whether they have engaged in any discussions, the contents of which might be relevant to the claims or defenses involved in this case, using their personal social media accounts,” finding that burden to be minimal. The defendants were ordered to certify whether all directors and employees stated they had engaged in no such discussion, or produce the usernames of any account that engaged in relevant discussion.

These cases are a good reminder for organizations to maintain an email usage policy, meaning the rules and regulations an organization requires its users to follow when conducting business over email, and to revisit that policy annually. The policy should include a provision prohibiting the use of personal email to conduct company business, which promotes professionalism, efficiency, and security.

Microsoft Introduces “Air-Gapped” Gen AI Tool

While Large Language Models like ChatGPT can analyze swaths of data in a relatively short amount of time, security can be a concern as data is fed over the internet and to third parties. Now comes an innovation that could safely open the door for AI to analyze sensitive information.

Microsoft this month announced that it has developed an “air-gapped” generative AI model. An air-gapped system is physically segregated and incapable of connecting, wirelessly or physically, with other computers or network devices. Air gaps protect critical computer systems and data from potential attacks, ranging from malware and ransomware to keyloggers and other attacks by malicious actors. Bloomberg reports it is the “first time a major large language model has operated fully separated from the internet.”

While the technology was developed for U.S. intelligence services, the added layer of security could prove beneficial in legal and other applications where sensitive data is handled. According to Bloomberg, the air-gapped Microsoft AI tool can read and analyze content without an internet connection and won’t learn from, or be trained on, the sensitive information it is given to analyze.

