Cybersecurity Experts Urge Responsible AI Adoption, Not Overreliance

Editor’s Note: This article shares valuable insights into artificial intelligence’s (AI) evolving role in cybersecurity and incident response, drawn from an expert panel discussion. As cybersecurity, information governance, and eDiscovery professionals adopt AI technologies to help manage expanding data volumes, these perspectives highlight important considerations around responsible and accountable AI implementation. While AI offers immense potential to automate repetitive tasks and quickly derive insights from massive datasets, relying on AI without transparency or human oversight raises significant compliance, ethical, and legal risks. By illuminating both these concerns and AI’s advantages, the panel provided crucial guidance for professionals seeking to apply AI prudently to enhance security, eDiscovery, and data analysis while proactively mitigating the dangers of over-automation. With careful planning and validation, organizations can realize AI’s benefits while heeding these experts’ warnings about the need for human judgment in training, auditing, and deploying intelligent systems. These insights help cybersecurity, information governance, and eDiscovery practitioners make more informed decisions as they evaluate integrating AI capabilities into their workflows.

By HaystackID Staff*

HaystackID’s Michael Sarlo Discusses AI Promise and Risks

At the recent NetDiligence Cyber Risk Summit, HaystackID’s Chief Innovation Officer and President of Global Investigations & Cyber Incident Response Services, Michael Sarlo, shared his expertise on a panel exploring artificial intelligence (AI) in cybersecurity.

Moderated by Risk Strategies’ Allen Blount, the session “AI in Cybersecurity: Efficiency, Reliability, Legal Oversight & Responsibility in Incident Response” covered the potential benefits and risks of deploying AI for security operations.

Sarlo explained that HaystackID sees AI as a “force multiplier,” not an outright replacement for humans. AI speeds up threat hunting, insider risk detection, forensic investigations, and document review for HaystackID clients. However, extensive vetting and oversight are crucial, as AI lacks human discernment.

Fellow panelist Priya Kunthasami of Eviden agreed that AI should augment staff, not replace them. She described using AI in threat hunting to analyze datasets too massive for humans to process quickly. But like Sarlo, she emphasized that constant vetting of AI is vital, as threat actors exploit the same tools to escalate attacks rapidly.

Jena Valdetero of Greenberg Traurig raised legal issues around AI’s lack of human judgment, noting that AI is indirectly regulated by a number of state and international data privacy laws when it processes personal data. Those laws require human oversight to avoid compliance problems if AI improperly affects individuals. She added that laws generally lag behind AI’s adoption in critical areas where personal data is not being processed.

The panelists agreed that regulations enacted before AI’s rise often struggle to address today’s high-speed data processing capabilities. Per Valdetero, Europe’s proposed AI Act could spur US laws mandating greater AI transparency and individual rights around profiling or automated decisions affecting them. External auditing of AI systems for bias may also increase, given AI’s growing role.

The experts concluded that while AI is efficient, over-relying on it without human checks is risky. Sarlo summarized that balancing AI’s potential with experienced professionals is vital for accountable and ethical implementation. HaystackID will leverage AI carefully to augment security as information volumes grow exponentially, but human expertise remains irreplaceable, underscoring the need to validate where AI can best complement an organization’s unique needs.

Elaborating on HaystackID’s approach, Sarlo explained that the company is cautious and avoids broad AI proclamations due to defensibility concerns. HaystackID focuses on repeatable AI processes that produce auditable outcomes, an essential consideration for Sarlo given his background in digital forensics and legal expert witness work.

He pointed to effective insider threat and behavioral analytics programs built on AI but cautioned about the sheer data volumes involved. Large organizations can produce terabytes of log data daily, making the storage and computation needed to leverage AI at scale enormously costly. Selecting key high-risk data points to monitor is therefore crucial for cost-effective deployment.

Kunthasami highlighted AI’s role in incident response, where AI-enabled endpoint detection and response (EDR) tools help swiftly contain threats and collect forensic data. But constant retuning is essential as rules and data change. And while carriers increasingly seek EDR for cyber insurance, premiums reflect the expense of proper implementation across an organization’s entire environment.

Sarlo added that AI document review also helps responders notify affected parties faster after breaches, as legally required. Valdetero, a privacy and cybersecurity attorney, echoed this benefit, noting that most extortion cases involve theft of huge volumes of data that must be reviewed to identify personal information for breach reporting purposes. The sheer volume of stolen data in these attacks makes timely, accurate notifications challenging. While AI can accelerate review, vendors must balance speed with precision when using AI to determine whether personal information is present in a dataset.

The panel also touched on the rise of large language models like ChatGPT that can generate human-like text. Sarlo noted that HaystackID explores these AI innovations carefully to ensure ethical and defensible use. Valdetero pointed out such models’ lack of human judgment and warned that attack tools like “WormGPT” help threat actors bypass countermeasures by exploiting the same publicly available AI. The experts agreed that while large language models hold promise to enhance security, relying on them without proper oversight poses risks. Vetting performance, training models responsibly, and pairing them with human expertise are key to realizing benefits while minimizing harms.

Overall, the panel underscored AI’s immense potential to enhance security and response effectiveness. However, responsible adoption requires understanding one’s data, risks, and use cases before deploying AI, then monitoring closely and retuning regularly afterward. With the right cautions and checks, organizations can realize AI’s power to augment human-led security while guarding against over-automation or misuse.

*Assisted by GAI and LLM technologies.
