
Content Assessment: Risky Business? Cybersecurity Experts Urge Responsible AI Adoption

Assessment Categories: Information, Insight, Relevance, Objectivity, Authority

Overall Rating: Excellent

A short assessment of the qualitative benefit of the recent industry article from HaystackID highlighting cybersecurity expert commentary on responsible AI adoption.

Editor’s Note: The responsible adoption of artificial intelligence (AI) is an increasingly crucial issue for cybersecurity, information governance, and legal technology professionals, as highlighted in this panel discussion from the recent NetDiligence Cyber Risk Summit. As experts at the Summit underscored, AI holds great promise to enhance threat detection, incident response, and eDiscovery processes by rapidly analyzing massive datasets. However, overreliance on AI without adequate human oversight and validation can introduce significant risks.


Industry Article

Cybersecurity Experts Urge Responsible AI Adoption, Not Overreliance

Shared with permission from HaystackID Staff

HaystackID’s Michael Sarlo Discusses AI Promise and Risks

At the recent NetDiligence Cyber Risk Summit, HaystackID’s Chief Innovation Officer and President of Global Investigations & Cyber Incident Response Services, Michael Sarlo, shared his expertise on a panel exploring artificial intelligence (AI) in cybersecurity.

Moderated by Risk Strategies’ Allen Blount, the session “AI in Cybersecurity: Efficiency, Reliability, Legal Oversight & Responsibility in Incident Response” covered the potential benefits and risks of deploying AI for security operations.

Sarlo explained that HaystackID sees AI as a “force multiplier,” not an outright replacement for humans. AI speeds up threat hunting, insider risk detection, forensic investigations, and document review for HaystackID clients. However, extensive vetting and oversight are crucial, as AI lacks human discernment.

Fellow panelist Priya Kunthasami of Eviden agreed that AI should enhance staff, not replace them. She described using AI for threat hunting across datasets too massive for humans to process quickly. But, like Sarlo, she emphasized that constant vetting of AI is vital, as threat actors exploit the same tools to escalate attacks rapidly.

Jena Valdetero of Greenberg Traurig raised legal issues stemming from AI’s lack of human judgment, noting that AI is indirectly regulated by a number of state and international data privacy laws when it processes personal data. Those laws require human oversight to avoid compliance problems if AI improperly affects individuals. She added that the law generally lags behind AI adoption in critical areas where personal data is not being processed.

The panelists agreed that regulations enacted before the rise of AI often struggle to address its high-speed data processing capabilities. Per Valdetero, Europe’s proposed AI Act could spur US laws mandating greater AI transparency and individual rights around profiling or automated decisions affecting them. External auditing of AI systems for bias may also increase, given AI’s growing role.

The experts concluded that while AI is efficient, over-relying on it without human checks is risky. Sarlo summarized that balancing AI’s potential with experienced professionals is vital for accountable and ethical implementation. HaystackID will leverage AI carefully to augment security as information volumes grow exponentially, but human expertise remains irreplaceable, underscoring the need to validate where AI can best complement an organization’s unique needs.

Elaborating on HaystackID’s approach, Sarlo explained that the company is cautious and avoids broad AI proclamations due to defensibility concerns. HaystackID focuses on repeatable AI processes that produce auditable outcomes, an essential consideration for Sarlo given his background in digital forensics and legal expert witness work.

He pointed to effective insider threat and behavioral analytics programs using AI but cautioned about the sheer data volumes involved. Large organizations can produce terabytes of log data daily, making AI at scale costly in both storage and compute. Selecting key high-risk data points to monitor is therefore crucial for cost-effective deployment.
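To make that triage point concrete, below is a minimal sketch of the idea: filter a raw event stream down to a handful of high-risk event types before paying for AI-driven analysis or long-term storage. The event types and risk weights are hypothetical, invented purely for illustration, and do not represent HaystackID’s actual tooling.

```python
# Hypothetical illustration: triage raw security logs down to a small set of
# high-risk event types before paying for AI-driven analysis or retention.
# Event names and risk weights are invented for this sketch.

HIGH_RISK_EVENTS = {
    "privilege_escalation": 1.0,
    "mass_file_download": 0.9,
    "off_hours_login": 0.6,
    "usb_device_mount": 0.5,
}

def triage(events, threshold=0.5):
    """Yield only events whose type carries a risk weight at or above threshold."""
    for event in events:
        weight = HIGH_RISK_EVENTS.get(event.get("type"), 0.0)
        if weight >= threshold:
            yield {**event, "risk_weight": weight}

sample_stream = [
    {"type": "heartbeat", "user": "svc-backup"},
    {"type": "off_hours_login", "user": "jdoe"},
    {"type": "mass_file_download", "user": "jdoe"},
]

for hit in triage(sample_stream):
    print(hit)  # only the two high-risk events survive triage
```

The design choice this illustrates is simply that filtering upstream, before analytics, is what keeps storage and compute costs proportional to risk rather than to raw volume.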

Kunthasami highlighted AI’s role in incident response, where AI-enabled endpoint detection and response (EDR) tools help swiftly contain threats and collect forensic data. But constant retuning is essential as rules and data change. And while carriers increasingly seek EDR for cyber insurance, premiums reflect the expense of proper implementation across an organization’s entire environment.
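As a simplified illustration of why that retuning matters, the toy example below (all numbers synthetic, not any vendor’s EDR logic) fits a static anomaly threshold to an old activity baseline and shows it flagging routine behavior once normal activity shifts:

```python
import statistics

# Toy illustration of detection-rule drift: a static threshold fit to an old
# baseline flags routine behavior once normal activity levels shift upward.
# All numbers are synthetic.

def fit_threshold(baseline, k=3.0):
    """Flag values more than k standard deviations above the baseline mean."""
    return statistics.mean(baseline) + k * statistics.stdev(baseline)

old_baseline = [12, 15, 11, 14, 13, 12]   # e.g., process launches/hour, last quarter
threshold = fit_threshold(old_baseline)

new_normal = [28, 31, 27, 30]             # activity after a routine tooling rollout
false_alarms = [x for x in new_normal if x > threshold]
print(f"threshold={threshold:.1f}, false alarms={false_alarms}")
# Every "new normal" value trips the stale rule, hence the need for periodic retuning.
```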

Sarlo added that AI document review also helps responders notify affected parties faster after breaches, as legally required. Valdetero, who counsels clients on privacy and cybersecurity, echoed this benefit, noting that most extortion cases involve the theft of huge volumes of data that must be reviewed to identify personal information for breach-reporting purposes. The sheer volume of stolen data in these attacks makes timely, accurate notifications challenging. While AI can accelerate review, vendors must balance speed with precision when leveraging AI to determine whether personal information is present in a data set.
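As a simplified sketch of the speed-versus-precision tradeoff Valdetero describes, a first-pass pattern scan (the regexes and documents below are hypothetical) can flag candidate personal information for human reviewers to confirm, rather than making final determinations on its own:

```python
import re

# Hypothetical first-pass PII scan for breach-notification review.
# The patterns are deliberately simple; real review workflows pair broader
# detection models with human validation, since recall and precision both matter.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[.-]\d{3}[.-]\d{4}\b"),
}

def flag_pii(doc_id, text):
    """Return (doc_id, hit_types) so a human reviewer can prioritize documents."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
    return doc_id, hits

docs = {
    "doc-001": "Invoice for ACME Corp, contact billing@acme.example.",
    "doc-002": "Employee record: SSN 123-45-6789, phone 555-867-5309.",
}

for doc_id, text in docs.items():
    print(flag_pii(doc_id, text))
```

In practice, review teams layer richer models on top of such a first pass and route every hit to human validation, since both missed identifiers and false positives carry real cost in breach reporting.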

The panel also touched on the rise of large language models like ChatGPT that can generate human-like text. Sarlo noted that HaystackID explores these AI innovations carefully to ensure ethical and defensible use. Valdetero pointed out such models’ lack of human judgment and warned that attack tools like “WormGPT” help threat actors bypass countermeasures by exploiting the same publicly available AI. The experts agreed that while large language models hold promise to enhance security, relying on them without proper oversight poses risks. Vetting performance, training models responsibly, and pairing them with human expertise are key to realizing benefits while minimizing harms.

Overall, the panel underscored AI’s immense potential to enhance security and response effectiveness. However, responsible adoption requires understanding one’s data, risks, and use cases before deploying AI, then monitoring closely and retuning regularly afterward. With the right cautions and checks, organizations can realize AI’s power to augment human-led security while guarding against overautomation or misuse.

Assisted by GAI and LLM Technologies

Source: ComplexDiscovery

 


ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of both new and revised content in published posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.