Content Assessment: The Dark Side of the Force? A New Report on AI-Enabled Cyberattacks from Traficom
Information - 93%
Insight - 94%
Relevance - 92%
Objectivity - 94%
Authority - 92%
Overall - 93% (Excellent)
A short percentage-based assessment of the qualitative benefit of the recent report by the Finnish Transport and Communications Agency (Traficom) on the security threat of AI-enabled cyberattacks.
Editor’s Note: The Finnish Transport and Communications Agency, Traficom, is responsible for ensuring the availability of safe, secure, and reasonably priced transport and communication connections and services in Finland. Traficom also serves as an authority for people and businesses in matters related to licenses, registration, and supervision.
Recently, Traficom published a report by the National Cyber Security Centre Finland (NCSC-FI) and the National Emergency Supply Agency (NESA) titled “The Security Threat of AI-Enabled Cyberattacks.” The report examines the threat of AI-enabled cyberattacks by summarizing current knowledge on the topic, and it may be useful for cybersecurity, information governance, and eDiscovery professionals considering how generative AI may enhance cyberattacks.
Cybersecurity Information
Artificial Intelligence Will Shape The Future of Cyberattacks
Press Announcement, Abstract, and Report
AI Security Innovations Need to Keep Pace with Cyber Attacks
WithSecure Press Announcement
While the use of artificial intelligence (AI) in today’s cyber attacks is limited, a new report warns that this is poised to change in the near future.
The report, co-created by WithSecure™ (formerly known as F-Secure Business), the Finnish Transport and Communications Agency (Traficom), and the Finnish National Emergency Supply Agency (NESA), analyzes current trends and developments in AI, cyber attacks, and areas where the two overlap. It notes that cyber attacks using AI are currently very rare and limited to social engineering applications (such as impersonating an individual), or are used in ways that aren’t directly observable by researchers and analysts (such as data analysis in backend systems).
However, the report highlights that the quantity and quality of advances in AI have made more advanced cyber attacks likely in the foreseeable future.
According to the report, target identification, social engineering, and impersonation are today’s most imminent AI-enabled threats and are expected to evolve further within the next two years in both number and sophistication.
Within the next five years, attackers are likely to develop AI capable of autonomously finding vulnerabilities, planning and executing attack campaigns, using stealth to evade defenses, and collecting/mining information from compromised systems or open-source intelligence.
“Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild. Those techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups,” said WithSecure Intelligence Researcher Andy Patel. “After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape.”
While current defenses can address some of the challenges posed by attackers’ use of AI, the report notes that others require defenders to adapt and evolve. New techniques are needed to counter AI-based phishing that utilizes synthesized content, the spoofing of biometric authentication systems, and other capabilities on the horizon.
The report also touches on the role that non-technical solutions, such as intelligence sharing, resourcing, and security awareness training, play in managing the threat of AI-driven attacks.
“Security isn’t seeing the same level of investment or advancements as many other AI applications, which could eventually lead to attackers gaining an upper hand,” said WithSecure Senior Data Scientist Samuel Marchal. “You have to remember that while legitimate organizations, developers, and researchers follow privacy regulations and local laws, attackers don’t. If policy makers expect the development of safe, reliable, and ethical AI-based technologies, they’ll need to consider how to secure that vision in relation to AI-enabled threats.”
About WithSecure™
WithSecure™, formerly F-Secure Business, is cyber security’s reliable partner. IT service providers, MSSPs and businesses – along with the largest financial institutions, manufacturers, and thousands of the world’s most advanced communications and technology providers – trust us for outcome-based cyber security that protects and enables their operations.
Our AI-driven protection secures endpoints and cloud collaboration, and our intelligent detection and response are powered by experts who identify business risks by proactively hunting for threats and confronting live attacks. Our consultants partner with enterprises and tech challengers to build resilience through evidence-based security advice. With more than 30 years of experience in building technology that meets business objectives, we’ve built our portfolio to grow with our partners through flexible commercial models.
WithSecure™ Corporation was founded in 1988, and is listed on NASDAQ OMX Helsinki Ltd.
Read the original announcement.
Artificial Intelligence Will Shape The Future of Cyberattacks
Traficom (Finnish Transport and Communications Agency) Report Abstract
The topic of AI-enabled cyberattacks surfaced around five years ago with examples of generative AI models able to automate both spear-phishing attacks and vulnerability discovery. Since then, social engineering and impersonation attacks supported by AI have occurred, causing millions of dollars in financial losses. Current rapid progress in AI research, coupled with the numerous new applications it enables, leads us to believe that AI techniques will soon be used to support more of the steps typically used during cyberattacks. This is the reason why the idea of AI-enabled cyberattacks has recently gained increased attention from both academia and industry, and why we are starting to see more research devoted to the study of how AI might be used to enhance cyberattacks.
A study from late 2019 illustrated that over 80% of decision-makers were concerned with AI-enabled cyberattacks and predicted that these types of attacks may go mainstream in the near future. Current AI technologies already support many early stages of a typical attack chain. Advanced social engineering and information gathering techniques are such examples. AI-enabled cyberattacks are already a threat that organizations are unable to cope with. This security threat will only grow as we witness new advances in AI methodology, and as AI expertise becomes more widely available.
This report aims to investigate the security threat of AI-enabled cyberattacks by summarising current knowledge on the topic. AI technology is currently able to enhance only a few attacker tactics, and it is likely used only by advanced threat actors such as nation-state attackers. In the near future, fast-paced AI advances will enhance existing attack techniques and enable a wider range of them through automation, stealth, social engineering, or information gathering. We therefore predict that AI-enabled attacks will become more widespread among less skilled attackers within the next five years. As conventional cyberattacks become obsolete and AI technologies, skills, and tools become more available and affordable, attackers will be increasingly incentivized to adopt AI-enabled cyberattacks.
The cybersecurity industry will have to adapt to cope with the emergence of AI-enabled cyberattacks. For instance, biometric authentication methods may become obsolete because of advanced AI-enabled impersonation techniques. New prevention and detection mechanisms will need to be developed to counter AI-enabled cyberattacks, and more automation and AI technology will need to be used in defense solutions to match the speed, scale, and sophistication of these attacks. This may lead to an asymmetrical fight between attackers, who have unrestricted use of AI technologies, and defenders, who are constrained by upcoming regulation on AI applications.
Complete Report: The Security Threat of AI-Enabled Cyberattacks (PDF)
By Matti Aksela, Samuel Marchal, Andrew Patel, Lina Rosenstedt, and WithSecure
TRAFICOM: The Security Threat of AI-Enabled Cyberattacks (December 12, 2022)

Additional Reading
- Keeping an Eye on AI? European National Strategies on Artificial Intelligence
- Defining Cyber Discovery? A Definition and Framework
Source: ComplexDiscovery