
2025 Cybersecurity Predictions Dominated by AI
When it comes to cybersecurity in 2025, many analysts and experts are focusing on artificial intelligence.
Artificial intelligence will be used by both adversaries and defenders, but attackers will benefit more from it, says Willi Leichter, chief marketing officer of AppSOC, an application security and vulnerability management services provider in San Jose, California.
“We know that AI will be increasingly used by both sides of the cyber war,” he told TechNewsWorld. “However, attackers will still be less constrained as they are less concerned about the accuracy, ethics and unintended consequences of AI. Techniques such as personalized phishing and finding weak points in networks will benefit from the use of AI.”
“While AI has enormous defensive potential, there are more limitations—both legal and practical—that are slowing adoption,” he said.
Chris Hauk, consumer privacy champion at Pixel Privacy, a publisher of online consumer security and privacy guides, predicts that 2025 will be the year of AI vs. AI, as the good guys use AI to defend against AI-powered cyberattacks.
“This will likely be a year of constant battles as both sides use information gained from previous attacks to mount new attacks and new defenses,” he told TechNewsWorld.
Reducing security risks associated with AI
Leichter also predicted that cybercriminals will begin to attack artificial intelligence systems more often. “AI technology significantly expands the attack surface, with rapidly emerging threats to models, data sets, and machine learning systems,” he explained. “Additionally, when AI applications are rushed from lab to production, the full security impact will not be understood until the inevitable breaches occur.”
Carl Holmquist, founder and CEO of Lastwall, a Honolulu-based identity security company, agreed. “The uncontrolled mass adoption of artificial intelligence tools, often implemented without sound security foundations, will have serious consequences in 2025,” he told TechNewsWorld.
“Without adequate privacy measures and security systems in place, these systems will become prime targets for hacking and manipulation,” he said. “This Wild West approach to AI deployment will leave data and decision-making systems dangerously vulnerable, pushing organizations to urgently prioritize core security controls, transparent AI frameworks, and ongoing monitoring to mitigate these growing risks.”
Leichter also said that in 2025, security teams will have to take more responsibility for the security of artificial intelligence systems.
“It sounds obvious, but in many organizations, early AI projects were driven by data scientists and business specialists who often bypassed traditional application security processes,” he said. “Security teams will be fighting a losing battle if they try to block or slow down AI initiatives, but they will have to bring rogue AI projects under the umbrella of security and compliance.”
Leichter also noted that in 2025, AI will expand the attack surface for attackers targeting software supply chains. “We have already seen supply chains become a major attack vector as complex software stacks rely heavily on third-party and open-source code,” he said. “The crazy proliferation of AI makes this target even bigger, with new, complex attack vectors on data sets and models.”
“Understanding model provenance and maintaining the integrity of changing data sets is a complex problem, and there is currently no real way for an AI model to unlearn toxic data,” he added.
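Dataset integrity, at least, can be partially addressed with ordinary supply-chain hygiene. The sketch below illustrates one such control, assuming a hypothetical setup in which the data publisher distributes a JSON manifest of SHA-256 digests over a trusted channel, separate from the data itself:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large dataset files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_path: str) -> bool:
    """Compare each dataset file against the publisher's pinned digest.

    Assumes a hypothetical manifest of the form {"file.csv": "<hex digest>", ...}.
    A mismatch means the file changed after the digests were published,
    whether through corruption or deliberate poisoning.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    clean = True
    for name, expected in manifest.items():
        actual = sha256_of(Path(data_dir) / name)
        if actual != expected:
            print(f"TAMPERED: {name} (expected {expected[:12]}..., got {actual[:12]}...)")
            clean = False
    return clean
```

Digest pinning can prove that data has not changed since publication, but it cannot prove the original data was clean, which is exactly the provenance gap Leichter describes.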
Data poisoning threats to artificial intelligence models
Michael Lieberman, CTO and co-founder of Kusari, a software supply chain security company in Ridgefield, Connecticut, also sees the poisoning of large language models as a major development in 2025. “Data poisoning attacks aimed at manipulating LLMs will become more common, although this method is likely more resource-intensive compared to simpler tactics such as the distribution of malicious open LLMs,” he told TechNewsWorld.
“Most organizations don’t train their own models,” he explained. “Instead, they rely on pre-trained models, often available for free. The lack of transparency regarding the origin of these models makes it easy for attackers to introduce malicious models, as evidenced by the Hugging Face malware incident.” In that incident, discovered in early 2024, approximately 100 models containing hidden backdoors capable of executing arbitrary code on users’ machines were found uploaded to the Hugging Face platform.
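The backdoors in that incident reportedly abused Python’s pickle serialization, which can execute arbitrary code during deserialization, so merely loading a booby-trapped checkpoint compromises the machine. One widely used mitigation, sketched below with placeholder names, is to prefer weight-only formats such as safetensors:

```python
# pip install safetensors torch
from safetensors.torch import load_file

# safetensors parses only raw tensor data and metadata. Unlike calling
# torch.load() on a pickle-based checkpoint, it cannot run
# attacker-supplied code at load time.
state_dict = load_file("downloaded_model.safetensors")  # placeholder path

# The weights are then loaded into a locally defined architecture, so the
# only code that executes is your own:
# model = MyModel()                  # hypothetical local model class
# model.load_state_dict(state_dict)
```

Safe loading only controls how a model’s bytes are read; it says nothing about what the model learned, which is why provenance and data poisoning remain separate problems.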
“Future data poisoning efforts will likely target big players like OpenAI, Meta and Google, which train their models on large data sets, making such attacks difficult to detect,” Lieberman predicts.
“In 2025, attackers will likely outpace defenders,” he added. “Attackers are financially motivated, while defenders often struggle to secure adequate budgets since security is not typically seen as a revenue driver. It may take a major breach of the AI supply chain—like the SolarWinds Sunburst incident—to motivate the industry to take the threat seriously.”
Thanks to AI, 2025 will see more attackers launching more sophisticated attacks, explained Justin Blackburn, a senior cloud threat detection engineer at AppOmni, a SaaS security management software company in San Mateo, California.
“Additionally, the emergence of AI-powered bots will allow attackers to carry out large-scale attacks with minimal effort,” he told TechNewsWorld. “Armed with these AI-powered tools, even less capable attackers will be able to gain unauthorized access to sensitive data and disrupt services at a scale previously seen only from more sophisticated, well-funded attackers.”
Script kiddies grow up
In 2025, the emergence of agentic AI—AI that can make independent decisions, adapt to its environment, and take action without direct human intervention—will exacerbate the challenges for defenders as well. “Advances in artificial intelligence are expected to enable non-state actors to develop autonomous cyber weapons,” said Jason Pittman, an assistant professor in the School of Cybersecurity and Information Technology at the University of Maryland Global Campus in Adelphi, Maryland.
“Agentic AI acts autonomously and purposefully,” he told TechNewsWorld. “Such systems can use advanced algorithms to identify vulnerabilities, penetrate systems, and evolve their tactics in real time without human intervention.”
“These features distinguish it from other artificial intelligence systems, which rely on predefined instructions and require human intervention,” he explained.
“More troubling still, like the Morris worm in decades past, the release of agentic cyberweapons could start out as an accident. That is because the availability of advanced artificial intelligence tools and the proliferation of open-source machine learning frameworks are lowering the barrier to developing sophisticated cyber weapons. Once built, the powerful autonomy of agentic AI could easily let it slip past security measures.”
As harmful as AI can be in the hands of attackers, it can also help better protect data such as personally identifiable information (PII). “After analyzing more than six million Google Drive files, we found that 40% of them contained personal data that puts businesses at risk of a data breach,” said Rich Vibert, co-founder and CEO of Metomic, a data privacy platform based in London.
“As we enter 2025, we will see more companies prioritizing automated data classification techniques to reduce the amount of sensitive information accidentally stored in public files and collaboration workspaces in SaaS and cloud environments,” he continued.
“Companies will increasingly use artificial intelligence-based tools that can automatically identify, flag and protect sensitive information,” he said. “This shift will enable companies to cope with the enormous volumes of data generated daily, ensuring sensitive data is always protected and unnecessary data exposure is minimized.”
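The production tools Vibert describes are ML-driven, but the workflow is easy to see in a deliberately simple, rule-based sketch that walks a shared folder, matches a few common PII patterns, and flags files for review. All patterns and paths here are illustrative, and real classifiers cover far more data types:

```python
import re
from pathlib import Path

# Illustrative patterns only; production classifiers pair rules like these
# with ML models and cover many more categories of sensitive data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_file(path: Path) -> dict:
    """Count how many times each PII pattern appears in one text file."""
    text = path.read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PII_PATTERNS.items()}

def scan_tree(root: str) -> None:
    """Print every file under `root` that contains suspected PII."""
    for path in Path(root).rglob("*.txt"):
        hits = {name: n for name, n in scan_file(path).items() if n}
        if hits:
            print(f"FLAGGED {path}: {hits}")

if __name__ == "__main__":
    scan_tree("./drive_export")  # placeholder export directory
```

Even a crude pass like this surfaces accidentally stored sensitive files; the AI-based tools Vibert predicts do the same triage with far better coverage and accuracy.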
However, 2025 could also usher in a wave of disappointment among security professionals as inflated expectations for AI collide with reality. “CISOs will deprioritize generative AI use by 10% due to a lack of quantifiable value,” Cody Scott, a senior analyst at Forrester Research, a market research company headquartered in Cambridge, Massachusetts, wrote in a company blog.
“According to Forrester’s 2024 data, 35% of global CISOs and CIOs consider exploring and deploying generative AI use cases to improve employee productivity a top priority,” he noted. “The security market has been quick to tout the technology’s expected productivity benefits, but the lack of practical results is fueling disillusionment.”
“The idea of an autonomous security operations center powered by generative AI has generated a lot of hype, but it is very far from reality,” he continued. “In 2025, this trend will persist, and security professionals will sink deeper into frustration as issues such as inadequate budgets and unrealized AI benefits reduce the number of security-focused generative AI deployments.”