CrowdStrike Survey Highlights Security Challenges in AI Adoption
December 17, 2024

Do the security benefits of generative artificial intelligence outweigh its harms? Only 39% of security professionals say the rewards exceed the risks, according to a new report from CrowdStrike.

In 2024, CrowdStrike surveyed 1,022 security researchers and practitioners from the United States, Asia Pacific, EMEA, and other regions. The results show that cyber professionals are deeply concerned about the challenges that come with artificial intelligence. Although 64% of respondents have purchased or are researching generative AI tools, most remain cautious: 32% are still exploring these tools, while only 6% are actively using them.

What do security researchers hope to gain from generative AI?

According to the report:

  • The top motivation for adopting generative AI is not to address a skills shortage or satisfy a leadership mandate, but to improve the ability to respond to and defend against cyberattacks.
  • General-purpose AI tools don’t necessarily appeal to cybersecurity professionals; instead, they want generative AI paired with security expertise.
  • 40% of respondents said the rewards and risks of generative AI are “comparable,” while 39% said the rewards outweigh the risks and 26% said the risks outweigh the rewards.

“Security teams want to deploy GenAI as part of a platform to get more value from existing tools, enhance the analyst experience, accelerate onboarding and eliminate the complexity of integrating new point solutions,” the report states.

Measuring ROI has been an ongoing challenge when adopting generative AI products. CrowdStrike found that quantifying ROI was the top economic concern among respondents. The next two biggest concerns are licensing costs for AI tools and unpredictable or confusing pricing models.

CrowdStrike divides methods for evaluating AI return on investment into four categories, ranked in order of importance:

  • Cost optimization through platform consolidation and more efficient use of security tools (31%).
  • Fewer security incidents (30%).
  • Less time spent managing security tools (26%).
  • Shorter training cycles and associated costs (13%).

CrowdStrike said adding AI to existing platforms rather than purchasing standalone AI products can “realize incremental savings associated with broader platform integration efforts.”

See: A ransomware group claimed responsibility for a late November cyberattack that disrupted operations at Starbucks and other organizations.

Will generative AI create more security problems than it solves?

Notably, generative AI itself needs to be protected. CrowdStrike’s survey found that security professionals are most concerned about data exposure to the large language models (LLMs) behind AI products and attacks launched against generative AI tools.

Other concerns include:

  • Generative AI tools lacking guardrails or controls.
  • AI hallucinations.
  • Insufficient public policy regulating the use of generative AI.

Nearly all (roughly nine in 10) respondents said their organizations have implemented new security policies, or will develop them within the next year, to govern the use of generative AI.

How organizations can use artificial intelligence to defend against cyber threats

Generative AI can be used for brainstorming, research, or analysis, but its output often needs to be double-checked. Generative AI can pull data from disparate sources into a single window in a variety of formats, shortening the time needed to research an incident. Many automated security platforms offer generative AI assistants, such as Microsoft Security Copilot.
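
As a rough illustration of that consolidation step, the sketch below gathers events from two hypothetical sources and assembles a single prompt for whichever GenAI assistant a platform provides. The fetch functions and the final assistant hand-off are assumptions for illustration, not a real product API.

```python
from datetime import datetime, timezone

# Hypothetical stand-ins for real data sources (SIEM, EDR); in practice
# each would be an API call to the corresponding security product.
def fetch_siem_alerts(incident_id):
    return [{"source": "siem", "msg": "Multiple failed logins for svc-backup"}]

def fetch_edr_detections(incident_id):
    return [{"source": "edr", "msg": "Suspicious PowerShell spawned by winword.exe"}]

def build_incident_prompt(incident_id):
    """Consolidate events from disparate tools into one prompt so a
    GenAI assistant can summarize the incident in a single view."""
    events = fetch_siem_alerts(incident_id) + fetch_edr_detections(incident_id)
    header = f"Incident {incident_id} as of {datetime.now(timezone.utc).isoformat()}"
    lines = "\n".join(f"[{e['source']}] {e['msg']}" for e in events)
    return (header + "\nSummarize the likely attack chain and suggest next "
            "triage steps:\n" + lines)

# The assembled prompt would then go to whichever assistant the platform
# provides (e.g., a security copilot-style chat endpoint).
print(build_incident_prompt("INC-2024-1217"))
```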

GenAI can help defend against cyber threats through the following (a minimal phishing-detection sketch follows the list):

  • Threat detection and analysis.
  • Automated incident response.
  • Phishing detection.
  • Enhanced security analysis.
  • Comprehensive information for training.
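
As referenced above, here is a minimal phishing-detection sketch. It pairs a cheap lexical pre-filter with the prompt that would be handed to a GenAI classifier; the patterns and the stubbed model call are illustrative assumptions, since the actual endpoint depends on the platform in use.

```python
import re

# Illustrative lexical indicators; a real deployment would use a trained
# model or the platform's own detections, not this short list.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)? action required",
    r"click (here|the link below)",
]

def phishing_signals(subject, body):
    """Cheap pre-filter: flagged messages get escalated to the GenAI classifier."""
    text = f"{subject}\n{body}".lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]

def classification_prompt(subject, body):
    """Prompt handed to a GenAI model; the model call itself is omitted
    because the endpoint depends on the platform in use."""
    return ("Classify this email as PHISHING or BENIGN and list the "
            f"indicators you relied on.\nSubject: {subject}\nBody: {body}")

subject = "Urgent action required: verify your account"
body = "Click the link below to keep your mailbox active."
if phishing_signals(subject, body):
    print(classification_prompt(subject, body))
```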

However, organizations must consider security and privacy controls as part of any generative AI purchase. Doing so protects sensitive data, complies with regulations, and reduces the risk of data leakage or misuse. Without appropriate safeguards, AI tools can expose vulnerabilities, produce harmful output, or violate privacy laws, resulting in financial, legal, and reputational damage.
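
As one concrete example of such a safeguard, the sketch below redacts obvious sensitive tokens before a prompt leaves the organization. The patterns are illustrative assumptions; a production deployment would rely on a proper data loss prevention (DLP) engine rather than a few regular expressions.

```python
import re

# Minimal redaction guardrail: scrub recognizable sensitive tokens from a
# prompt before sending it to an external GenAI service. Patterns are
# illustrative only, not a complete DLP rule set.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt):
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Analyst jane.doe@example.com saw traffic from 203.0.113.7 using key sk-abcdef1234567890abcd"
print(redact(raw))
# -> Analyst [EMAIL REDACTED] saw traffic from [IPV4 REDACTED] using key [API_KEY REDACTED]
```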
