Microsoft Warning As New Attacks ‘Weaponize AI’—Hackers Gain Access To User Accounts
January 11, 2025

The new year is not even two weeks old, and the AI threat landscape is already proving to be as dangerous as feared. Several cybersecurity firms have warned that AI threats will define 2025, and the FBI has issued a specific warning about these rapidly evolving dangers, which include more sophisticated, personalized phishing campaigns and malware engineered with artificial intelligence to bypass security. Now Microsoft has warned that attackers are hijacking access to some of the most powerful AI tools available to fuel their campaigns.

On Friday, the tech giant confirmed it was “taking legal action to protect the public from abusive AI-generated content.” In a post from Stephen Masada of its Digital Crimes Unit, the company warned that it had discovered a “foreign-based” attacker collecting “public customer credentials” to gain access to “generative artificial intelligence services” and intentionally modify the capabilities of those services.

Microsoft says the cybercriminals then used these AI services themselves and even “resold access to other attackers… to create malicious and illegal content.” The company has revoked all known access and implemented countermeasures, which it says include “enhanced security measures to further block such malicious activity.”

The specific threat here involves powerful AI tools, including Microsoft-hosted access to OpenAI’s DALL-E image generator, being used to launch attacks on third parties. But the broader context matters more. A week ago, the Financial Times reported that AI is being used to build malicious phishing campaigns whose content and tone are tailored to each target, by harvesting data from social media and other sources about the person the messages purport to come from.

“Every day,” says Microsoft, “people use generative AI tools to improve their creative expression and productivity. Unfortunately, as we have seen with the emergence of other technologies, the advantages of these tools attract attackers who seek to use and abuse technology and innovation for malicious purposes… Cybercriminals remain persistent and tirelessly improve their tools and techniques to bypass even the most robust security measures. With this action, we are sending a clear message: the weaponization of our artificial intelligence technology by online actors will not be tolerated.”

Last year, the company issued a recommendation on “protecting the public from abusive AI-generated content,” warning that “AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation – especially to target kids and seniors.”

As McAfee warns: “As AI continues to evolve and become more accessible, cybercriminals are using it to create scams that are more convincing, personalized and harder to detect… The risks to trust and safety online have never been greater.” We now have a clearer picture of how that AI is being accessed and abused.

Fasten your seatbelts, because 2025 is only going to get worse.
