HB Ad Slot
HB Mobile Ad Slot
The Growing Cyber Risks from AI — and How Organizations Can Fight Back
Monday, June 16, 2025

Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here’s a high-level discussion at emerging AI-powered cyber risks in 2025—and steps organizations can take to defend.

AI-Generated Phishing & Social Engineering

Cybercriminals now use generative AI to craft near-perfect phishing messages—complete with accurate tone, logos, and language—making them hard to distinguish from real communications . Voice cloning tools enable “deepfake” calls from executives, while deepfake video can simulate someone giving fraudulent instructions.

Thanks to AI, according to Tech Advisors, phishing attacks are skyrocketing—phishing surged 202% in late 2024, and over 80% of phishing emails now incorporate AI, with nearly 80% of recipients opening them. These messages are bypassing filters and fooling employees.

Adaptive AI-Malware & Autonomous Attacks

It is not just the threat actors but the AI itself that drives the attack. According to Cyber Defense Magazine reporting:

Compared to the traditional process of cyber-attacks, the attacks driven by AI have the capability to automatically learn, adapt, and develop strategies with a minimum number of human interventions. These attacks proactively utilize the algorithms of machine learning, natural language processing, and deep learning models. They leverage these algorithms in the process of determining and analyzing issues or vulnerabilities, avoiding security and detection systems, and developing phishing campaigns that are believable.

As a result, attacks that once took days now unfold in minutes, and detection technology struggles to keep up, permitting faster, smarter strikes to slip through traditional defenses.

Attacks Against AI Models Themselves

Cyberattacks are not limited to business email compromises designed to effect fraudulent transfers or to demand a ransom payment in order to suppress sensitive and compromising personal information. Attackers are going after AI systems themselves. Techniques include:

  • Data poisoning – adding harmful or misleading data into AI training sets, leading to flawed outputs or missed threats.
  • Prompt injection – embedding malicious instructions in user inputs to hijack AI behavior.
  • Model theft/inversion – extracting proprietary data or reconstructing sensitive training datasets.

Compromised AI can lead to skipped fraud alerts, leaked sensitive data, or disclosure of confidential corporate information. Guidance from NIST, Adversarial Machine Learning A Taxonomy and Terminology of Attacks and Mitigations, digs into these quite a bit more, and outlines helpful mitigation measures.

Deepfakes & Identity Fraud

Deepfake audio and video are being used to mimic executives or trusted contacts, instructing staff to transfer funds, disclose passwords, or bypass security protocols.

Deepfakes have exploded—some reports indicate a 3,000% increase in deepfake fraud activity. These attacks can erode trust, fuel financial crime, and disrupt decision-making.

Supply Chain & Third-Party Attacks

AI accelerates supply chain attacks, enabling automated scanning and compromise of vendor infrastructures. Attackers can breach a small provider and rapidly move across interconnected systems. These ripple-effect attacks can disrupt entire industries and critical infrastructure, far beyond the initial target. We have seen these effects with more traditional supply chain cyberattacks. AI will only amplify these attacks.

Enhancing Cyber Resilience, Including Against AI Risks

Here’s some suggestions for stepping up defenses and mitigating risk:

  1. Enhance Phishing Training for AI-level deception
    Employees should recognize not just misspellings, but hyper-realistic phishing, voice calls, and video
 impersonations. Simulations should evolve to reflect current AI tactics.
  2. Inventory, vet, and govern AI systems
    Know which AI platforms you use—especially third-party tools. Vet them for data protection, model integrity, and update protocols. Keep a detailed registry and check vendor security practices. Relying on a vendor’s SOC report simply may not be sufficient, particularly is not read carefully and in context.
  3. Validate AI inputs and monitor outputs
    Check training data for poisoning. Test and stress AI models to spot vulnerabilities. Use filters and anomaly detection to flag suspicious inputs or outputs.
  4. Use AI to defend against AI
    Deploy AI-driven defensive tools—like behavior-based detection, anomaly hunting, and automated response platforms—so you react in real time.
  5. Adopt zero trust and multi-factor authentication (MFA)
    Require authentication for every access, limit internal privileges, and verify every step—even when actions appear internal.
  6. Plan for AI-targeted incidents
    Update your incident response plan with scenarios like model poisoning, deepfake impersonation, or AI-driven malware. Include legal, communications, and other relevant stakeholders in your response teams.
  7. Share intelligence and collaborate
    Tap into threat intelligence communities, “Information Sharing and Analysis Centers” or “ISACs”, to share and receive knowledge of emerging AI threats.

Organizations that can adapt to a rapidly changing threat landscape will be better position to defend against these emerging attack vectors and mitigate harm.

HTML Embed Code
HB Ad Slot
HB Ad Slot
HB Mobile Ad Slot

More from Jackson Lewis P.C.

HB Ad Slot
HB Mobile Ad Slot
 
NLR Logo
We collaborate with the world's leading lawyers to deliver news tailored for you. Sign Up for any (or all) of our 25+ Newsletters.

 

Sign Up for any (or all) of our 25+ Newsletters