AI is making cyberattacks sharper, faster and harder to spot

AI does not need to invent a new class of cyberattack to raise the danger. It only has to make familiar attacks more believable, more scalable, and cheaper to run. That is exactly what security agencies and threat-intelligence teams are describing: attackers using AI to polish phishing messages, translate them fluently, clone voices, fabricate identities, summarize stolen data, generate or debug code, and accelerate operations that still rely on old fundamentals such as credential theft, impersonation, and initial access.

The most useful way to think about the threat is this: AI is not turning every criminal into a genius. It is making average attackers look more competent and skilled attackers move much faster. Microsoft’s 2025 defense reporting is blunt on this point. Financially motivated actors still lean on phishing, unpatched assets, exposed services, and infostealers. AI is speeding up the attack chain, not replacing it.

Why the threat feels different now

One reason the danger feels sharper is that the cost of deception has fallen. The joint NSA, FBI, and CISA guidance on deepfakes notes that synthetic media has become easier, cheaper, and faster to produce, with free or easily accessible tools putting persuasive manipulation within reach of far less sophisticated actors. The FBI has issued a similar warning, saying AI increases the speed, scale, and automation of existing schemes and is being used to run highly targeted phishing campaigns.

The other reason is industrialization. Europol’s 2025 IOCTA report describes a mature cybercrime market where attackers can buy phishing kits, infostealers, exploit kits, spoofing services, malicious LLM access, and even initial access to compromised environments. In one example, Europol says the LabHost phishing platform was linked to at least 40,000 phishing domains and around 10,000 users worldwide, operating on a subscription model. AI slots neatly into that market because it improves targeting, speeds up content creation, and lowers the amount of expertise needed to look convincing.

Where AI gives attackers the biggest edge

The first and most immediate gain is in phishing, spear phishing, business email compromise, and vishing. The FBI’s latest Internet Crime Report says phishing and spoofing were the top cybercrime complaint categories reported in 2024. Verizon’s current guidance adds an important detail: AI helps attackers cross language barriers, rapidly tailor campaigns, and scale spear phishing from one carefully written lure to thousands of convincing variations. That matters because poor grammar used to be one of the easiest warning signs. It is no longer reliable.

Voice fraud is moving up the list because it targets trust directly. Europol notes that vishing is increasingly used to gain initial access, while the FBI and CISA deepfake guidance warns that fraudulent voice messages, texts, and videos can be used to impersonate leaders, pressure staff, and open paths to sensitive systems or money transfers. A request that sounds authentic is no longer proof of authenticity.

AI is also making identity abuse more dangerous. Microsoft’s 2025 Digital Defense Report says AI-driven forgeries grew 195% globally and are convincing enough to challenge selfie checks and liveness tests by simulating natural eye blinks or head turns. That means the threat is not limited to fake emails. It extends to onboarding, hiring, payments, account recovery, and any workflow that assumes a face, a voice, or a scanned ID is strong evidence on its own.

Why deepfakes deserve more attention

Deepfakes attract headlines because they are dramatic, but the real problem is more practical than cinematic. The NSA, FBI, and CISA warning says synthetic media can threaten a company’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information. In other words, deepfakes matter because they plug directly into fraud, social engineering, and internal trust relationships that companies already struggle to protect.

The most revealing examples are not elaborate Hollywood-style fabrications. They are messy, plausible interactions that create just enough confusion to push someone past procedure. In one case cited by the NSA, FBI, and CISA, an attacker used synthetic audio and video while impersonating an executive, then exploited a poor connection and switched to text while urging the target to wire money. That detail matters. A bad line, a frozen image, or a request to “just continue in chat” can be part of the attack, not a harmless glitch.

The FBI’s public guidance on synthetic content adds what users should actually look for: unnatural pauses, strange inflections, inconsistent background noise, awkward blinking, lighting that feels off, distorted facial features, and body movement that does not quite match speech. None of these signs is definitive on its own, but together they matter. Attackers do not need perfection. They need a moment of compliance.

The red flags that still matter

The strongest warning signs are often behavioral, not visual. Be suspicious when a message or call creates urgency, asks you to bypass a normal approval chain, pressures you to stay in one channel, or tries to isolate you from independent verification. The NCSC’s phishing guidance still describes the core logic accurately: criminals use emails, texts, websites, and phone calls to trick people into clicking, downloading, or handing over sensitive information. AI makes those lures smoother, but it does not change their intent.

You should also pay attention to source quality rather than just content quality. The deepfake guidance recommends checking the source before drawing conclusions, using reverse image search where relevant, and treating obvious audio or visual anomalies as signals to stop and verify. Microsoft’s latest threat reporting makes a similar point from the defender side: detection has to focus more on behavior, delivery infrastructure, and context, not just on whether a message “sounds wrong.”
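
To make the "delivery infrastructure" point concrete, one small check a security team can automate is whether a sending domain even publishes a DMARC policy. The sketch below is one possible illustration, not something the cited reports prescribe; it assumes the third-party dnspython library is installed.

```python
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC TXT record, or None if it publishes none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

if __name__ == "__main__":
    print(dmarc_policy("example.com") or "no DMARC record published")
```

A missing or permissive policy does not prove a message is malicious, but it is exactly the kind of contextual signal that matters more than whether the text "sounds wrong."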

A practical rule helps here: any request involving credentials, money, sensitive files, or privileged access should survive a second method of verification. If it cannot survive a callback, a known contact route, a separate confirmation channel, or a formal approval step, it should not be trusted. That is not paranoia. It is basic hygiene in an environment where voices, faces, and writing style can be imitated.
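
As an illustration of that rule, the sketch below encodes it as a simple policy object: a high-risk request arriving on one channel cannot proceed until it is confirmed on a different one. The action names and channel labels are hypothetical examples, not a standard.

```python
from dataclasses import dataclass, field

# Illustrative policy sketch: a high-risk request is only actionable once it
# has been confirmed on a channel *different* from the one it arrived on.
HIGH_RISK = {"wire_transfer", "credential_reset", "privileged_access", "file_release"}

@dataclass
class Request:
    requester: str
    action: str
    origin_channel: str                      # e.g. "email", "voice", "chat"
    confirmations: set[str] = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation; the originating channel never counts."""
        if channel != self.origin_channel:
            self.confirmations.add(channel)

    def may_proceed(self) -> bool:
        """Low-risk actions pass; high-risk ones need an independent channel."""
        return self.action not in HIGH_RISK or bool(self.confirmations)

# A wire request that arrived by email stays blocked until a callback on a
# known number (a separate channel) confirms it.
req = Request("cfo@example.com", "wire_transfer", "email")
assert not req.may_proceed()
req.confirm("email")            # same channel: ignored
req.confirm("phone_callback")   # independent channel: now actionable
assert req.may_proceed()
```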

The overlooked internal attack surface

Many organizations focus on incoming attacks and miss the risk created by their own people using AI carelessly. Verizon says roughly 14% of employees routinely access generative AI systems on corporate devices; of those, 72% use non-corporate email addresses for their accounts and 17% use corporate email without integrated authentication such as SAML. That creates a quiet but serious exposure: staff may paste sensitive data into tools the company does not govern properly, turning convenience into leakage.
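
One low-effort way to surface that exposure is to triage outbound proxy logs for traffic to known generative AI endpoints. The sketch below is a minimal illustration: the domain list is a partial example, and the log format (CSV with "user" and "destination_host" columns) is an assumption about your environment, not a vendor standard.

```python
import csv

# Illustrative list of generative AI endpoints; extend for your environment.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_genai_traffic(log_path: str) -> dict[str, set[str]]:
    """Map each user to the generative AI hosts they reached."""
    hits: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in GENAI_DOMAINS:
                hits.setdefault(row["user"], set()).add(host)
    return hits

if __name__ == "__main__":
    for user, hosts in sorted(flag_genai_traffic("proxy_log.csv").items()):
        print(f"{user}: {', '.join(sorted(hosts))}")
```

The output is not evidence of wrongdoing; it is a starting list for conversations about which tools need sanctioned, governed alternatives.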

There is also a software supply chain angle that deserves more attention. Europol’s 2025 reporting highlights so-called slopsquatting, where AI coding assistants hallucinate package names and attackers then publish malicious packages under those invented names. If developers trust the suggestion and use the code without verification, the result can be a supply chain compromise. This is a sharp reminder that AI can undermine security even when nobody is being phished directly.
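
A cheap guardrail against slopsquatting is to vet any AI-suggested dependency before installing it. The sketch below queries PyPI's public JSON API to confirm a package exists and to surface its first upload date; the accept/reject wording is illustrative, and the final judgment on a young or sparse package belongs to a human reviewer.

```python
import json
import urllib.request
from urllib.error import HTTPError

def pypi_metadata(name: str) -> dict | None:
    """Fetch a package's metadata from PyPI, or None if it does not exist."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None   # package does not exist: classic hallucination signal
        raise

def vet_package(name: str) -> str:
    meta = pypi_metadata(name)
    if meta is None:
        return f"REJECT: '{name}' is not on PyPI (possibly a hallucinated name)"
    uploads = [
        f["upload_time_iso_8601"]
        for files in meta["releases"].values() for f in files
    ]
    if not uploads:
        return f"SUSPICIOUS: '{name}' exists but has no uploaded files"
    return f"OK: '{name}' exists, first upload {min(uploads)}; review before installing"

print(vet_package("requests"))                        # long-established package
print(vet_package("definitely-not-a-real-pkg-xyz"))   # likely rejected
```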

Hiring and contractor workflows are another weak spot. Microsoft says threat actors are using AI to create convincing personas, research job roles, fabricate identity details, and support large-scale operational persistence. That makes security a recruitment problem too. A fake candidate, a fraudulent contractor, or a remote worker operating under a synthetic identity can become an access problem long before they look like a cybersecurity incident.

Defenses that still work

The central defensive shift is simple: move from trust based on presentation to trust based on verification. The NSA, FBI, and CISA guidance recommends real-time identity verification procedures, mandatory MFA, unique or one-time passwords or PINs for sensitive communications, and stronger controls around financial transactions. Those controls may feel old-fashioned next to AI headlines, but that is precisely why they work. They force an attacker out of performance and into proof.
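
To show how lightweight a "one-time PIN for sensitive communications" can be, here is a minimal RFC 6238-style sketch using only the Python standard library. The shared secret shown is a placeholder for illustration; in practice an authenticator app or hardware token does the same job.

```python
import base64
import hashlib
import hmac
import struct
import time

def one_time_pin(secret_b32: str, interval: int = 60, digits: int = 6) -> str:
    """Derive a time-based one-time PIN (RFC 6238 style) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Usage: the caller reads out the current PIN; the recipient computes it
# independently and refuses the request if the values do not match. An
# attacker who can clone a voice still cannot produce the PIN.
SHARED_SECRET = "JBSWY3DPEHPK3PXP"   # placeholder secret for illustration
print(one_time_pin(SHARED_SECRET))
```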

Training needs an upgrade as well. Generic phishing awareness is no longer enough if staff still assume a familiar voice, polished English, or a believable video means low risk. The same federal guidance recommends planning and rehearsing for deepfake scenarios, running tabletop exercises, and teaching staff how manipulated media may be used for executive impersonation, BEC, recruitment fraud, and operational disruption. Security culture has to catch up with synthetic credibility.

For security teams, one of the more important mindset changes is to stop treating AI-enabled attacks as a separate exotic category. They should be mapped onto existing controls: email security, identity protection, privileged access, payment approvals, vendor verification, recruitment checks, data loss prevention, and software supply chain validation. AI changes the shape of the pressure. It does not remove the need for disciplined fundamentals.

The habit that matters most

For individuals, the most useful habit is to slow the moment of response. Do not send money, approve access, share sensitive files, or reset credentials because a message sounds polished or a caller sounds familiar. Verify through a number or address you already know, not the one provided in the message. If a recruiter, bank, executive, supplier, or colleague is real, they will survive verification. If they resist it, that resistance is information.

For organizations, the deeper lesson is even clearer. The decisive weakness is not that AI can write better text or generate a fake face. It is that too many business processes still assume human-sounding communication is trustworthy by default. Attackers understand that gap. They are exploiting it with tools that make deception faster, cheaper, and easier to personalize. The companies that adapt first will not be the ones chasing every new buzzword. They will be the ones that redesign trust so that evidence beats appearance every time.

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

Sources

Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures
Microsoft Security analysis of AI-enabled fraud, deception, and identity abuse, with practical implications for defenders.
https://www.microsoft.com/en-us/security/blog/2025/04/16/cyber-signals-issue-9-ai-powered-deception-emerging-fraud-threats-and-countermeasures/

2025 Microsoft Digital Defense Report
Microsoft’s annual threat landscape report covering phishing, exposed services, infostealers, synthetic identities, and AI-related security risks.
https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2025

AI as tradecraft | How threat actors operationalize AI
Microsoft Threat Intelligence research on how attackers use AI across reconnaissance, social engineering, malware development, and post-compromise activity.
https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

FBI warns of increasing threat of cyber criminals utilizing artificial intelligence
FBI warning on how cybercriminals use AI to improve targeted phishing and other existing attack methods.
https://www.fbi.gov/contact-us/field-offices/san-francisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence

FBI releases annual Internet Crime Report
Official FBI release summarizing the latest Internet Crime Report, including phishing and spoofing trends.
https://www.fbi.gov/news/press-releases/fbi-releases-annual-internet-crime-report

Contextualizing Deepfake Threats to Organizations
Joint NSA, FBI, and CISA cybersecurity information sheet on deepfakes, executive impersonation, media verification, and defensive planning.
https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

STEAL, DEAL AND REPEAT | IOCTA 2025
Europol’s 2025 Internet Organised Crime Threat Assessment on stolen data markets, phishing kits, access brokers, malicious AI use, and supply chain abuse.
https://www.europol.europa.eu/cms/sites/default/files/documents/Steal-deal-repeat-IOCTA_2025.pdf

AI in Cybersecurity | Opportunities and Threats
Verizon business guidance on AI-enabled spear phishing, voice cloning, and internal data leakage risks.
https://www.verizon.com/business/resources/articles/ai-in-cybersecurity-opportunities-and-threats/

Phishing scams | Spot and report scam emails, texts, websites and calls
UK National Cyber Security Centre guidance on how phishing works, what it looks like, and how to report it.
https://www.ncsc.gov.uk/collection/phishing-scams