CyberHappenings

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

Escalating deepfake voice social engineering attacks drive multi-million-dollar losses amid absence of verification protocols

1 unique source, 1 article

Summary


Since early 2024, threat actors have deployed AI-generated voice clones in real-time telephone and videoconference calls to impersonate executives and colleagues, bypassing existing technical controls and inducing victims to authorize high-value wire transfers. Attacks leveraged as little as three seconds of publicly available audio—often from corporate recordings or social media—to create convincing replicas using free, offline tools. Incidents surged 680% year-over-year in 2025, with over 100,000 documented cases in the United States alone and global documented fraud losses exceeding $2.19 billion. Organizations that prevented financial losses relied on enforced verification steps—such as pre-stored callback numbers, verbal passcodes, and mandatory pauses before acting—rather than technical detection.
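The verification steps described above can be sketched as a minimal workflow. All names, numbers, and thresholds below are illustrative assumptions, not part of any reported incident or organization's actual controls:

```python
# Sketch of an out-of-band payment-verification workflow. Illustrative only:
# directory entries, roles, and the review threshold are assumed values.

# Pre-stored callback numbers, maintained out of band and never taken
# from the incoming call or message itself.
CALLBACK_DIRECTORY = {
    "cfo": "+1-555-0100",
    "controller": "+1-555-0101",
}

REVIEW_THRESHOLD_USD = 10_000  # mandatory pause at or above this amount


def verify_transfer_request(requester_role: str,
                            amount_usd: float,
                            passcode_given: str,
                            expected_passcode: str) -> dict:
    """Return the verification checks required before authorizing a wire.

    The caller's voice is never trusted on its own: the request must be
    confirmed via a pre-stored callback number and a shared verbal
    passcode, with a mandatory pause for large amounts.
    """
    checks = {
        "callback_number": CALLBACK_DIRECTORY.get(requester_role),
        "passcode_ok": passcode_given == expected_passcode,
        "mandatory_pause": amount_usd >= REVIEW_THRESHOLD_USD,
    }
    checks["authorize"] = (
        checks["callback_number"] is not None
        and checks["passcode_ok"]
        # Large transfers are held for human review regardless of passcode.
        and not checks["mandatory_pause"]
    )
    return checks
```

For example, a $250,000 request is never auto-authorized here even with a correct passcode, because the mandatory-pause check forces a review first; a request from a role with no pre-stored callback number is likewise refused.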

Timeline

  1. 27.04.2026 16:00 1 article · 2h ago

    AI voice impersonation drives surge in six-figure wire fraud amid absence of verification protocols

    Threat actors increasingly leverage publicly available voice samples and free AI tools to conduct real-time impersonation attacks via phone and videoconference, bypassing technical security stacks. Documented losses exceeded $200 million in the first four months of 2025, with 61% of impacted organizations reporting losses above $100,000. Organizations that enforced callback requirements, verbal passcodes, and mandatory verification pauses prevented financial losses.


Information Snippets