
Deepfakes in Authentication

Deepfakes (AI-generated voices, faces, and behaviors that convincingly mimic real people) have moved from novelty to material enterprise risk. This paper examines how synthetic media is eroding the reliability of identity verification, with a pragmatic focus on authentication in contact centers. Our aim is to equip engineering, security, and fraud teams with a clear threat model and a set of controls that still work when an attacker can sound and look exactly like a trusted customer or executive. Accordingly, this paper treats deepfake detection as a risk signal that can trigger step-up controls, not as a binary gate.
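The "risk signal, not binary gate" approach can be sketched in code. The following is a minimal illustration, not the paper's implementation: all names, signals, weights, and thresholds here are hypothetical, chosen only to show how a deepfake-detection score might be blended with other call-risk indicators to select a step-up action rather than a hard allow/deny decision.

```python
from dataclasses import dataclass

@dataclass
class CallRiskSignals:
    """Hypothetical risk inputs for a single contact-center call."""
    deepfake_score: float       # 0.0 (likely genuine) .. 1.0 (likely synthetic)
    ani_spoof_suspected: bool   # caller-ID (ANI) spoofing indicator
    high_risk_request: bool     # e.g. payee change, PIN reset

def next_action(s: CallRiskSignals) -> str:
    """Blend the deepfake score with other signals into one risk value,
    then map that value to an action. Weights and cutoffs are illustrative."""
    risk = s.deepfake_score
    if s.ani_spoof_suspected:
        risk += 0.3
    if s.high_risk_request:
        risk += 0.2
    if risk >= 0.9:
        return "terminate_and_review"   # hand off to fraud review
    if risk >= 0.5:
        return "step_up"                # e.g. OTP to a registered device, callback
    return "proceed"
```

Because detection is only one weighted input, a moderately suspicious voice on a low-risk request can still proceed, while the same score on a payee change triggers step-up; a hard gate on the detector alone would have to pick one behavior for both.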

Authors: Alex Shockley & Keziah Gopalla

[Figure: a face rendered half natural and half polygon mesh, representing facial biometrics]



Previous roles: Director of Digital Strategy at AOR (ad agency); founder of Shocking Creations (digital agency, since exited); Director of Partnerships at Unreasonable Adventures.