Microsoft research: AI media authentication doesn't work reliably, yet new laws assume it does
A Microsoft technical report systematically evaluates provenance metadata, invisible watermarks, and digital fingerprints for distinguishing real media from AI-generated content. The findings are sobering: no single method is reliable on its own, and even combined approaches have significant limits.
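To see why a fingerprint-style check is fragile on its own, consider a toy average-hash over a tiny grayscale grid: mild pixel noise leaves the fingerprint intact, but a small crop already breaks the match. This is an illustrative sketch only, not the report's methodology or any production fingerprinting scheme.

```python
# Toy 4x4 average-hash "fingerprint" (illustrative only; real systems
# use far more robust perceptual hashes over full-size images).
def ahash(pixels):
    """Return a bit-tuple: 1 where a pixel exceeds the image mean."""
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if v > avg else 0 for v in flat)

img = [[10, 20, 200, 210],
       [15, 25, 205, 215],
       [12, 22, 202, 212],
       [18, 28, 208, 218]]

noisy = [[v + 2 for v in row] for row in img]   # mild noise: hash survives
cropped = [row[1:] + [0] for row in img]        # small shift/crop: hash breaks

print(ahash(img) == ahash(noisy))    # True
print(ahash(img) == ahash(cropped))  # False
```

The asymmetry is the point: a transformation a viewer barely notices can defeat the check, which is one reason the report argues no single mechanism suffices.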
The report, produced under Microsoft's LASER long-term AI safety program, arrives as governments worldwide draft legislation requiring AI content authentication. The gap between regulatory assumptions and technical reality creates what the researchers call a false sense of security.
View full digest for February 21, 2026