Apple Intelligence found to push hallucinated stereotypes across millions of devices
The non-profit AI Forensics analyzed more than 10,000 Apple Intelligence notification summaries and found systematic bias in how the system handles identity-related content. The on-device model omits the ethnicity of white protagonists more often than it does for other groups, and it reinforces gender stereotypes when summarizing ambiguous texts.
Because Apple Intelligence is deployed across hundreds of millions of iPhones, iPads, and Macs, it could fall under the EU AI Act's "systemic risk" classification; Apple has not signed the Act's voluntary Code of Practice. The bias is particularly concerning because users never explicitly opt in to having identity-sensitive content summarized.