ChatGPT and Gemini voice bots repeat falsehoods up to 50% of the time with malicious prompts
NewsGuard tested ChatGPT Voice, Gemini Live, and Alexa+ against 20 false claims spanning health, politics, and disinformation. Asked neutral questions, ChatGPT repeated falsehoods 22% of the time and Gemini 23%. Malicious prompts spiked those rates to 50% and 45%, respectively.
Amazon's Alexa+ was the outlier, rejecting every false claim by drawing on trusted sources such as AP and Reuters. The findings raise concerns that AI voice bots could become vectors for audio disinformation on social media. OpenAI declined to comment; Google didn't respond.
View full digest for February 23, 2026