Missouri Attorney General Andrew Bailey has opened an investigation into major tech companies, including OpenAI, Google, Microsoft, and Meta, over alleged political bias in their AI chatbots. He claims these systems misrepresented Donald Trump by ranking him worst among recent presidents on antisemitism, calling the outputs deceptive and potentially unlawful.
Bailey argues that these responses reflect political manipulation rather than objective fact, and that misleading outputs from AI platforms could misinform consumers and violate consumer protection laws. His office is demanding documentation on how the systems are trained, including any processes involving content filtering, suppression, or ideological input.
The investigation cites an incident in which several generative AI chatbots were reportedly asked to rank recent presidents on antisemitism. ChatGPT, Gemini, and Meta AI allegedly placed Trump last, while Microsoft's Copilot declined to produce a ranking at all. Critics contend that Bailey mischaracterized Copilot's refusal as evidence of bias, undermining the foundation of his legal claims.
It can be argued that ranking presidents by antisemitism is inherently subjective and cannot serve as the basis for a factual claim. Moreover, given the vagueness of the alleged harm, the investigation may reflect political posturing rather than a legitimate consumer protection concern. The known limitations of generative AI chatbots, which produce probabilistic text rather than verified statements of fact, should also be taken into consideration.
Bailey has previously pursued high-profile investigations involving online platforms and several media organizations. Analysts suggest this current effort mirrors a broader political campaign to challenge perceived left-leaning influence in artificial intelligence. Some warn that this approach could threaten innovation and free expression in the AI industry.
The Attorney General also contends that Section 230 protections may not apply if AI tools are intentionally designed to deceive. A competing reading of Section 230, however, holds that its immunity was never meant to govern AI-generated speech at all, and that any legal reform should come from the United States Congress rather than from individual state actions.
Bailey's actions could be seen as part of an emerging trend in which state officials target artificial intelligence systems over perceived political leanings. This raises concerns about government overreach, especially when legal arguments rest on disputed interpretations of how these technologies function and of what constitutes consumer deception.