State attorneys general urge Microsoft, OpenAI, Google, and other leading AI companies to address ‘delusional’ outputs
Following a series of troubling incidents involving AI chatbots and mental health, a coalition of state attorneys general has issued a formal warning to leading AI companies. The group cautioned that failure to address “delusional outputs” from their systems could result in violations of state laws.
Attorneys general from numerous U.S. states and territories, working through the National Association of Attorneys General, signed a letter addressed to thirteen major AI firms: Microsoft, OpenAI, Google, Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI. The letter urged the companies to introduce new internal measures to safeguard users.
This action comes amid ongoing debates between state and federal authorities regarding the regulation of artificial intelligence.
Proposed Safeguards for AI Systems
The attorneys general recommended several protective steps, including independent third-party audits of large language models to detect signs of delusional or excessively agreeable behavior. They also called for new protocols to alert users if chatbots generate content that could negatively impact mental health. The letter emphasized that external organizations, such as academic institutions and civil society groups, should have the freedom to review AI systems before release and publish their findings without interference from the companies involved.
“Generative AI holds the promise to transform society for the better, but it has already caused—and could continue to cause—significant harm, particularly to vulnerable individuals,” the letter noted. It referenced several high-profile cases over the past year, including instances of suicide and violence, where excessive AI use was implicated. In many of these situations, generative AI tools produced outputs that either reinforced users’ harmful beliefs or assured them their perceptions were accurate.
Incident Reporting and Safety Measures
The attorneys general further advised that mental health-related incidents involving AI should be handled with the same transparency as cybersecurity breaches. They advocated for clear incident reporting procedures and the publication of timelines for detecting and responding to problematic outputs. Companies were urged to promptly and transparently inform users if they had been exposed to potentially dangerous chatbot responses.
Additionally, the letter called for the development of robust safety tests for generative AI models to ensure they do not produce harmful or misleading content. These evaluations should take place before any public release of the technology.
Federal and State Regulatory Tensions
Efforts to reach Google, Microsoft, and OpenAI for comment were unsuccessful at the time of publication; updates will be provided if responses are received.
At the federal level, AI developers have generally encountered a more favorable environment. The Trump administration has openly supported AI innovation and has attempted several times over the past year to enact a nationwide ban on state-level AI regulations. These initiatives have not succeeded, partly due to resistance from state officials.
Undeterred, President Trump announced plans to issue an executive order in the coming week that would restrict states’ authority to regulate AI. In a statement on Truth Social, he expressed hope that the action would prevent AI from being “DESTROYED IN ITS INFANCY.”