OpenAI and Anthropic cross-test models for hallucination and security issues
Jinse Finance reported that OpenAI and Anthropic recently evaluated each other's models to identify issues that may have been missed in their own internal testing. In blog posts published Wednesday, both companies said that this summer they ran safety tests on each other's publicly available AI models, probing for hallucination tendencies and for "misalignment," where a model does not behave as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and before Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.
