May 7, 2025

OpenAI's latest AI models are generating alarming levels of misinformation

OpenAI's latest AI models seem to have a big problem. A report reveals that its o3 and o4-mini reasoning models are producing misinformation at an alarming rate.

AI-generated misinformation, also known as hallucination, is common across most artificial intelligence services. But The New York Times has reported on testing conducted by OpenAI itself, which found that the company's newest models generate more fabricated content than their predecessors. This in turn has raised serious concerns about their reliability.

o3 and o4-mini are designed to mimic human reasoning and logic. When the models were benchmarked on questions about public figures (OpenAI's PersonQA test), nearly one-third (33%) of o3's answers were found to be hallucinations. By comparison, o1, tested last year, had less than half that error rate. o4-mini fared even worse, hallucinating on 48% of its tasks. On general-knowledge questions (the SimpleQA benchmark), hallucination rates soared to 51% for o3 and a staggering 79% for o4-mini.

OpenAI says the hallucination problem does not mean the reasoning models are worse; they may simply be more verbose and adventurous in their answers, speculating about possibilities rather than repeating predictable facts. Developers intended these systems to think critically and reason through complex queries, but that ambitious approach appears to have increased creativity at the expense of factual accuracy.

This could pose a big problem for OpenAI's ChatGPT, as rival services such as Google's Gemini and Anthropic's Claude are designed to prioritize accuracy. Unlike simpler models that stick to high-confidence predictions, o3 and o4-mini often speculate, blurring the line between plausible scenarios and outright fabrications. That raises red flags for users in high-stakes fields, from legal professionals to educators and healthcare providers, where over-reliance on AI could lead to significant missteps.

The more tasks AI is trusted with, the greater the potential for critical errors. Even where AI models outperform humans, inaccuracies like these diminish AI's overall credibility. Until the hallucination problem is effectively addressed, users are advised to approach AI-generated information with caution and skepticism.

Source: TechRadar
