OpenAI’s Sam Altman Shocked ‘People Have a High Degree of Trust in ChatGPT’ Because ‘It Should Be the Tech That You Don’t Trust’

Image of Sam Altman in front of a blue background. Image by jamesonwu1972 via Shutterstock.

On the first episode of OpenAI’s new podcast, CEO Sam Altman addressed the degree of trust people place in ChatGPT. Altman observed, “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much.”

This candid admission comes at a time when AI’s capabilities are still in their infancy. Billions of people around the world now use artificial intelligence, but, as Altman notes, the technology is not especially reliable.

ChatGPT and similar large language models (LLMs) are known to “hallucinate,” or generate plausible-sounding but incorrect or fabricated information. Despite this, millions of users rely on these tools for everything from research and work to personal advice and parenting guidance. Altman himself described using ChatGPT extensively for parenting questions during his son’s early months, acknowledging both its utility and the risks inherent in trusting an AI that can be confidently wrong.

Altman’s observation points to a paradox at the heart of the AI revolution: while users are increasingly aware that AI can make mistakes, the convenience, speed, and conversational fluency of tools like ChatGPT have fostered a level of trust more commonly associated with human experts or close friends. This trust is amplified by the AI’s ability to remember context, personalize responses, and provide help across a broad range of topics — features that Altman and others at OpenAI believe will only deepen as the technology improves.

Yet, as Altman cautioned, this trust is not always well-placed. The risk of over-reliance on AI-generated content is particularly acute in high-stakes domains such as healthcare, legal advice, and education. While Altman praised ChatGPT’s usefulness, he stressed the importance of user awareness and critical thinking, urging society to recognize that “AI hallucinates” and should not be blindly trusted.

The conversation also touched on broader issues of privacy, data retention, and monetization. As OpenAI explores new features — such as persistent memory and potential advertising products — Altman emphasized the need to maintain user trust by ensuring transparency and protecting privacy. The ongoing lawsuit with The New York Times over data retention and copyright has further highlighted the delicate balance between innovation, legal compliance, and user rights.


On the date of publication, Caleb Naysmith did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. For more information, please view the Barchart Disclosure Policy.