Study Finds Nearly 40% of Americans Fact-Check Chatbots with Google
TECHNOLOGY


Americans verify AI-generated information using Google or other sources, study finds.
Artificial intelligence (AI) chatbots have rapidly become part of the daily routines of millions, but a significant trust gap remains. A new survey released by ChatOn, a popular AI chatbot application, reveals that while Americans are embracing AI for speed and creativity, they remain highly skeptical of its accuracy.
The study, which examined Americans' habits and preferences for using AI chatbots, found that a remarkable 39% of users verify AI-generated information using Google or other external sources. This habit underscores a crucial reality: for a large portion of the public, AI is a powerful assistant, but it is not yet a definitive source of truth.
AI Is an Everyday Utility
The survey, released by ChatOn (an application developed by AIBY and powered by multiple large language models), confirms that AI is now a core utility in everyday life. The most common uses for chatbots include:
Information Gathering: 74% of respondents use AI to get answers or search for information.
Communication: 65% use it for writing and editing emails, messages, and texts.
Idea Generation: 54% use it for brainstorming and creative inspiration.
In terms of frequency, 72% of Americans use AI chatbots multiple times per week or more, and 22% use them multiple times per day.
Proficiency vs. Trust
While usage is high, so is the desire for mastery. Nearly half of respondents (49%) rated their AI proficiency as intermediate, with 24% describing themselves as advanced. This proficiency, however, doesn't translate to blind trust. The habit of actively verifying information (39%) demonstrates that users are aware of the technology’s limitations.
Beyond external fact-checking, users employ other common safety habits:
Asking Follow-Up Questions: 48% double-check answers by asking the chatbot to elaborate.
Rephrasing Prompts: 42% rephrase their original query to try to improve the quality of the result.
This suggests that users understand they must actively guide the AI to achieve reliable outcomes.
The "Hallucination" Headache and Privacy Concerns
The skepticism is warranted, as users frequently encounter issues known as "AI hallucinations":
Irrelevant Responses: 39% of users sometimes receive answers completely unrelated to their prompt.
Outdated Information: 36% encounter information that is no longer current.
Contradictions and Fake Sources: 33% notice contradictions in the AI's responses, and 19% report seeing made-up sources or references.
Privacy also remains a major concern: 54% of respondents avoid sharing sensitive personal information, and 36% refrain from discussing work-related data in chatbots.
"While most users rate their AI proficiency as intermediate or higher, their strong interest in improving their skills shows that familiarity doesn't equal mastery," said Dmitry Khritankov, Product Director at ChatOn.
This presents a clear challenge and opportunity for developers: to make AI more trustworthy, accessible, and safe, ultimately allowing users to rely on the assistant without needing to jump back to Google to double-check the results.
