OpenAI released ChatGPT in late 2022, almost two and a half years ago. Since then, generative AI tools have rapidly evolved from experimental novelties to everyday assistants — shaping how people work, learn, and even seek emotional support.
On the corporate side, early fears around the security of genAI led nearly a third of companies to introduce bans on its use. But by 2025, only 6% of firms still maintained those bans — a shift driven not only by growing institutional trust but also by AI providers making their tools more enterprise-friendly.
But have the introduction of generative AI and the excitement around it changed public trust in AI?
Global Trust Levels: A Mixed Bag
Public trust in artificial intelligence has not improved as much as many had hoped. Instead, we’re seeing fragmented sentiment that varies by geography, demographics, and use cases.
Geographic Differences
Trust in AI is highest in India (77%), with China not far behind at 72%. Meanwhile, in the United States, only 32% of adults say they trust AI. Trust levels are even lower in Canada (30%), Germany (29%), the Netherlands (29%), the United Kingdom (28%), Australia (25%), and Ireland (24%).
In the United States, views are nuanced:
- 56% believe AI has a net-neutral effect — doing equal amounts of harm and good. The good news: The percentage of those who believe AI is more harmful than helpful decreased nine percentage points from 2023 to 2024, from 40% to 31%.
- In 2024, 77% of adults said they do not trust businesses to use AI responsibly, essentially unchanged from 2023. Job fears play a part: 58% of people worry about displacement due to automation, and more than 3 in 5 worry about AI-driven misinformation.
- Interestingly, confidence in AI to act in the public interest (47%) exceeds that of both social media (39%) and Congress (42%).
- 47% predict AI will ultimately be less biased than humans.
Demographic Differences
- Age: Trust in AI declines with age. 57% of those aged 18-34 trust AI, compared to 52% of those aged 35-54 and only 38% of those over 55.
- Gender: Men are slightly more trusting of AI than women — 52% vs. 46%.
- Income: Trust increases with income level. While 51% of high-income respondents trust AI, that drops to 45% among middle-income individuals and just 36% among those with low incomes.
- U.S. political affiliation: Democrats show slightly more trust in AI (38%) than Republicans (34%). However, Republican trust grew significantly from 24% in 2024, while Democratic trust held steady. Independents reported the lowest trust, at just 23%.
Sector-Specific Sentiment
Public trust in AI varies substantially across industries and use cases.
Healthcare
In healthcare, public sentiment shows cautious optimism:
- 52% of people are excited about AI’s potential to improve diagnostic accuracy or shorten recovery times.
- 56% support using AI tools for diagnosis if they can improve their condition or speed up recovery.
- 55% support using AI tools to diagnose or treat them or a loved one if there are strict safeguards to ensure ethical use of patient data; 57% say qualified professionals should oversee AI tools.
- 55% of 18-to-29-year-old Americans even feel comfortable chatting with AI about mental health concerns.
Yet this trust is inconsistent. Another survey from September 2024 found that only 25% of customers feel confident about using AI chatbots for medical information. And when it comes to health insurance, things look worse. Just 30% of insured Americans said they trust AI to evaluate and approve claims — and 55% actively distrust it.
Financial Services
Customers show higher trust in AI for tasks like fraud detection and prevention (77%) but are more cautious about using AI for financial information and advice, with only 27% expressing trust in such applications. When compared with other types of advice, this ranked lower than travel information (37%) but was slightly ahead of medical information (25%).
Legal Services
Only 12% feel comfortable with AI making legal decisions, even though 35% believe AI will soon take over most of the tasks legal professionals perform. This highlights an important gap: AI’s capabilities are advancing faster than public trust.
Customer Experience
Despite widespread implementation, generative AI earned the lowest average customer experience (CX) score of any emerging technology in 2024. GenAI services received a CX score of 69.5, with the broader AI category only slightly higher at 71.5, also below the average for emerging tech. These low scores likely reflect early-stage growing pains, as many enterprises are still figuring out how to integrate genAI tools effectively and deliver consistent, tangible value.
Hiring
Confidence in AI hiring tools is rising — at least among HR professionals. According to a 2025 HireVue report, trust in AI systems for hiring decisions grew from 37% in 2024 to 51% in 2025. But while adoption is rising inside companies, job candidate trust is more complex.
79% of candidates say they want transparency about AI’s role in hiring, and 30% are concerned it could replace the human factor entirely. That said, about half of candidates say they’d be comfortable applying to a job where AI is used to support hiring decision-making — suggesting openness when AI augments, rather than replaces, human judgment.
Public Sector and Institutional Concerns
Trust in AI is especially sensitive within the public sector. A 2025 Amazon Web Services survey found that:
- 83% of public sector organizations are concerned about public trust in genAI.
- 48% cited data privacy and security as their top concern.
- 94% said explainability — understanding how an AI system arrived at a result — is essential.
Transparency also matters to consumers. When asked how businesses could reduce concerns about AI, 57% of Americans said companies should be transparent about how AI is used in their business practices.
Building Trust in the Next Generation of AI
Demographics and geography clearly play a role in attitudes toward AI. Younger, wealthier, and more educated individuals tend to trust AI more. While this raises concerns about an emerging “AI divide,” it also suggests that increased access to information and a better understanding of the technology may help build trust.
Countries in the developing world are also displaying higher levels of optimism and trust in AI. This trend may reflect broader trust in institutions: China and India, for example, report trust in business and government above 79%, compared with just 55% (business) and 41% (government) in the United States. India’s significantly younger population may also contribute to its more optimistic outlook.
The next wave of generative AI is already taking shape — with autonomous agents, deep research copilots, and more proactive, context-aware tools. These systems won’t just answer questions — they’ll anticipate needs, act on a user’s behalf, and integrate more deeply into personal and professional life.
Whether this inspires greater confidence or deepens existing concerns will depend on how these technologies are built and communicated. Lasting trust in AI will require not just better, more accurate large language models but also more transparency and public understanding of AI systems. The good news: with reasoning LLMs, we’re getting to see what the AI models are “thinking.”
As noted by a former Chief Architect at Microsoft in an AlphaSense transcript, ChatGPT’s reasoning model does “a pretty good” job of transparently showing “how it’s thinking about doing things and where it’s going and why it’s making certain decisions.” The expert argues that AI agents need the same kind of transparency, so users can understand how decisions are made.
With more transparency from AI tools and the right education, people can better grasp this technology’s capabilities and limitations — leading them to use it wisely, question it thoughtfully, and trust it when it’s earned.