Trust in Generative AI: A Fragmented Landscape

OpenAI released ChatGPT in late 2022, almost two and a half years ago. Since then, generative AI tools have rapidly evolved from experimental novelties to everyday assistants, shaping how people work, learn, and even seek emotional support.

On the corporate side, early fears around the security of genAI led nearly a third of companies to introduce bans on its use. But by 2025, only 6% of firms still maintained those bans — a shift driven not only by growing institutional trust but also by AI providers making their tools more enterprise-friendly.

But has the rise of generative AI, and all the excitement around it, changed public trust in AI?

Global Trust Levels: A Mixed Bag

Public trust in artificial intelligence has not improved as much as many had hoped. Instead, we’re seeing fragmented sentiment that varies by geography, demographics, and use cases.

Geographic Differences 

Trust in AI is highest in India (77%), with China not far behind at 72%. Meanwhile, in the United States, only 32% of adults say they trust AI. Trust levels are even lower in Canada (30%), Germany (29%), the Netherlands (29%), the United Kingdom (28%), Australia (25%), and Ireland (24%).

In the United States, views are nuanced, varying by demographics and use case.

Demographic Differences

Sector-Specific Sentiment

Public trust in AI varies substantially across industries and use cases.

Healthcare

In healthcare, public sentiment shows cautious optimism.

Yet this trust is inconsistent. Another survey, from September 2024, found that only 25% of customers feel confident using AI chatbots for medical information. And when it comes to health insurance, things look worse: just 30% of insured Americans said they trust AI to evaluate and approve claims, while 55% actively distrust it.

Financial Services

Customers show higher trust in AI for tasks like fraud detection and prevention (77%) but are more cautious about AI-generated financial information and advice, which only 27% say they trust. Among types of advice, that placed below travel information (37%) but slightly ahead of medical information (25%).

Legal Services

Only 12% feel comfortable with AI making legal decisions, even though 35% believe it will soon take over most of the tasks legal professionals perform. This highlights an important gap: AI’s capabilities are advancing faster than public trust.

Customer Experience

Despite widespread implementation, generative AI earned the lowest average customer experience (CX) score of any emerging technology in 2024. GenAI services received a CX score of 69.5, with the broader AI category only slightly higher at 71.5, also below the average for emerging tech. These low scores likely reflect early-stage growing pains, as many enterprises are still figuring out how to integrate genAI tools effectively and deliver consistent, tangible value. 

Hiring

Confidence in AI hiring tools is rising — at least among HR professionals. According to a 2025 HireVue report, trust in AI systems for hiring decisions grew from 37% in 2024 to 51% in 2025. But while adoption is rising inside companies, job candidate trust is more complex.

79% of candidates say they want transparency about AI’s role in hiring, and 30% are concerned it could replace the human factor entirely. That said, about half of candidates say they’d be comfortable applying to a job where AI is used to support hiring decision-making — suggesting openness when AI augments, rather than replaces, human judgment.

Public Sector and Institutional Concerns

Trust in AI is especially sensitive within the public sector. A 2025 Amazon Web Services survey found that:

  • 83% of public sector organizations are concerned about public trust in genAI.
  • 48% cited data privacy and security as their top concern.
  • 94% said explainability — understanding how an AI system arrived at a result — is essential.

Transparency also matters to consumers. When asked how businesses could reduce concerns about AI, 57% of Americans said companies should be transparent about how AI is used in their business practices.

Building Trust in the Next Generation of AI

Demographics and geography clearly play a role in attitudes toward AI. Younger, wealthier, and more educated individuals tend to trust AI more. While this raises concerns about an emerging “AI divide,” it also suggests that increased access to information and a better understanding of the technology may help build trust. 

Countries in the developing world also display higher levels of optimism and trust in AI. This trend may reflect broader trust in institutions: China and India, for example, report trust in business and government above 79%, compared with just 55% and 41%, respectively, in the United States. India’s significantly younger population may also contribute to its more optimistic outlook.

The next wave of generative AI is already taking shape — with autonomous agents, deep research copilots, and more proactive, context-aware tools. These systems won’t just answer questions — they’ll anticipate needs, act on a user’s behalf, and integrate more deeply into personal and professional life. 

Whether this inspires greater confidence or deepens existing concerns will depend on how these technologies are built and communicated. Lasting trust in AI will require not just better, more accurate large language models but also greater transparency and public understanding of AI systems. The good news: with reasoning LLMs, we can begin to see what the models are “thinking.”

As a former Chief Architect at Microsoft noted in an AlphaSense transcript, ChatGPT’s reasoning model does “a pretty good” job of transparently showing “how it’s thinking about doing things and where it’s going and why it’s making certain decisions.” The expert highlighted the need for the same transparency in AI agents, so that users can understand how their decisions are made.

With more transparency from AI tools and the right education, people can better grasp this technology’s capabilities and limitations — leading them to use it wisely, question it thoughtfully, and trust it when it’s earned.

ABOUT THE AUTHOR
Sarah Hoffman
Director of Research, AI

Sarah Hoffman is Director of Research, AI at AlphaSense, where she explores artificial intelligence trends that will matter most to AlphaSense’s customers. Previously, Sarah was Vice President of AI and ML Research for Fidelity Investments, led FactSet’s ML and Language Technology team, and worked as an Information Technology Analyst at Lehman Brothers. With a career spanning two decades in AI, ML, natural language processing, and other technologies, Sarah’s expertise has been featured in The Wall Street Journal, CNBC, VentureBeat, and on Bloomberg TV. Sarah holds a master’s degree from Columbia University in computer science with a focus on natural language processing, and a B.B.A. from Baruch College in computer information systems. Sarah is based in New York.
