
Beyond Efficiency: Redefining AI Metrics for a Hyper-Personalized Era

By Sarah Hoffman, Director of AI Thought Leadership | March 5, 2026

A few years ago, while writing a report on healthcare and AI, I asked ChatGPT to review my outline and identify gaps. One recommendation stood out: add a section on mental health and AI.

That suggestion did not save me time — quite the opposite. It expanded the scope of the report, forced me to research a new area, and added days of work. By most traditional metrics, AI made me less productive. But it also made the report stronger. It surfaced a gap I had not fully considered.

How can we measure AI’s impact outside of simple time efficiency? This question is becoming more urgent as generative AI systems move from experimentation to execution. AlphaSense data shows increasing mentions in earnings transcripts of phrases like “ROI,” “impact,” and “AI investments,” suggesting that companies that began experimenting with generative AI in 2025 are now anticipating tangible business impact this year.

Keyword analysis of earnings transcripts capturing “AI” within close proximity to “ROI” or “impact,” and direct mentions of “AI investments.” Source: AlphaSense
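The proximity analysis described above can be sketched in a few lines. The function below is a hypothetical illustration of matching an anchor term within a token window of target terms; it is not AlphaSense's actual methodology, and the window size and tokenization rules are assumptions.

```python
import re

def mentions_in_proximity(text, anchor="AI", targets=("ROI", "impact"), window=10):
    """Return True if `anchor` appears within `window` tokens of any target term.

    A simplified stand-in for transcript keyword-proximity analysis;
    the real methodology is not public.
    """
    tokens = re.findall(r"[A-Za-z']+", text)
    anchor_positions = [i for i, t in enumerate(tokens) if t.upper() == anchor.upper()]
    target_set = {t.lower() for t in targets}
    for i in anchor_positions:
        lo, hi = max(0, i - window), i + window + 1
        if any(t.lower() in target_set for t in tokens[lo:hi]):
            return True
    return False

snippet = "We expect our AI investments to deliver measurable ROI this year."
print(mentions_in_proximity(snippet))  # True
```

Run over a corpus of transcripts by quarter, counts of matching documents would produce the kind of trend line shown in the chart.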

According to Deloitte’s 2026 State of AI in the Enterprise report, a widening performance divide is emerging between companies that treat AI as core to strategy and those that view it primarily as a cost-saving tool. Yet most AI metrics still revolve around efficiency. That works when AI is automating a stable task. But sometimes AI broadens perspective, challenges an implicit assumption, and improves the quality of the outcome or decision-making.

Complicating matters further, research shows that “shadow AI,” the unsanctioned use of consumer AI tools by employees, often delivers stronger perceived ROI than formal enterprise initiatives. And as AI becomes more personalized and proactive, its impact becomes even harder to measure.

Measuring Hyper-Personalized AI

Today, AI is moving beyond role-based personalization. Hyper-personalized AI adapts to an individual’s goals, questions, and past interactions. Two people can use the same system and receive different outputs, and both can have successful experiences.

For example, I can create an AlphaSense custom workflow agent that sends me a daily news alert tailored to the specific industries and topics I care about, formatted in a way that fits how I work. This is personalized to me, by me.

We already see signs that hyper-personalized AI is reshaping user behavior. At AlphaSense, around 40% of the users who created custom agents engaged with the scheduled agent alert. Users are investing time to configure personalized systems and returning to them consistently. That suggests real perceived value and integration into daily decision workflows.

Adoption metrics are meaningful indicators. But they are only part of the story. Are these personalized alerts expanding perspective, improving analysis, and leading to stronger decisions over time?

Can we capture measures such as:

  • Did the AI surface information the user was not already considering?
  • Did it influence the direction or quality of a decision?
  • Did it increase confidence about that decision?

In structured domains, this is easier. Tools like GitHub Copilot can track suggestion acceptance rates or measure how often generated code is incorporated into production. But in areas like research, strategy, or investment analysis, value is less straightforward. An AI system might surface a weak signal that reshapes a thesis days later, or introduce a risk that prevents a flawed decision. These are harder to quantify, but they are closer to the actual value of hyper-personalization.
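In structured domains the arithmetic is simple. The sketch below shows acceptance and survival-to-production rates computed from a hypothetical event log; the `SuggestionEvent` schema is invented for illustration and does not reflect GitHub Copilot's actual telemetry.

```python
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    # Hypothetical telemetry record, not a real product schema.
    accepted: bool
    survived_to_production: bool

def acceptance_rate(events):
    """Fraction of all suggestions the user accepted."""
    return sum(e.accepted for e in events) / len(events) if events else 0.0

def production_rate(events):
    """Fraction of accepted suggestions still present in production code."""
    accepted = [e for e in events if e.accepted]
    return (sum(e.survived_to_production for e in accepted) / len(accepted)
            if accepted else 0.0)

log = [
    SuggestionEvent(True, True),
    SuggestionEvent(True, False),
    SuggestionEvent(False, False),
    SuggestionEvent(True, True),
]
print(acceptance_rate(log))              # 0.75
print(round(production_rate(log), 2))    # 0.67
```

The point of the contrast in the paragraph above is that no analogous two-line computation exists for "this alert reshaped my thesis a week later."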

Measuring Breadth, Not Just Relevance

There is another dimension that metrics often ignore entirely: risk.

Highly personalized systems can quietly narrow what people see. When AI optimizes only for relevance based on past behavior, it can reduce exposure to unfamiliar ideas or dissenting views. A system that feels perfectly aligned with a user’s preferences may be reinforcing blind spots while appearing highly effective by engagement metrics. Great advisors don’t just help you find what you’re looking for; they tell you why you’re looking for the wrong thing.

To address this, metrics for personalized AI need to account for exposure and exploration. That might mean tracking:

  • Source Diversity: The diversity of sources or perspectives surfaced over time.
  • Perspective Expansion: Whether the system strengthens decision-making by bringing forward signals that would otherwise be overlooked.
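The Source Diversity idea above can be made concrete with a standard dispersion measure. The sketch below uses normalized Shannon entropy over the sources surfaced to a user; this is one reasonable choice among several (a Gini or Simpson index would also work), not a prescribed metric.

```python
from collections import Counter
from math import log2

def source_diversity(sources):
    """Normalized Shannon entropy of the sources surfaced to a user.

    Returns 0.0 when every item comes from one source and 1.0 when
    items are spread evenly across all sources seen.
    """
    counts = Counter(sources)
    n = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return entropy / log2(len(counts))  # normalize to [0, 1]

# Illustrative feeds: one skewed toward a single source, one evenly spread.
skewed = ["news_wire"] * 9 + ["filings"]
even = ["news_wire", "filings", "transcripts", "broker_research"] * 3
print(round(source_diversity(skewed), 2))  # 0.47
print(source_diversity(even))              # 1.0
```

Tracked over time, a downward drift in this score would be an early warning that personalization is quietly narrowing what the user sees.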

A healthy hyper-personalized system should occasionally surprise the user. To capture this, companies can track what changes after exposure: a new watchlist, for example. And instead of a simple thumbs up, feedback could ask, “Did this surface something new or influence your thinking?”

The Shift from Efficiency to Judgment

If we define ROI narrowly, in terms of hours saved or tasks automated, we risk optimizing for the wrong outcome. Hyper-personalized AI reshapes what individuals see, question, and prioritize. It influences what gets elevated to the decision table and what quietly falls away.

The more personalized and proactive these systems become, the more they function as cognitive infrastructure. They filter information, surface signals, and frame context before a human ever makes a call.

That means measurement must move beyond efficiency and toward judgment. Did it surface risks earlier? Did it challenge an assumption that would otherwise have gone unexamined? Did it strengthen conviction for the right reasons?

These questions are harder to quantify. But as AI increasingly mediates how individuals engage with information, the metrics we choose will quietly shape the systems we build. On the individual level, the future of work will depend on how well people collaborate with intelligent systems. At the organizational level, it will depend on how intentionally we measure their impact.


About the Author
  • Sarah Hoffman, Director of AI Thought Leadership

    Sarah Hoffman is Director of AI Thought Leadership at AlphaSense, where she explores artificial intelligence trends that will matter most to AlphaSense’s customers. Previously, Sarah was Vice President of AI and ML Research for Fidelity Investments, led FactSet’s ML and Language Technology team and worked as an Information Technology Analyst at Lehman Brothers. With a career spanning two decades in AI, ML, natural language processing, and other technologies, Sarah’s expertise has been featured in The Wall Street Journal, CNBC, VentureBeat, and on Bloomberg TV. Sarah holds a master's degree from Columbia University in computer science with a focus on natural language processing, and a B.B.A. from Baruch College in computer information systems. Sarah is based in New York.
