AlphaSense has remained steadfast in its mission to radically improve how professionals make business decisions through the power of AI. As we look toward the future of an AI-enabled world, we spoke with two titans of the tech and financial worlds: Eric Schmidt, Co-Founder of Schmidt Futures and former CEO and Chairman of Google, and David Solomon, Chairman and CEO of Goldman Sachs, about the rapid growth and business implications of generative AI (genAI).
In the interview below, Solomon and Schmidt discuss the wider world of generative AI and its impact on business and society as a whole. They answer some of the most pressing questions in the genAI space: Which industries are ripe for disruption? What are the risks and unknowns of leaning on genAI for information? What changes in societal behavior do they foresee?
The Q&A below has been lightly edited for clarity; you can watch the video of the entire conversation at alpha-sense.com/genai.
David Solomon: Many AI experts believe that building bigger large language models (LLMs) won’t yield significant performance gains. Do you agree?
Eric Schmidt: Well, the [broader AI] industry doesn’t. What the industry’s doing today is spending about $100 million on training frontier models, of which there are currently four: OpenAI, Google, Anthropic, and Inflection. And there are more coming.
The leaders of those initiatives have told me their next scale goal is $1 billion in training runs, most of which goes to electricity. This has never happened in my experience and has never happened in my industry. The scale of money, capital, and hardware—something that you understand extremely well—is new to us.
Solomon: Does it scale linearly? If you invested $10 billion more into GPT-4, for example, would you guess it gets 10% better or 100 times better? How does performance scale with the capital dollars spent?
Schmidt: People believe that we’ll go from $100 million to $1 billion to $10 billion, and with that, much more subtlety, better reasoning, and better intelligence. So if you believe there’s a return on that difference in intelligence, insight, and so forth, you can see it.
You can see it in the difference between ChatGPT, which is essentially GPT-3.5, and GPT-4. Ask each the same question and compare the answers; you can see the progression. OpenAI has said that GPT-5 and GPT-6 will arrive on the order of 18 months to two years apart. That’s the nature of the cost of the training.
Solomon: If you looked out five years from now, what sort of LLMs do you think most businesses will be using?
Schmidt: Let’s distinguish between humanity and businesses. Some AI companies are valued at many, many billions of dollars with no revenue plan whatsoever.
With respect to these frontier models, they’re so interesting because it’s a new form of intelligence and we don’t know how they’ll be used. Will they be used by consumers or governments and businesses?
I recently published an article arguing that, for a reason, the majority of business uses will not run on these frontier LLMs. The uses may not be obvious. In your business here at the bank, do you really want your system learning something new every day? Can you imagine the regulators calling up and saying, “Oh, it learned something! Oh, my god! It just learned a new negative interest rate, and that’s not legal.”
Ultimately, businesses are not going to be tolerant of emergent knowledge and unpredictability. You’re going to want open-source models. The most well-known one right now is Llama. There is BLOOM, from a group coordinated by Hugging Face. And there are a number of others coming.
You’ll tailor AI use cases to your business problem. Let’s think about a bank. How many different uses do you have for AI? Customer service, writing letters back and forth to regulators, marketing, and writing press releases. All those make sense for generative AI, but you don’t want it randomly inventing new languages and new math in the middle of your business operations.
You’ll use them, but you’ll probably use smaller, free ones.
Solomon: How do you think about the economic flow to all these different scenarios? How do you think about the breadth of business applications, and how do you prioritize?

Schmidt: Well, you’re a CEO, so what do you care about? Revenue over expenses. Most people, when they look at these tools tactically, say, “Let’s improve our customer service. That’s great. Let’s lower our operating costs and our inefficiency. Let’s get our G&A smaller.” All of those are true. That’s normal automation.
You want more revenue. Doesn’t everyone? Everyone listening to this wants more revenue. How do you do it?
Solomon: More productivity.
Schmidt: More productivity, but fundamentally revenue. So how do we get more revenue? We have more products. We have more customers. We have more messaging. We have more demand. We create that. My favorite example is at Google. I helped build an industry where everyone has to generate their own ads. Why in the world do I have to generate the ad? Why can’t I have the computer generate an ad, and the computer knows how to score my ad?
Right now it just says, “This idea from Eric sucks, and this idea is brilliant.” Instead, I want it to build a great ad.
I’ll give you another example. I want to create a viral tweet. However, I don’t really know how to do that, so I write my tweet in my own language and it’s a dud. Why can’t I tell the system, “This is what I want to express,” and have it design the most viral tweet to get my point across? You get the idea. So the ability for the computer and a human to work together to have an impact and spread that impact drives revenue. That’s the best way to think about generative AI.
The above conversation concludes the first part of our three-part blog series. Stay tuned next week for the second part of this riveting interview: GenAI: What’s Next? with Eric Schmidt and David Solomon.