NVIDIA Corp Earnings - Q4 Fiscal 2026 Analysis & Highlights

NVIDIA's Q4 fiscal 2026 earnings call highlighted record financial performance driven by exceptional data center demand. Management emphasized the inflection point of agentic AI and the company's strategic positioning across the entire AI infrastructure stack, and provided optimistic guidance for continued sequential growth throughout calendar 2026 while acknowledging supply constraints in gaming and potential challenges in China.

Key Financial Results

  • Total revenue of $68 billion in Q4 2026, representing 73% year-over-year growth and accelerating from Q3.
  • Record sequential data center revenue growth of $11 billion across diverse customer segments including cloud providers, hyperscalers, AI model makers, enterprises, and sovereign nations.
  • Full-year data center revenue of $194 billion, up 68% year-over-year, with the data center business scaled by nearly 13x since the emergence of ChatGPT in fiscal 2023.
  • GAAP gross margin of 75% and non-GAAP gross margin of 75.2%, increasing sequentially as Blackwell continued to ramp.
  • Free cash flow of $35 billion in Q4 and $97 billion in fiscal year 2026.
  • $41 billion returned to shareholders in fiscal 2026, representing 43% of free cash flow through share repurchases and dividends.
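
The growth rates above imply the prior-year baselines, which the call did not restate. A back-of-the-envelope sketch (the implied figures are derived here, not reported):

```python
# Back-of-the-envelope check on the headline growth figures above.
# Dollar amounts in billions; growth rates as decimals.

q4_revenue = 68.0        # Q4 FY2026 total revenue ($B)
yoy_growth = 0.73        # 73% year-over-year growth

# The prior-year base that grows 73% to reach $68B.
implied_prior_q4 = q4_revenue / (1 + yoy_growth)

fy_data_center = 194.0   # full-year FY2026 data center revenue ($B)
dc_yoy = 0.68            # 68% year-over-year growth
implied_prior_fy_dc = fy_data_center / (1 + dc_yoy)

print(f"Implied Q4 FY2025 total revenue: ~${implied_prior_q4:.1f}B")
print(f"Implied FY2025 data center revenue: ~${implied_prior_fy_dc:.1f}B")
```

This puts implied prior-year Q4 revenue near $39B and prior-year data center revenue near $115B, consistent with the stated growth rates.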

Business Segment Results

  • Data Center revenue of $62 billion in Q4, up 75% year-over-year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra ramp.
  • Networking revenue of $11 billion in Q4, up more than 3.5x year-over-year, with full-year networking revenue exceeding $31 billion, up more than 10x compared to fiscal 2021 when Mellanox was acquired.
  • Gaming revenue of $3.7 billion, up 47% year-over-year, driven by strong Blackwell demand and improved supply.
  • Professional Visualization revenue of $1.3 billion, up 159% year-over-year and 74% sequentially, crossing the $1 billion quarterly mark for the first time.
  • Automotive revenue of $604 million, up 6% year-over-year, driven by robust demand for self-driving solutions.
  • Sovereign AI business more than tripled year-over-year to over $30 billion, driven primarily by customers based in Canada, France, the Netherlands, Singapore, and the UK.
  • Physical AI contributed north of $6 billion in NVIDIA revenue in fiscal year 2026.

Capital Allocation

  • Share repurchases and dividends totaling $41 billion in fiscal 2026, representing 43% of free cash flow.
  • Strategic inventory and capacity secured extending into calendar 2027 to address future demand, with inventory growing 8% quarter-over-quarter and purchase commitments increasing significantly.
  • Continued investment in technology and ecosystem to cultivate market development and drive long-term growth.
  • Annual R&D budget approaching $20 billion, fueling pace of innovation and extreme co-design across compute, networking, chips, systems, algorithms, and software.

Industry Trends and Dynamics

  • Fundamental platform shift from classical machine learning to generative AI, with strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI including search, ad generation, and content recommender systems.
  • Frontier agentic systems have reached an inflection point, with Claude Code, Claude Cowork, and OpenAI Codex achieving useful intelligence and adoption skyrocketing.
  • Inference deployments growing in addition to training, with demand for the Blackwell architecture, extreme co-designed at data center scale, continuing to strengthen.
  • Nearly nine gigawatts of Blackwell infrastructure deployed and consumed by major cloud service providers, hyperscalers, AI model makers, and enterprises.
  • Analysts' expectations for 2026 CapEx across the top five cloud providers and hyperscalers (which account for a little over 50% of NVIDIA's data center revenue) are up nearly $120 billion since the start of the year and approaching $700 billion.
  • Every data center is power constrained, with customers making critical architectural decisions based on performance per watt to maximize AI factory revenue.
  • Robotaxi rides growing exponentially with commercial fleets from Waymo, Tesla, Uber, WeRide, and Zoox expected to scale from thousands of vehicles in 2025 to millions over the next decade.

Competitive Landscape

  • NVIDIA declared "Inference King" by SemiAnalysis, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper.
  • Continuous optimization of CUDA software delivered up to five times better performance on GB200 NVL72 within four months.
  • NVIDIA produces the lowest cost per token and data centers running on NVIDIA generate the highest revenues.
  • Pace of innovation at NVIDIA's scale is unmatched, with a commitment to deliver X-factor leaps in performance per watt every generation and extend its leadership position over the long term.
  • Hopper and six-year-old Ampere-based products are sold out in the cloud, demonstrating sustained demand for NVIDIA infrastructure.
  • Competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long-term.
  • NVLink scale up fabric has revolutionized computing and demonstrates the power of extreme co-design across all chips of the supercomputer and the full stack.

Macroeconomic Environment

  • Transition of classic data center workloads to GPU accelerated computing and use of AI to enhance hyperscale workloads expected to contribute toward roughly half of NVIDIA's long-term opportunity.
  • Every country will build and operate some part of its AI infrastructure, just as with electricity and the Internet today.
  • Long-term sovereign opportunity expected to grow at least in line with the AI infrastructure market, as countries spend on AI proportional to their GDP.
  • H200 products for China-based customers: While small amounts were approved by the US government, NVIDIA has yet to generate any revenue and does not know whether any imports will be allowed into China.
  • America must engage every developer and be the platform of choice for every commercial business, including those in China, to sustain its leadership position in AI compute.

Growth Opportunities and Strategies

  • Vera Rubin platform unveiled at CES, comprising six new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch.
  • Rubin platform will train mixture-of-experts (MoE) models with one-fourth the number of GPUs and reduce inference token costs by up to 10x compared to Blackwell.
  • First Vera Rubin samples shipped to customers, with production shipments expected to commence in the second half of the year.
  • Rubin will deliver improved resiliency and serviceability relative to Blackwell based on its modular cable-free tray design.
  • Every cloud model builder expected to deploy Vera Rubin.
  • RTX PRO 5000 Blackwell Workstation launched with 72 gigabytes of fast memory for AI developers running LLMs and agentic workflows.
  • Alpamayo introduced at CES, the world's first open portfolio of reasoning vision-language-action models, simulation blueprints, and data sets enabling vehicles that can think.
  • First passenger car featuring Alpamayo built on NVIDIA Drive will be on the road soon in the new Mercedes-Benz CLA.
  • New NVIDIA Cosmos and Isaac GR00T open models and frameworks announced to advance robotics development with leading companies including Boston Dynamics, Caterpillar, Franka Robotics, LG Electronics, and NEURA Robotics.
  • Expanding partnerships with Dassault Systèmes, Siemens, and Synopsys to bring NVIDIA AI infrastructure, Omniverse digital twins, world models, and CUDA-X libraries to millions of researchers, designers, and engineers.
  • Strategic investments in frontier model makers including $10 billion investment in Anthropic and ongoing partnerships with OpenAI, Meta, and xAI.
  • Non-exclusive licensing agreement with Groq for its low-latency inference technology, with Groq's team of engineers welcomed to NVIDIA.
  • NVIDIA's ecosystem has expanded beyond a computing platform: the company is now an AI infrastructure company with computing platforms across every layer of the stack.
  • Investments focused on expanding and deepening ecosystem reach across AI for language, physical AI, AI physics, biology, robotics, and manufacturing.

Financial Guidance and Outlook

  • Total revenue expected to be $78 billion, plus or minus 2% for Q1 fiscal 2027, with most growth driven by Data Center.
  • No Data Center compute revenue from China assumed in the outlook, consistent with last quarter.
  • GAAP and non-GAAP gross margins expected to be 74.9% and 75%, respectively, plus or minus 50 basis points for Q1 fiscal 2027.
  • Full-year gross margins expected to remain in the mid-70s.
  • GAAP and non-GAAP operating expenses expected to be approximately $7.7 billion and $7.5 billion, respectively for Q1 fiscal 2027, including stock-based compensation expense of $1.9 billion.
  • Full-year non-GAAP operating expenses expected to grow in the low-40% range year-over-year as NVIDIA continues to invest in its expanding opportunity set.
  • GAAP and non-GAAP tax rates expected to be between 7% and 19% for full year fiscal 2027, excluding discrete items and material changes to tax environment.
  • Sequential revenue growth expected throughout calendar 2026, exceeding the $500 billion Blackwell and Rubin revenue opportunity shared previously.
  • Supply constraints expected to be a headwind to gaming in Q1 and beyond, though end demand remains strong and channel inventory levels are healthy.
  • Gaming growth potential dependent on supply improvements, with year-over-year growth possible if supply constraints ease by end of year.
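
The guidance midpoints and tolerances above can be translated into explicit ranges; a minimal sketch using only the figures stated in the outlook:

```python
# Convert guided midpoints plus tolerances into explicit low/high ranges.

def pct_range(midpoint, tolerance):
    """Return (low, high) for a midpoint guided as 'plus or minus' a percentage."""
    return midpoint * (1 - tolerance), midpoint * (1 + tolerance)

# Q1 FY2027 total revenue: $78B plus or minus 2%.
rev_lo, rev_hi = pct_range(78.0, 0.02)   # in $B

# Non-GAAP gross margin: 75.0% plus or minus 50 basis points (additive, not
# multiplicative, since basis points adjust the percentage itself).
gm_lo, gm_hi = 75.0 - 0.50, 75.0 + 0.50  # in percentage points

print(f"Revenue range: ${rev_lo:.2f}B to ${rev_hi:.2f}B")
print(f"Non-GAAP gross margin range: {gm_lo:.1f}% to {gm_hi:.1f}%")
```

So the revenue outlook spans roughly $76.4B to $79.6B, and the non-GAAP gross margin outlook spans 74.5% to 75.5%.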

Strategic Positioning and Ecosystem

  • NVIDIA positioned as the only accelerated computing platform in every cloud, available through every single computer maker, available at the edge, and cultivating telecommunications.
  • 1.5 million AI models on Hugging Face all run on NVIDIA CUDA, with open source representing the second largest model ecosystem after OpenAI.
  • Diversity of customer base is one of NVIDIA's greatest strengths, spanning cloud providers, AI model makers, enterprises, supercomputing, sovereigns, and other diverse customer types.
  • CUDA architecture unquestionably more effective and efficient than any competing computing architecture, delivering more performance per flop and per watt.
  • Architecture compatibility across GPU generations allows NVIDIA to invest enormously in software engineering and optimization, benefiting entire installed base across generations.

Compute Economics and AI Infrastructure

  • In the new world of AI, compute equals revenues, with compute directly translating to intelligence and revenue growth.
  • Inference equals revenues for customers: tokens generated by agents are profitable, creating extreme urgency to scale up compute.
  • Agentic systems generate thousands to hundreds of thousands of tokens per task, multiplying inference demand well beyond single-response chat.
  • Performance per watt is critical metric, with NVLink 72 enabling 50 times better performance per watt and 35 times lower cost per token compared to previous generation.
  • Data center CapEx expected to reach $3 trillion to $4 trillion by 2030, driven by token generation requirements and agentic AI inflection.
  • World investing approximately $300 billion to $400 billion annually in classical computing, while AI workloads demand roughly a thousand times more computation.
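
The performance-per-watt point above reduces to simple arithmetic: in a power-constrained facility, tokens per joule is the binding metric, so a generational efficiency gain translates directly into higher throughput and lower energy cost per token at the same power budget. An illustrative, normalized sketch (the baseline and power budget are assumptions; only the 50x ratio comes from the text above):

```python
# Illustrative model of why performance per watt is the binding metric in a
# power-constrained AI factory. Baseline and power budget are normalized
# assumptions, not disclosed figures; the 50x gain is the ratio cited above.

facility_power_watts = 1e6        # fixed power budget (assumption: 1 MW)
baseline_tokens_per_joule = 1.0   # normalized prior-generation efficiency
perf_per_watt_gain = 50.0         # NVL72 vs. prior generation, per the text

# Tokens per second = watts (joules per second) x tokens per joule.
baseline_throughput = facility_power_watts * baseline_tokens_per_joule
new_throughput = facility_power_watts * (baseline_tokens_per_joule
                                         * perf_per_watt_gain)

# At a fixed power budget, throughput (and hence AI factory revenue
# potential) scales with tokens per joule, and energy cost per token
# falls by the same factor.
throughput_gain = new_throughput / baseline_throughput
energy_cost_per_token_ratio = 1.0 / perf_per_watt_gain

print(f"Throughput gain at fixed power: {throughput_gain:.0f}x")
print(f"Energy cost per token vs. baseline: {energy_cost_per_token_ratio:.2f}x")
```

Note the cited 35x total-cost-per-token figure is smaller than the 50x energy figure, since total cost also includes capital and other fixed costs that do not scale with efficiency.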