
NVIDIA (NVDA)

NVIDIA is the dominant supplier of accelerated computing infrastructure for the AI era — the company whose GPUs train and run nearly every commercially significant large language model and AI service. Its market capitalization grew from roughly $360 billion at the end of 2022 to over $4 trillion by mid-2025, the fastest multi-trillion-dollar accumulation in stock-market history.

mag-7 · semiconductors · ai-infrastructure · data-center · cuda

The picks-and-shovels company of the AI era. NVIDIA doesn't build AI models — its customers do — but every commercially significant model is trained and run on NVIDIA hardware, and the company captures roughly 30-50 cents of revenue for every dollar of compute spent in the AI infrastructure boom. The financial scale of this dominance is unprecedented: NVDA went from a roughly $360 billion company at the end of 2022 to over $4 trillion by mid-2025, gaining more than $3.5 trillion in market capitalization in less than three years. There is no precedent for this rate of corporate value creation in equity market history.


What it measures

NVDA is the equity of NVIDIA Corporation, a Santa Clara-based semiconductor company.

The dashboard toggle between Price and Market Cap shows two views of the same company. Price view (in the $130-180 range as of 2025, after the June 2024 split) is useful for options pricing and per-share analysis. Market cap view ($3-4 trillion range as of 2025) shows the total scale of the business and its position in the market-cap rankings: currently top 3 globally, sometimes #1.

NVDA's stock has had two recent splits: a 4-for-1 in July 2021 and a 10-for-1 in June 2024. The 10-for-1 brought the share price from approximately $1,200 to $120, the basis for current price levels. We track via NVDA on Yahoo Finance.
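Comparing historical quotes requires dividing by the cumulative split factor. A minimal sketch using the two splits above (effective dates are approximate; the function name is illustrative):

```python
# Adjust a historical NVDA price quote to the modern (post-June-2024) share basis.
# Split history from the text: 4-for-1 in July 2021, 10-for-1 in June 2024.
from datetime import date

SPLITS = [
    (date(2021, 7, 20), 4),   # 4-for-1
    (date(2024, 6, 10), 10),  # 10-for-1
]

def split_adjust(price: float, quote_date: date) -> float:
    """Divide a historical price by every split that occurred after it was quoted."""
    factor = 1
    for split_date, ratio in SPLITS:
        if quote_date < split_date:
            factor *= ratio
    return price / factor

# The ~$1,200 pre-split price from early June 2024 maps to ~$120 on the modern basis.
print(split_adjust(1200.0, date(2024, 6, 1)))   # 120.0
# The end-of-2022 close of ~$146 maps to ~$14.60.
print(split_adjust(146.0, date(2022, 12, 30)))  # 14.6
```

A quote from before July 2021 would be divided by the full 40x factor, which is why early NVDA charts look vanishingly small on a modern price axis.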

Why it matters

Two angles.

The AI-capex-bellwether angle. Every quarter, NVDA's earnings call is the closest thing the AI industry has to a state-of-the-union address. The company's reported data-center revenue and forward guidance set expectations for hyperscaler capex spending; the customer commentary (which hyperscalers are accelerating, which are pausing) is followed obsessively; the announced product roadmap (H100 → H200 → Blackwell → Rubin) shapes capacity planning across the industry. NVDA's stock is therefore one of the cleanest single reads on the strength of the AI infrastructure investment cycle.

The concentration-risk angle. As of 2025, NVDA alone accounts for roughly 6-7% of the S&P 500 by weight, more than 9% of the Nasdaq 100, and close to 50% of all gains in the S&P 500 in some quarters of 2023-2024. Any portfolio that holds passive index funds has substantial implicit exposure to NVIDIA-specific risk: AI capex slowdown, customer-concentration risk, regulatory action, or simple multiple compression on a forward P/E that's elevated by historical standards. The 2023-2024 rally that lifted broad equity benchmarks was substantially NVDA's individual story; a reversal would similarly drag indices.

What moves it, and what it moves

Moves NVDA:

- Hyperscaler capex guidance from Microsoft, Amazon, Google, and Meta (their own earnings calls are the most reliable forward indicators of NVDA demand)
- NVDA's quarterly data-center revenue and forward guidance
- Competitive product launches from AMD and hyperscaler custom silicon
- US-China export control developments

NVDA moves:

- AI-adjacent equities broadly: above-consensus guidance rallies the whole complex, below-consensus guidance sells it off
- Broad benchmarks directly through index weight: roughly 6-7% of the S&P 500 and more than 9% of the Nasdaq 100 as of 2025
- Capacity planning across the semiconductor industry via its announced product roadmap

A worked example: the 2023-2024 vertical move

NVDA closed 2022 at approximately $146 (pre-split) with a market cap around $365 billion. ChatGPT had been released a month earlier (November 30, 2022), but Wall Street hadn't yet priced in the full implications for hyperscaler AI capex.

The inflection point: the May 24, 2023 earnings call. NVDA reported Q1 fiscal-2024 revenue of $7.2 billion (vs. $6.5 billion expected) and guided next-quarter revenue to $11.0 billion — a number more than 50% above prior consensus. The forward guidance implied that hyperscaler AI capex was inflecting massively higher, with NVDA capturing essentially all of the incremental spending.

The next day, NVDA closed +24.4%, adding approximately $200 billion in market cap in a single session — at the time, the largest one-day market cap gain in any stock's history. The company crossed $1 trillion in market cap on May 30, 2023, the first chip company to reach that milestone.
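The surprise math from that call, as a quick sketch (the prior-consensus figure is inferred from the "more than 50% above" claim in the text):

```python
# Quantify the May 2023 surprise: beat on reported revenue, and guidance vs. consensus.
reported_rev = 7.2       # Q1 FY2024 revenue, $B
expected_rev = 6.5       # consensus estimate, $B
guided_rev = 11.0        # next-quarter guidance, $B
prior_consensus = 7.2    # approximate, implied by the >50% figure above, $B

beat = reported_rev / expected_rev - 1
guide_surprise = guided_rev / prior_consensus - 1
print(f"revenue beat: {beat:+.1%}")                     # +10.8%
print(f"guidance vs consensus: {guide_surprise:+.1%}")  # +52.8%
```

The reported beat was solid but unremarkable; it was the guidance surprise, several multiples larger, that repriced the stock.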

From there the acceleration continued: $2 trillion in February 2024, $3 trillion in June 2024, and above $4 trillion in July 2025, the first company to reach that level. The cumulative move from January 2023 to mid-2025 was roughly tenfold, turning the split-adjusted $14.60 year-end-2022 close (pre-split: $146) into the $140-165 range.

The financial fundamentals justified much of this. NVDA's data-center revenue went from $15 billion in fiscal 2023 to over $100 billion annualized by fiscal 2025, a 6-7x increase in roughly 24 months. Operating margins expanded from roughly 30% to 60%+. EPS grew over 600% in the same period. The stock's forward multiple actually compressed slightly over the rally: much of the price action was earnings realization, not multiple expansion.
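The earnings-realization claim rests on a simple identity: price equals earnings times the multiple, so a price move factors into earnings growth times multiple change. A sketch with hypothetical round numbers (not exact reported figures):

```python
# Decompose a price move into earnings growth and multiple change:
# P = E * PE, so P1/P0 = (E1/E0) * (PE1/PE0).
def decompose(price_ratio: float, eps_ratio: float) -> float:
    """Return the implied multiple-change factor given price and EPS ratios."""
    return price_ratio / eps_ratio

# Illustrative: if price rises ~8.5x while EPS grows ~9x, the multiple
# actually contracted slightly (factor < 1) -- an earnings-driven rally,
# not multiple expansion.
multiple_change = decompose(8.5, 9.0)
print(f"multiple change factor: {multiple_change:.2f}")  # 0.94
```

A factor above 1 would indicate the opposite regime, where investors pay more per dollar of earnings than before.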

Market cap progression milestones

Specific dates and approximate share prices (split-adjusted to post-June-2024 10-for-1):

- $1 trillion: May 30, 2023 (roughly $40/share)
- $2 trillion: February 2024 (roughly $79/share)
- $3 trillion: June 2024 (roughly $122/share)
- $4 trillion: July 2025 (roughly $164/share)

NVDA's market cap progression from $1 trillion to $3 trillion took roughly 13 months, and the full climb from $1 trillion to $4 trillion just over two years: the fastest accumulation of $3 trillion in market value in stock-market history, by a wide margin.

The current cycle, and the open question

The dominant debate around NVDA: is the AI capex boom durable? Bulls note that the rally has been earnings realization rather than multiple expansion, that data-center revenue and margins have grown into the valuation, and that the CUDA moat protects share in training workloads. Bears point to customer concentration (a handful of hyperscalers at roughly 40-50% of revenue), advancing custom silicon and AMD competition, US-China export-control exposure, and the historical cyclicality of semiconductor capex: if hyperscaler AI spending plateaus, a stock priced for continued hypergrowth would re-rate sharply lower.

Watch points: hyperscaler capex guidance from Microsoft, Amazon, Google, and Meta on their own earnings calls (these are the most reliable forward indicators); NVDA's quarterly data-center revenue (the cleanest single financial metric); competitive product launches from AMD and hyperscaler-custom silicon; US-China export control developments; and the spread between NVDA forward P/E and SOXX (the semiconductor ETF) forward P/E — when it widens, NVDA-specific risk premium is expanding.
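The last watch point is a simple valuation spread. A sketch of how it might be tracked (the inputs are hypothetical; in practice they would come from a data vendor):

```python
# Track the NVDA-vs-sector valuation premium as a forward-P/E spread.
def pe_spread(nvda_fwd_pe: float, soxx_fwd_pe: float) -> float:
    """Premium of NVDA's forward P/E over the SOXX semiconductor ETF's."""
    return nvda_fwd_pe - soxx_fwd_pe

# Hypothetical (NVDA, SOXX) forward P/E pairs over three observation dates.
history = [(37.0, 24.0), (38.5, 23.5), (40.0, 23.0)]
spreads = [pe_spread(n, s) for n, s in history]
print(spreads)  # [13.0, 15.0, 17.0]
```

A widening sequence like this one means the market is pricing in more NVDA-specific premium (and therefore more NVDA-specific risk) relative to semiconductors broadly.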

FAQ

Why is NVIDIA so dominant in AI training chips?
Two reasons. First, the hardware: NVIDIA's H100, H200, and Blackwell-generation GPUs hold a 2-3x performance lead for AI training workloads over the closest competing chips (AMD's MI300 series, Google's TPUs, and various startup AI accelerators). Second, and more importantly, the software stack: CUDA — NVIDIA's parallel-computing platform — has been the dominant programming framework for GPU compute since 2007. The vast majority of academic and industry AI research codebases are written against CUDA. Switching to a competing chip requires substantial code-porting work that AI labs almost never have spare engineering bandwidth for. The CUDA moat is genuinely durable; the hardware advantage is durable as long as NVIDIA stays at the front of the technology roadmap.
What's the difference between gaming GPUs and data-center GPUs?
Same underlying chip architecture but very different products. Gaming GPUs (GeForce RTX line) are optimized for consumer graphics rendering and sold to gamers and content creators; they retail at $400-2,000 per unit. Data-center GPUs (H100, H200, Blackwell B100/B200) are optimized for AI training and inference workloads; they sell to hyperscalers (AWS, Azure, Google Cloud, Meta), enterprise AI builders, and increasingly nation-state AI initiatives. Single H100 GPU prices have ranged from $25,000-50,000. NVIDIA's data center revenue has gone from $4 billion in 2020 to over $100 billion annualized in 2024-2025 — making the gaming business, which was the company's entire identity in 2015, almost a side-business by comparison.
Why was NVDA's stock so dramatically re-rated starting in 2023?
Because ChatGPT was released in November 2022 and produced an inflection in AI infrastructure demand that NVIDIA was uniquely positioned to supply. By May 2023, NVDA had reported Q1 results showing that hyperscaler AI capex was already vastly outpacing what the market had assumed. The May 2023 earnings call ('the most important earnings call of the decade,' per one analyst) caused a single-day stock move of +24%, adding $200 billion in market cap. From the start of 2023 through mid-2024, NVDA's stock rose roughly 600% as quarterly revenue grew at unprecedented rates (revenue YoY growth peaked above 200% in Q4 2023). The fundamental driver is real: AI infrastructure capex from hyperscalers grew from ~$150 billion annually in 2022 to ~$400 billion projected for 2025, with the bulk flowing through NVIDIA.
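The capex figures in that answer imply a striking growth rate. A quick check, using the approximate figures above:

```python
# Implied compound annual growth rate of hyperscaler AI-infrastructure capex:
# ~$150B in 2022 to ~$400B projected for 2025, i.e. three years of compounding.
capex_2022 = 150.0  # $B, approximate
capex_2025 = 400.0  # $B, projected
years = 3

cagr = (capex_2025 / capex_2022) ** (1 / years) - 1
print(f"implied capex CAGR: {cagr:.1%}")  # 38.7%
```

Nearly 40% compounded annual growth in infrastructure spending is the demand backdrop behind NVDA's revenue trajectory.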
How sustainable is NVIDIA's competitive position?
Several attacks are coming. Hyperscalers (Google with TPU, Amazon with Trainium, Microsoft with Maia, Meta with MTIA) are designing their own AI chips to reduce NVIDIA dependency — these custom silicon programs are technically advancing rapidly. AMD's MI300 series is meaningfully closing the hardware gap. Startups (Cerebras, Groq, SambaNova, Tenstorrent) are designing specialized chips for inference workloads where NVIDIA's architecture is less optimal. And there's geopolitical pressure: US export controls have limited NVIDIA's sales to China, forcing domestic Chinese alternatives (Huawei's Ascend, Cambricon, Biren) to develop. Most analysts expect NVIDIA to retain dominance in training workloads for at least 3-5 more years (CUDA moat is durable) but to face material share loss in inference workloads over the same period.
Why does NVDA stock move so sharply on quarterly earnings?
Because every quarter, NVIDIA provides forward guidance that effectively sets the market's expectation for the entire AI-infrastructure industry. When NVDA guides revenue 10% above consensus, every AI-adjacent stock rallies; when it guides below, the same stocks sell off. NVIDIA also reports more concentrated revenue (top customers — Microsoft, Amazon, Meta, Google — account for 40-50% of revenue) than other Mag 7 names, which makes its data-center revenue particularly sensitive to individual customer capex decisions. Add to that the elevated forward P/E (35-40x as of 2025) and you have a stock that can move 10-15% on a single earnings beat or miss. Volatility on earnings days has been roughly 3-4x what NVDA was experiencing pre-2023.
