The AI ASIC Market Report, Part 1: Hyperscaler Silicon — Google TPU, AWS Trainium, Microsoft Maia, Meta MTIA & OpenAI's Custom Chip
Google, AWS, Microsoft, Meta, and OpenAI are all building custom AI chips. Here's where each program stands.
Broadcom reported $8.4 billion in AI semiconductor revenue in Q1 FY2026 — up 106% year-over-year — and guided to $10.7 billion for Q2, up 140%. CEO Hock Tan's $100 billion FY2027 AI revenue target is now backed by a $73 billion committed customer backlog and a long-term Google TPU supply agreement running through 2031. The hyperscaler AI ASIC trade isn't theoretical anymore. It's in the acceleration phase.
Every major hyperscaler is now building custom AI silicon at industrial scale. This is Part 1 of Hashrate Index's AI ASIC Market Report, focused on the hyperscaler camp — Google, AWS, Microsoft, Meta, and OpenAI — and the Broadcom-enabled design ecosystem powering most of it. Part 2 covers the independent AI chip companies competing with both hyperscalers and NVIDIA. If you haven't yet, start with our primer on what an AI ASIC is for the foundational concepts this report builds on.
Table of Contents
- Why Hyperscalers Are Building Their Own AI Chips
- Google TPU: The 7-Generation Head Start
- AWS Trainium & Inferentia: The Two-Chip Cost Play
- Microsoft Maia: The Catch-Up Investment
- Meta MTIA: The Recommendation-Optimized Bet
- OpenAI Custom ASIC: The Hyperscaler Club Expands
- Hyperscaler AI ASIC Comparison
- The Broadcom Concentration: Why One Company Enables Most of It
- Supply Chain Chokepoints: TSMC and HBM
- What This Means for Bitcoin Miners
- Frequently Asked Questions
Why Hyperscalers Are Building Their Own AI Chips
Three forces drive every hyperscaler AI ASIC program: cost savings on inference at scale, supply chain independence from NVIDIA, and competitive differentiation through vertical integration of hardware and software.
The inference-era math is the trigger. At hyperscale, a 50–67% reduction in cost-per-token turns the $10M–$100M+ design cost of a custom ASIC into a first-year payback event. Once a workload is mature, predictable, and running at billions of inferences per day, the flexibility premium of a GPU becomes an unjustified tax. Google has been reasoning through this math since 2013. AWS since 2015. The remaining hyperscalers are now catching up.
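To make the payback claim concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical per-token GPU serving cost, request volume, and design budget (none of these are disclosed hyperscaler figures):

```python
# Hypothetical first-year payback math for a custom inference ASIC.
# Every figure below is an illustrative assumption, not a disclosed number.

gpu_cost_per_m_tokens = 0.60      # assumed GPU serving cost, $ per million tokens
asic_savings_pct = 0.55           # midpoint of the 50-67% cost-per-token reduction
daily_inferences = 2e9            # "billions of inferences per day"
tokens_per_inference = 500        # assumed average tokens generated per request
asic_design_cost = 100e6          # upper end of the $10M-$100M+ design cost range

asic_cost_per_m_tokens = gpu_cost_per_m_tokens * (1 - asic_savings_pct)
daily_million_tokens = daily_inferences * tokens_per_inference / 1e6
daily_savings = daily_million_tokens * (gpu_cost_per_m_tokens - asic_cost_per_m_tokens)

payback_days = asic_design_cost / daily_savings
print(f"Daily savings: ${daily_savings:,.0f}")
print(f"Payback on a ${asic_design_cost / 1e6:.0f}M design cost: {payback_days:.0f} days")
```

Under these assumptions the design cost pays back in roughly ten months; at higher token volumes or larger fleets the payback window only shrinks.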
The NVIDIA-dependence problem compounds the first driver. NVIDIA controls roughly 86% of AI GPU supply and extracts gross margins north of 73%. Every dollar Microsoft or Meta pays NVIDIA for GPU capacity is a dollar of their own margin handed over as NVIDIA's margin. Custom silicon is the only structural escape hatch. That holds even when the custom chip isn't as capable as a flagship GPU: when the supplier is taking a 73% gross margin, paying significantly less for something that does 70% of the job still pencils out.
The vertical integration play is the third driver, and it's the most strategic. When Google owns the TPU, the interconnect, the software stack (JAX and TensorFlow), the data center, and the cloud customer relationship, it captures the economics of the full AI compute stack. NVIDIA gets a smaller slice — or none. Vertical integration is also what makes hyperscaler AI products harder to compete with: if AWS prices Trainium-based inference 50% below NVIDIA-based inference, competitors who don't have their own ASIC can't match the price without eating their own margin.
Before we get to the individual hyperscaler programs, one structural fact needs to be on the table: despite the "every hyperscaler is building their own chips" framing, most of them aren't building anything alone. Broadcom and Marvell together enable 80%+ of hyperscaler custom AI silicon per Bloomberg Intelligence. The accurate framing of the hyperscaler ASIC era is that hyperscalers are designing chips with Broadcom and Marvell, not building them from scratch. The Broadcom Concentration section below covers this in depth.
Google TPU: The 7-Generation Head Start
Google's TPU program is the oldest, most mature, and most externally deployed hyperscaler ASIC. The program began in 2013, first deployed in 2015, and is now on its seventh generation with TPU v7 Ironwood, released in November 2025. The roughly six-year lead Google has over Microsoft and AWS shows up everywhere — software stack maturity, cluster coordination scale, external customer count, and public deployment numbers.
The economics are structurally better than GPU alternatives for the workloads TPUs target. Trillium (v6) delivered 4.7x the performance per chip of its predecessor, and Google claims 4.7x better price-performance and 67% lower inference power consumption versus high-end NVIDIA GPUs. At the rental layer, 8 TPU v5e chips run LLM inference at approximately $11 per hour — an order of magnitude less than 8 H100 GPUs for comparable throughput. System-level maturity is the quiet advantage: Google routinely deploys coordinated clusters of 10,000+ TPUs, a scale no other hyperscaler has reached.
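As a rough sketch of how that rental-layer comparison works, the snippet below uses the ~$11/hour TPU v5e figure cited above; the H100 hourly rate and the assumption of comparable throughput are illustrative, not benchmarked values:

```python
# Back-of-envelope hourly cost comparison for an 8-chip inference pod.
# The TPU rate is the figure cited above; the GPU rate is an assumption.

tpu_pod_rate = 11.0      # $/hour for 8x TPU v5e running LLM inference (cited above)
h100_hourly_rate = 12.0  # assumed on-demand $/hour for a single H100
gpu_pod_rate = 8 * h100_hourly_rate

# Assuming comparable throughput for the target workload, the cost ratio
# reduces to the ratio of hourly rates.
cost_ratio = gpu_pod_rate / tpu_pod_rate
print(f"8x TPU v5e: ${tpu_pod_rate:.0f}/hr   8x H100: ${gpu_pod_rate:.0f}/hr")
print(f"GPU pod costs ~{cost_ratio:.1f}x more per hour of comparable inference")
```

Even with conservative GPU pricing assumptions, the gap lands close to the order-of-magnitude figure quoted above.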
The Anthropic relationship is the external validation point. In October 2025, Anthropic announced a landmark expansion of its TPU usage, securing access to up to one million TPU chips and bringing well over a gigawatt of capacity online in 2026. The deal is worth "tens of billions of dollars" per Anthropic's own framing. Thomas Kurian, CEO of Google Cloud, explicitly cited TPU price-performance as Anthropic's motivation — not strategic alignment, not regulatory pressure, but unit economics.
Then in April 2026, the commitment expanded significantly. Anthropic signed an additional multi-gigawatt deal with Google and Broadcom for up to 5GW of next-generation TPU capacity starting in 2027. The context matters: Anthropic's run-rate revenue jumped from approximately $9 billion at the end of 2025 to over $30 billion by April 2026, with business customers spending $1M+ annually more than doubling in two months. Anthropic needed compute at a scale that only the most aggressive hyperscaler AI ASIC deployment could supply — and TPU was the architecture that met the demand.
Broadcom is confirmed as Google's TPU design partner through 2031. This five-year supply agreement is the clearest commitment to the TPU roadmap any external party has made. External TPU customers beyond Anthropic include Midjourney, Salesforce, and Safe Superintelligence (Ilya Sutskever's startup). The TPU ecosystem is building customer lock-in the same way CUDA did for NVIDIA — code optimized for TPU running on JAX or TensorFlow is expensive to migrate to other platforms, which makes the TPU stickier with every deployment.
AWS Trainium & Inferentia: The Two-Chip Cost Play
AWS runs a two-chip AI ASIC strategy that mirrors the training-versus-inference economic split. Trainium handles training. Inferentia handles inference. The architectural logic is the same logic driving the broader AI ASIC market: flexible GPUs for training, purpose-built ASICs for production inference.
AWS's AI silicon program predates most hyperscaler ASIC efforts. The Annapurna Labs acquisition in 2015 gave AWS an in-house silicon team years before Microsoft, Meta, or OpenAI had their own. Current products are Trainium2 (launched 2025) and Inferentia2. Trainium2 delivers 83.2 petaflops in ultra-server configurations, and AWS claims up to 50% cost savings for inference workloads versus equivalent NVIDIA GPU configurations. Mixed-precision Trainium strategies yield roughly a 25% increase in throughput per dollar on hardware-aware optimized workloads.
Scale tells the adoption story. AWS has deployed more than 500,000 Trainium2 chips in production as of late 2025 — the second-largest hyperscaler ASIC fleet after Google TPU. Anthropic is a notable customer here as well, training models on Trainium as part of a multi-platform strategy that also includes Google TPU and NVIDIA GPUs. The November 2025 Anthropic commitment to invest $50 billion in U.S. compute infrastructure feeds both Google TPU and AWS Trainium simultaneously — a single AI lab is underwriting both dominant hyperscaler ASIC programs at the same time.
The competitive positioning is cost-disruptor. AWS isn't claiming Trainium beats NVIDIA on peak performance. It's claiming Trainium delivers acceptable performance at meaningfully lower cost, which is exactly the positioning that wins at scale for cost-sensitive enterprise inference customers. The primary Trainium design services partner is Marvell, which anchors Marvell's 20-25% share of the custom ASIC design services market.
The system-level maturity gap versus Google is real. AWS has scaled Trainium to 1,000+ chips in customer programs, but Google routinely deploys 10,000+ TPU clusters. Closing this gap is a multi-year engineering exercise, not a product cycle. For training the largest frontier models, the maturity gap matters. For production inference at the volumes AWS customers actually run, the gap is less relevant — and that's where Trainium's cost advantage compounds fastest.
Microsoft Maia: The Catch-Up Investment
Microsoft's AI ASIC program started in 2019 — roughly six years behind Google. The software stack maturity reflects that gap, and so does the current deployment picture. The current products are Maia 100 (launched 2024) and Maia 200 (announced: TSMC 3nm, 216GB HBM3e, and reportedly faster than competing bespoke alternatives to NVIDIA silicon).
The reality check is in the deployment share. Approximately 70% of Azure AI workloads still run on NVIDIA hardware as of late 2025. Maia is a long-term strategy, not a current dominant deployment. Customer wait times for committed Maia capacity run 18-24 months, versus 2-3 months for Google TPU — a concrete maturity signal that separates Google's program from everyone else's.
What justifies the Maia investment despite the late start is the OpenAI workload volume. The Microsoft-OpenAI relationship creates enormous inference demand — Copilot alone, plus the broader OpenAI API workload running on Azure, produces enough inference volume that even a Maia chip running at 70% the efficiency of a TPU is economically superior to running that workload on NVIDIA GPUs. Microsoft's ASIC investment is as much about reducing NVIDIA dependency on OpenAI-driven workloads as it is about delivering a best-in-class chip to external customers.
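A simple way to see why a less capable in-house chip can still win is to compare cost per unit of delivered throughput rather than peak performance. The sketch below uses hypothetical per-chip costs and a 70% relative-efficiency factor; none of these are Microsoft's actual numbers:

```python
# Cost per unit of inference throughput: in-house ASIC vs. merchant GPU.
# All prices here are hypothetical, chosen only to illustrate the break-even logic.

gpu_price = 30_000          # assumed all-in cost of a merchant GPU (incl. supplier margin)
asic_price = 10_000         # assumed all-in cost of an in-house accelerator
relative_efficiency = 0.70  # ASIC delivers 70% of the GPU's throughput on this workload

gpu_cost_per_unit = gpu_price / 1.0
asic_cost_per_unit = asic_price / relative_efficiency

print(f"GPU:  ${gpu_cost_per_unit:,.0f} per unit of throughput")
print(f"ASIC: ${asic_cost_per_unit:,.0f} per unit of throughput")

# Break-even rule: the in-house chip wins whenever
# asic_price < relative_efficiency * gpu_price.
```

At these assumed prices, the 70%-efficient chip still delivers throughput at less than half the GPU's cost, which is the whole argument for Maia despite the maturity gap.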
The design partner for Maia is Marvell. Together with AWS Trainium, that puts both of the hyperscaler ASIC programs playing catch-up to Google TPU (a Broadcom program) in Marvell's portfolio. The hyperscaler ASIC design services market is effectively divided: Broadcom works with the leader, Marvell works with the chasers.
Meta MTIA: The Recommendation-Optimized Bet
Meta's AI ASIC — MTIA, the Meta Training and Inference Accelerator — is the only major hyperscaler ASIC that isn't commercialized externally at all. MTIA is purpose-built for a specific class of workloads: ad ranking, content recommendation, and feed-ranking systems. Not general-purpose LLM training. Not general-purpose inference.
The current generation is MTIA v2, which Meta positions as competitive with Google TPU v5 on recommendation workloads. The CapEx backdrop is what makes the specificity interesting. Meta's 2025 CapEx guidance was $60-72 billion, still predominantly NVIDIA GPUs. At that scale, even a small percentage shift to internal ASICs represents billions of dollars of potential NVIDIA displacement.
The more important development happened in October 2025. Per The Information and Reuters, Meta entered advanced talks with Google for a multibillion-dollar TPU deployment starting mid-2026, with on-premises TPU pods possible by 2027. NVIDIA's largest single customer is actively diversifying away from NVIDIA for LLM inference. Meta's MTIA handles recommendation workloads internally; external LLM inference at scale may move to Google TPU.
Mark Zuckerberg framed the shift directly in Q3 2025 earnings: Meta is "exploring multiple silicon providers to optimize for different workload types." This is the hyperscaler ASIC thesis spoken aloud by one of NVIDIA's biggest customers. The fact that Meta — a company with its own mature ASIC program — is still paying Google billions of dollars for TPU capacity signals two things simultaneously. First, the maturity gap between Google TPU and every other hyperscaler ASIC is real and meaningful. Second, the economic case for hyperscaler ASIC alternatives to NVIDIA is strong enough that even Meta's own MTIA program can't satisfy its full LLM inference demand.
OpenAI Custom ASIC: The Hyperscaler Club Expands
OpenAI's first custom AI ASIC is in development through a $10 billion partnership with Broadcom. The project is led by an internal team of roughly 40 engineers under Richard Ho (ex-Alphabet), with TSMC planned as the manufacturer. Initial Q2 2026 ambitions have slipped to Q3 2026 at the earliest. The chip targets inference workloads across OpenAI's growing data center fleet.
The strategic context is more important than the chip specs. OpenAI is simultaneously building its own custom silicon with Broadcom, signing an infrastructure agreement with NVIDIA potentially worth over $100 billion for GPU clusters, and serving as the largest customer of Cerebras through a multi-year $10B+ compute deal. This is a three-way strategy: custom silicon with Broadcom, commercial ASICs from Cerebras, and GPU clusters from NVIDIA. ASICs and GPUs aren't mutually exclusive at OpenAI's scale. They're complementary.
The implication for the broader hyperscaler ASIC market is clear. OpenAI's custom silicon program is the clearest signal that hyperscaler custom silicon is no longer limited to the Big Four clouds. The effective definition of "hyperscaler" is expanding to include major AI labs with hyperscaler-scale compute needs — Anthropic is the other obvious example, with its multi-gigawatt TPU and Trainium deployments. Both companies have reached compute scales where building custom silicon makes economic sense, even without running a consumer cloud business.
Hyperscaler AI ASIC Comparison
The five programs covered above differ substantially in maturity, commercialization, deployment scale, and strategic positioning. The table below summarizes the state of each program as of mid-2026.
| Company | Chip Family | Current Generation | Design Partner | Est. Cost Savings vs GPU | Commercialized Externally? | Scale Indicator |
|---|---|---|---|---|---|---|
| Google | TPU | v7 Ironwood (Nov 2025) | Broadcom (through 2031) | ~65–67% inference | Yes (Google Cloud) | 10,000+ chip clusters; 1M+ Anthropic commitment |
| AWS | Trainium / Inferentia | Trainium2 / Inferentia2 | Marvell | ~50% inference | Yes (AWS) | 500,000+ Trainium2 deployed |
| Microsoft | Maia | Maia 100 / Maia 200 | Marvell | Internal est. | Limited (Azure) | 70% of Azure AI still on NVIDIA |
| Meta | MTIA | v2 | Internal | Internal est. | No | Competitive with TPU v5 on recommendation workloads |
| OpenAI | (unnamed) | Pre-production | Broadcom | TBD | No (planned internal) | Q3 2026 target |
Google leads on every dimension that matters — maturity, software stack, deployment scale, external customer count. AWS leads on raw deployment volume of commercial chips. Microsoft and Meta are catching up at different speeds for different reasons: Microsoft's OpenAI relationship creates the inference volume that justifies Maia even at lower maturity; Meta's MTIA is narrow but deep, and the Google TPU pivot signals that Meta's own program can't satisfy full LLM demand. OpenAI's pre-production chip validates the thesis one level further up the stack.
The Broadcom Concentration: Why One Company Enables Most of It
The "every hyperscaler is building their own chips" narrative obscures the concentration happening behind it. Broadcom enables most of it. The financial trajectory tells the story:
| Period | AI Semi Revenue | YoY Growth | Total Revenue | Adj. EBITDA Margin |
|---|---|---|---|---|
| Q1 FY2025 | $4.1B | +77% | $14.9B | ~68% |
| Q4 FY2025 | $6.5B | +74% | $18.0B | ~68% |
| Q1 FY2026 | $8.4B | +106% | $19.3B | ~68% |
| Q2 FY2026 (guidance) | $10.7B | +140% | $22.0B | ~68% |
Broadcom's full FY2025 totals were $63.9 billion in total revenue (+24%), $23.1 billion GAAP profit (nearly 4x year-over-year), and $26.9 billion in free cash flow. Gross margins of 78.6% exceed NVIDIA's ~73.5%. CEO Hock Tan's $100 billion FY2027 AI chip revenue target is backed by a $73 billion committed customer backlog and long-term supply agreements like the Google TPU deal through 2031.
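For a sense of what the $100 billion FY2027 target implies, here is a quick extrapolation from the two disclosed FY2026 quarters; extending their quarter-over-quarter growth rate forward is an illustrative assumption, not Broadcom guidance:

```python
# Implied trajectory from disclosed quarters toward the $100B FY2027 AI revenue target.
# Q1 and Q2 FY2026 come from the table above; projecting their growth rate
# through FY2027 is an illustrative assumption.

q1_fy2026 = 8.4    # $B AI semi revenue, reported
q2_fy2026 = 10.7   # $B AI semi revenue, guided
qoq_growth = q2_fy2026 / q1_fy2026 - 1   # ~27% quarter over quarter

quarters = [q1_fy2026, q2_fy2026]
for _ in range(6):                        # project Q3 FY2026 through Q4 FY2027
    quarters.append(quarters[-1] * (1 + qoq_growth))

fy2027_total = sum(quarters[4:8])         # the four FY2027 quarters
print(f"Implied QoQ growth: {qoq_growth:.0%}")
print(f"FY2027 AI revenue if that pace holds: ${fy2027_total:.0f}B")
```

Holding the current pace would overshoot $100 billion; even materially slower growth clears the target, which is why the $73 billion committed backlog makes it credible rather than aspirational.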
Broadcom's confirmed XPU customer list now includes Google, Meta, OpenAI, Anthropic, and Apple, per disclosures as of Q1 FY2026 earnings. Apple's addition was a new 2026 disclosure not previously public. Market share estimates now place Broadcom at 70%+ of the custom AI accelerator design services market, up from the 60-80% range Bloomberg Intelligence flagged earlier in 2026. The concentration is increasing as Broadcom's customer wins compound.
Marvell occupies a similar strategic position at smaller scale. Marvell commands 20-25% market share anchored by AWS Trainium and Microsoft Maia design wins. It's the secondary option hyperscalers choose when they want a dual-source relationship, which is a structural incentive to keep Marvell in the game even as Broadcom dominates. Marvell's Q1 FY2026 earnings explicitly flagged the competitive risk that customers develop their own fully in-house design capabilities — a direct acknowledgment of the dynamic Broadcom and Marvell both face.
One more structural fact worth flagging: Broadcom is also enabling NVIDIA's response to the ASIC threat. NVLink Fusion — NVIDIA's move to open its NVLink interconnect fabric to third-party ASICs — is partly a Broadcom-facilitated arrangement. Hyperscalers get high-speed connectivity without building their own interconnect; NVIDIA locks customers into its broader ecosystem even as those customers build competing silicon. We cover NVIDIA's strategic response in depth in Part 2.
Supply Chain Chokepoints: TSMC and HBM
Every chip covered in this report — Google TPU, AWS Trainium, Microsoft Maia, Meta MTIA, and OpenAI's future custom chip — is manufactured at TSMC. The foundry produces approximately 92% of advanced AI chips at 7nm and below. The hyperscaler ASIC market, the independent ASIC market, and the NVIDIA GPU market all rely on the same manufacturing base.
TSMC's financial trajectory matches the demand. Q4 2025 revenue hit $33.73 billion (+20.5% YoY), with net income up 35% to $15.2 billion. The company is investing $100 billion in five new U.S. fabs — two Arizona fabs at 4nm and 3nm completing around 2026, three additional planned — and expanding simultaneously to Japan to reduce geopolitical concentration. The non-Taiwan fabs run at thinner margins due to higher operating costs, a cost TSMC is absorbing deliberately to reduce Taiwan-specific risk exposure.
The geopolitical risk is real and elevated in 2026. Taiwan produces roughly 90% of advanced chips and runs on imported energy. Middle East conflict and threats to the Strait of Hormuz are raising fresh concerns about TSMC's LNG import routes. Analyst estimates put the cost of a Taiwan semiconductor disruption at up to $2.5 trillion in annual global economic losses — an exposure that affects every AI ASIC program simultaneously.
Memory is the second chokepoint. SK Hynix provides approximately 62% of HBM used in AI chips. DRAM prices surged 171% in late 2025. NVIDIA Blackwell is sold out through 2027. There are no short-term alternatives to TSMC's CoWoS advanced packaging or SK Hynix's HBM3/HBM4 stacks. The structural supply constraint affects both hyperscaler ASICs and NVIDIA GPUs, which is why some independent architectures — Groq's SRAM-based LPU and Cerebras' wafer-scale engine — deliberately design around HBM dependency. Part 2 covers those architectural choices in depth.
What This Means for Bitcoin Miners
The hyperscaler AI ASIC buildout has three direct implications for Bitcoin mining operators and investors.
First, the power demand signal. Anthropic alone is bringing over 1 gigawatt of TPU capacity online in 2026, with up to 5GW more slated to begin in 2027. Add AWS, Microsoft, Meta, and OpenAI deployments and the total hyperscaler AI compute power demand in 2026-2027 is unprecedented. That demand is landing in the same regions miners already operate in — ERCOT, SPP, MISO, parts of PJM — competing directly for grid capacity, interconnection queue positions, and long-term power purchase agreements. Miners with established utility relationships and existing interconnect capacity have assets that hyperscalers are now paying premium prices to acquire.
Second, the site strategy signal. Hyperscaler AI ASICs need the same physical inputs as Bitcoin mining ASICs — power at scale, cooling capacity, data center real estate, and increasingly the same constrained supply chain for HBM and advanced packaging. Mining operators who understand power procurement, grid relationships, demand response, and cooling economics already understand the hardest operational parts of AI infrastructure. The mullet mining thesis — pairing Bitcoin mining operations with adjacent AI compute infrastructure — gets stronger as hyperscaler AI power demand expands, not weaker.
Third, the custom silicon validation signal. Every dollar Broadcom earns on its $100 billion FY2027 AI chip revenue run rate is a dollar validating the exact same industrial logic that underpins Bitmain, MicroBT, and Canaan. The hyperscaler AI ASIC market isn't a separate opportunity existing in parallel to Bitcoin mining — it's independent confirmation that the Bitcoin mining ASIC business model (custom silicon for well-defined, high-volume workloads, sold at industrial scale) is the correct long-term architecture for compute. That validation matters for anyone evaluating Bitcoin mining infrastructure capital allocation, because it confirms the thesis is durable across two very different end markets.
Frequently Asked Questions
Which hyperscaler has the most advanced AI ASIC program?
Google has the most advanced hyperscaler AI ASIC program. Its TPU project started in 2013 and first deployed in 2015 — roughly six years ahead of Microsoft and AWS. Google is on its seventh generation (TPU v7 Ironwood, released November 2025), routinely deploys coordinated clusters of 10,000+ TPUs that no other hyperscaler has matched, and serves the largest external ASIC customer in the market (Anthropic, with a commitment of up to 1 million TPUs). Customer wait times for committed Google TPU capacity run 2-3 months versus 18-24 months for Microsoft Maia — a concrete signal of operational maturity.
How does Google TPU compare to AWS Trainium?
Google TPU and AWS Trainium are the two largest hyperscaler AI ASIC programs but differ significantly in positioning. Google's TPU is a unified training and inference chip now in its seventh generation (Ironwood), with ~6 years more maturity than Trainium and roughly 10x greater cluster scale. AWS splits training (Trainium) from inference (Inferentia) in a two-chip strategy focused on cost-disruption — claiming up to 50% savings versus NVIDIA GPU inference. AWS has deployed 500,000+ Trainium2 chips in production, the largest commercial hyperscaler ASIC fleet by unit count, though Google wins on cluster coordination scale and software stack maturity.
Who designs Google's TPU chips?
Broadcom is Google's primary TPU design partner and has been for years. The relationship was confirmed publicly in April 2026 as a long-term supply agreement running through 2031 — Google's most significant external commitment for custom AI silicon design services. Broadcom provides the foundational IP, networking technology, interconnect fabric, and ASIC design expertise that makes Google's TPU designs possible. Google owns the chip architecture and software stack; Broadcom enables the silicon implementation. This partnership is the largest single contributor to Broadcom's ~70% share of the custom AI accelerator design services market.
What is the Anthropic-Google TPU deal worth?
The Anthropic-Google TPU relationship has two distinct commitments. The October 2025 deal secured Anthropic access to up to 1 million TPU chips and brought over a gigawatt of capacity online in 2026, worth "tens of billions of dollars" per Anthropic's own framing. In April 2026, Anthropic expanded the partnership with Google and Broadcom to secure an additional multi-gigawatt deal for up to 5GW of next-generation TPU capacity starting in 2027. The combined commitment is the single largest external customer arrangement in the AI ASIC market and is part of Anthropic's $50 billion U.S. compute infrastructure commitment announced in November 2025.
How many Trainium chips has AWS deployed?
AWS has deployed more than 500,000 Trainium2 chips in production as of late 2025 — the largest commercial hyperscaler ASIC fleet by unit count. Trainium2 delivers 83.2 petaflops in ultra-server configurations, and major customers include Anthropic (which uses Trainium for model training as part of a multi-platform strategy alongside Google TPU and NVIDIA GPUs). AWS has scaled Trainium to 1,000+ chips in individual customer programs, a meaningful gap versus Google's coordinated clusters of 10,000+ TPUs. AWS positions Trainium as a cost-disruptor, claiming up to 50% savings on inference workloads versus equivalent NVIDIA GPU configurations.
Why is Meta working with Google on TPUs if Meta has MTIA?
Meta's own AI ASIC, MTIA, is purpose-built for narrow workloads — ad ranking, content recommendation, and feed ranking. MTIA v2 is competitive with Google TPU v5 on recommendation workloads specifically, but Meta doesn't run general-purpose LLM inference on MTIA at scale. For LLM workloads at Meta's scale, the economics favor Google TPU even with the licensing cost, because TPU's system-level maturity and software stack are several years ahead of MTIA. Mark Zuckerberg framed the approach in Q3 2025 earnings: Meta is "exploring multiple silicon providers to optimize for different workload types." MTIA handles what it's built for; Google TPU handles what Meta's own silicon can't yet.
When will OpenAI's custom chip be available?
OpenAI's first custom AI ASIC is now targeting Q3 2026 at the earliest, after an initial Q2 2026 ambition slipped. The chip is being developed through a $10 billion partnership with Broadcom, with TSMC as the planned manufacturer. An internal OpenAI team of approximately 40 engineers led by Richard Ho (ex-Alphabet) is driving the program. The chip focuses on inference workloads across OpenAI's growing data center fleet. Importantly, OpenAI is simultaneously running GPU infrastructure — including a potential $100B+ deal with NVIDIA — and deploying Cerebras wafer-scale systems through a multi-year $10B+ commitment. OpenAI's strategy is three-way: custom silicon, commercial ASICs, and GPUs in parallel.
Who manufactures hyperscaler AI chips?
TSMC manufactures essentially all advanced hyperscaler AI chips. The foundry produces approximately 92% of AI chips at 7nm process nodes and below, including Google TPU, AWS Trainium and Inferentia, Microsoft Maia, Meta MTIA, and OpenAI's planned custom chip. TSMC's Q4 2025 revenue was $33.73 billion (+20.5% YoY), and the company is investing $100 billion in five new U.S. fabs plus additional Japan expansion to reduce geopolitical concentration. This concentration of advanced chip manufacturing at a single foundry is the single largest supply chain risk facing the hyperscaler AI ASIC market — a Taiwan disruption could cost the global economy up to $2.5 trillion annually per analyst estimates.
This report's second installment will cover the independent AI chip companies competing with both hyperscaler silicon and NVIDIA GPUs — Broadcom and Marvell as the dominant design enablers, plus direct competitors including Groq (acquired by NVIDIA in a $20B deal), Cerebras (targeting a $35B IPO later in 2026), Etched, Tenstorrent, and Tensordyne.