Nexan Insights
Semiconductors, AI, and the Future
A Deep Dive for Investors
This analysis provides a structured look at the key trends shaping the semiconductor industry, particularly in relation to AI. It highlights major players like ASML, TSMC, NVIDIA, and AMD, explaining their roles in advancing semiconductor technology and scaling AI infrastructure, and explores critical investment themes such as the impact of EUV machines, high-bandwidth memory, vertical integration by tech giants, and geopolitical factors influencing semiconductor production. Investors can use this guide to navigate the opportunities and challenges in this rapidly evolving space.
Artificial intelligence is reshaping the global semiconductor industry across fabrication, architecture, and vertical integration. From EUV lithography to high-bandwidth memory and geopolitically driven capacity shifts, this analysis outlines the competitive landscape, technology inflection points, and investment implications. ASML, TSMC, NVIDIA, and hyperscalers like Google and Amazon now form an intertwined ecosystem defining the next era of compute infrastructure.
1. AI as Demand Catalyst: Rewriting the Semiconductor Stack
The rise of AI is driving an unprecedented surge in compute demand, transforming the traditional GPU supply chain into a vertically optimized AI infrastructure stack.
GPUs, originally designed for rendering graphics, have become the backbone for AI training due to their high parallelism.
NVIDIA and AMD lead on architecture, but their output depends entirely on foundry support.
Inference at scale (e.g., LLMs) demands not just raw compute, but throughput per watt and optimized interconnects.
| Metric | 2020 | 2024 Est. |
|---|---|---|
| AI GPU ASP | ~$3,000 | ~$25,000 |
| GPU TAM for AI (units) | ~800K | ~4.2M |
Investor Insight: As training costs scale super-linearly, demand is shifting toward specialized silicon, memory-optimized stacks, and energy-efficient architectures.
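The table's figures imply a striking revenue-TAM expansion, which a quick back-of-envelope calculation makes concrete. This sketch multiplies the table's estimated ASPs by estimated unit volumes; the inputs are the article's estimates, not official market data.

```python
# Back-of-envelope revenue TAM implied by the AI GPU table above.
# ASP and unit figures are the article's estimates, not market data.

asp = {"2020": 3_000, "2024": 25_000}         # average selling price, USD
units = {"2020": 800_000, "2024": 4_200_000}  # AI GPU units shipped

for year in ("2020", "2024"):
    revenue = asp[year] * units[year]
    print(f"{year}: implied revenue TAM ≈ ${revenue / 1e9:.1f}B")

growth = (asp["2024"] * units["2024"]) / (asp["2020"] * units["2020"])
print(f"Implied revenue growth: ~{growth:.0f}x")
```

On these assumptions, the implied market grows from roughly $2.4B to roughly $105B, a ~44x expansion in four years.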
AI vs. Traditional GPU Demand: AI-driven GPU demand is skyrocketing, far outpacing traditional applications.

2. ASML and EUV: Enabling Node Compression
ASML’s extreme ultraviolet (EUV) lithography machines are both the enabler of and the bottleneck for advanced node progression (<7nm). Each tool:
Costs $150M–$350M.
Enables sub-13nm patterning.
Is sold in limited volumes to TSMC, Intel, and Samsung.
High-NA EUV, ASML’s next-gen platform, will further shrink feature sizes and improve yield for AI chips—but with longer ramp timelines and higher CapEx per unit.
| Lithography Type | Feature Size Capability | Key Customers |
|---|---|---|
| DUV | ≥10nm | GlobalFoundries |
| EUV | 5–7nm | TSMC, Samsung |
| High-NA EUV | <2nm | TSMC (pilot 2025) |
Investor Insight: ASML operates as a hardware tollbooth for Moore’s Law progression. Scarcity of High-NA tools will shape foundry competitiveness and pricing power for the next 5+ years.
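The economic value of each lithography generation follows from simple geometry: if the minimum printable feature shrinks by a factor s, transistor density scales roughly as 1/s². The sketch below applies that first-order rule to the feature sizes in the table; note that modern node names are marketing labels rather than literal dimensions, so these ratios are directional only.

```python
# First-order scaling sketch: shrinking the minimum printable feature
# by factor s increases transistor density roughly by s^2.
# Node names are marketing labels, not literal gate pitches, so treat
# these ratios as directional only.

def relative_density(base_nm: float, target_nm: float) -> float:
    """Density multiple when moving from base_nm to target_nm features."""
    return (base_nm / target_nm) ** 2

print(relative_density(7, 5))   # ~1.96x going from 7nm to 5nm
print(relative_density(5, 2))   # ~6.25x going from 5nm to 2nm
```

This quadratic payoff is why each EUV generation commands such pricing power: a modest linear shrink compounds into a large density (and cost-per-transistor) advantage.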

EUV Lithography: Advanced EUV machines enable denser, more efficient chips compared to older DUV technology.

3. TSMC: The Strategic Fulcrum of AI Compute
TSMC’s foundry dominance underpins the modern AI stack. Key metrics:
~56% of global foundry revenue flows through TSMC.
>90% of AI-optimized chips (e.g., A100, H100, TPU) are manufactured at TSMC nodes ≤7nm.
3nm process (N3) now in production, with 2nm (N2) entering pilot phase by late 2025.
TSMC combines scale, yield leadership, and node availability—becoming a geopolitical and commercial chokepoint.
Investor Insight: AI scaling relies on foundry access. Exposure to TSMC, directly or via dependent customers (NVIDIA, AMD), is now a levered bet on AI infrastructure continuity.
TSMC's Node Evolution: Smaller nodes increase transistor density and improve AI performance efficiency.

4. Memory Bottleneck: The Rise of HBM
High-bandwidth memory (HBM) has emerged as a core enabler of AI workloads:
Delivers >1TB/s bandwidth for multi-GPU systems.
Co-packaged with AI accelerators (HBM3, HBM3e).
Enables LLMs to operate at lower latency with larger context windows.
| Memory Type | Bandwidth/Stack | Vendors |
|---|---|---|
| DDR5 | ~64 GB/s | Micron, SK hynix |
| HBM2e | ~460 GB/s | SK hynix, Samsung |
| HBM3 | ~819 GB/s | SK hynix (lead) |
HBM packaging costs have risen, sometimes exceeding $6,000 per server GPU due to tight supply.
Investor Insight: Memory bandwidth, not just compute, is the new ceiling. Investment in memory vendors and advanced packaging (TSMC CoWoS, Samsung I-Cube) is crucial to track.
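Why bandwidth, not compute, is the ceiling can be seen with a roofline-style estimate: during autoregressive LLM decoding, each generated token requires streaming roughly all model weights from memory, so single-stream throughput is bounded by bandwidth divided by bytes per token. The model size and stack count below are illustrative assumptions, not figures from any specific product.

```python
# Roofline-style sketch: decode throughput is memory-bandwidth-bound
# when every token requires reading (roughly) all model weights.
# Model size and stack count are illustrative assumptions.

def max_tokens_per_sec(params_b: float, bytes_per_param: int,
                       bandwidth_gbs: float) -> float:
    """Upper bound on single-stream decode speed, memory-bound."""
    bytes_per_token = params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical 70B-parameter model in FP16 on a 5-stack HBM3 package
# (5 x ~819 GB/s per stack, per the table above).
bw = 5 * 819
print(f"~{max_tokens_per_sec(70, 2, bw):.0f} tokens/s upper bound")
```

Under these assumptions the ceiling is on the order of 30 tokens per second per stream regardless of how much compute sits beside the memory, which is precisely why HBM capacity and packaging, not FLOPs, gate AI accelerator value.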

High-Bandwidth Memory (HBM): Enhancing AI chip performance by ensuring fast and efficient data access.

5. Vertical Integration: Hyperscalers Enter the Arena
Google, Amazon, Microsoft, and Meta are developing custom AI silicon to reduce dependency on NVIDIA:
Google TPU (Tensor Processing Unit): Inference-optimized, datacenter-scale.
AWS Trainium/Inferentia: Designed for cloud AI economics.
Meta MTIA: Internal chip for LLM inference.
Advantages:
Cost reduction (up to 40% vs. GPU).
Power and footprint optimization.
Proprietary performance tuning.
| Player | AI Chip Name | Fab Partner | Target Use Case |
|---|---|---|---|
| Google | TPU v5 | TSMC | Training + Inference |
| AWS | Trainium | TSMC | Training |
| Meta | MTIA | TSMC | Inference |
Investor Insight: Vertical integration is compressing value chain margins. Chip vendors without strong IP defensibility or customization paths risk disintermediation.
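The economics driving this disintermediation can be sketched as a breakeven calculation: a custom chip program pays a large one-time design cost (NRE) in exchange for per-unit savings versus merchant GPUs. All dollar figures below are illustrative assumptions, not disclosed costs; only the ~40% savings figure comes from the list above.

```python
# Breakeven sketch for hyperscaler custom silicon: one-time design
# cost (NRE) against per-unit savings vs. merchant GPUs.
# All dollar figures are illustrative assumptions, not disclosed costs.

def breakeven_units(nre_usd: float, gpu_cost: float,
                    savings_frac: float) -> float:
    """Deployed units at which custom-chip NRE is recovered."""
    return nre_usd / (gpu_cost * savings_frac)

# Assume ~$500M NRE for a leading-edge chip program, ~$25k per merchant
# GPU, and the ~40% cost reduction cited above.
print(f"{breakeven_units(500e6, 25_000, 0.40):,.0f} units to break even")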
Tech Giants' Vertical Integration: Rapidly increasing investment in custom semiconductor development.

6. Geopolitics: Sovereignty and Capacity Diversification
The global semiconductor supply chain is bifurcating due to:
U.S. export controls on AI chipsets to China.
China’s retaliatory focus on domestic foundry scaling (SMIC, YMTC).
Increased CapEx by TSMC, Intel, and Samsung in U.S., Japan, and EU.
Notable tension:
China’s 7nm breakthrough (via DUV multi-patterning) proves capability but remains cost-inefficient.
U.S. CHIPS Act funding is fueling reshoring, but timelines remain long and yield risks persist.
Investor Insight: Equipment and materials suppliers with global diversification (e.g., Lam Research, KLA, Applied Materials) will benefit from reshoring and capacity duplication.
Geopolitical Semiconductor Race: The U.S., China, and South Korea compete for dominance in chip innovation and production.


Moore’s Law vs. Performance per Watt: The shift from raw transistor scaling to energy-efficient computing.

7. Takeaways for Investors
ASML is irreplaceable in the near term. Its EUV monopoly makes it the most defensible infrastructure player in the chain.
TSMC is the operational linchpin. Its ability to scale advanced nodes underpins all AI silicon production.
Memory and packaging are capacity bottlenecks. HBM and 2.5D/3D packaging are capital-intensive and structurally undersupplied.
Vertical integration is eroding traditional supplier margins. Custom silicon reduces TAM for merchant chip vendors.
Geopolitics introduces volatility and opportunity. CapEx duplication benefits equipment vendors, but long-term node parity remains a Western advantage.

The Future of Semiconductors: ASML, TSMC, and NVIDIA drive advancements in AI, EUV, and vertical integration.

