Memory has always been the unglamorous end of the semiconductor industry. It is the commodity component — cyclical, brutal, prone to oversupply, subject to price crashes that periodically destroy the balance sheets of even the most technically accomplished manufacturers. The conventional wisdom among technology investors has been to own the designers of the chips that do the thinking, not the manufacturers of the memory that feeds them. That conventional wisdom is being systematically dismantled by the AI infrastructure buildout, and the country at the centre of the demolition is South Korea.

High Bandwidth Memory — HBM — is the specific technology that has changed the calculus. It is not standard DRAM. It is a fundamentally different memory architecture: DRAM dies stacked vertically using Through-Silicon Via connections, mounted directly on the same package as the GPU or AI accelerator via a silicon interposer, enabling bandwidths that standard DDR or GDDR memory cannot approach. An Nvidia H100 GPU, which became the defining piece of AI infrastructure hardware of the current era, draws on HBM3 with aggregate bandwidth of approximately 3.35 terabytes per second. The HBM4 standard entering production in 2026 doubles the interface width to 2,048 bits and delivers bandwidths exceeding 2 terabytes per second per stack. For context: the consumer DDR5 memory in a high-end gaming PC delivers approximately 100 gigabytes per second total. A single HBM4 stack delivers twenty times that.1
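These headline figures reduce to simple width-times-rate arithmetic. A minimal sketch — the per-pin rates (roughly 5.2 Gb/s for H100-class HBM3, 8 Gb/s for HBM4) and the H100's five active stacks are commonly cited figures, not taken from this article:

```python
# Back-of-envelope HBM bandwidth arithmetic.
# Per-stack bandwidth (GB/s) = interface width (bits) x per-pin rate (Gb/s) / 8.
# Pin rates below are approximate generational figures, not exact SKU specs.

def stack_bw_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s."""
    return width_bits * pin_rate_gbps / 8

hbm3 = stack_bw_gbs(1024, 5.2)   # H100-class HBM3: ~666 GB/s per stack
hbm4 = stack_bw_gbs(2048, 8.0)   # HBM4: ~2,048 GB/s (~2 TB/s) per stack

# The H100 carries five active HBM3 stacks -> ~3.35 TB/s aggregate
h100_aggregate_tbs = 5 * hbm3 / 1000

# Dual-channel DDR5-6400 in a gaming PC: 2 channels x 64 bits x 6.4 Gb/s
ddr5 = 2 * stack_bw_gbs(64, 6.4)  # ~102 GB/s

print(f"HBM3 stack:     {hbm3:,.0f} GB/s")
print(f"H100 aggregate: {h100_aggregate_tbs:.2f} TB/s")
print(f"HBM4 stack:     {hbm4:,.0f} GB/s ({hbm4 / ddr5:.0f}x a DDR5 desktop)")
```

The twenty-fold gap over desktop DDR5 falls directly out of the 2,048-bit interface: HBM trades socketed, narrow, fast-clocked channels for an extremely wide bus sitting millimetres from the compute die.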

This is not an incremental improvement. HBM is the reason that training large language models is computationally feasible at all. The memory bandwidth requirement of modern AI — moving billions of parameters between memory and compute cores on every forward pass — exceeds what any previous memory architecture could provide. The "memory wall," the long-standing constraint on computing performance caused by the gap between processor speed and memory bandwidth, has not been broken. It has been pushed back, for now, by HBM. And Korea makes essentially all of it.

The market structure as it stands

SK Hynix entered 2026 with 62% of global HBM market share, having displaced Samsung from its decades-long position as the world's largest DRAM manufacturer for the first time in history. The achievement reflects a decision made several years earlier to prioritise HBM development and to build the customer relationship — specifically with Nvidia — that would make SK Hynix the primary memory supplier for the AI accelerator market. That decision, which looked aggressive at the time, has proven to be one of the most consequential strategic choices in the semiconductor industry's recent history.2

Samsung, which held 41% of the HBM market in Q2 2024, saw its share collapse to 17% in Q2 2025 after its HBM3E products failed to pass Nvidia's qualification tests — a technical setback that allowed SK Hynix to capture the supply contracts for the Blackwell generation of AI accelerators. The reversal is striking in its speed: Samsung went from dominant incumbent to distant third in the highest-value segment of the memory market in a single product generation. Samsung staged a partial recovery in Q4 2025 and is aggressively pursuing HBM4 qualification for Nvidia's Rubin platform, but the gap in operating profit tells the underlying story — SK Hynix posted record operating profit of ₩47.2 trillion for FY2025, surpassing Samsung's ₩43.6 trillion, the first time in the two companies' histories that SK Hynix has led on this metric.3

Micron, the US-based memory manufacturer, holds between 11% and 21% of the HBM market depending on the quarter, having positioned itself primarily with hyperscale cloud customers rather than Nvidia. Micron's December 2025 decision to exit the consumer memory market entirely — winding down its "Crucial" brand — signals a clear strategic choice to focus exclusively on high-margin data centre and automotive memory. The consolidation of the HBM market around three players, with two of them Korean, is effectively complete.4

The global HBM market will grow from $38 billion in 2025 to $58 billion in 2026, and Bank of America forecasts it will reach $100 billion by 2028. Korea controls the dominant share of an infrastructure component that the entire AI industry cannot function without. The strategic significance of that position extends well beyond any individual earnings cycle.

62%: SK Hynix's HBM market share in Q2 2025, with all DRAM, NAND, and HBM capacity sold out through 2026 — primarily to Nvidia for Blackwell and upcoming Rubin AI accelerators

$58bn: Projected global HBM market in 2026, up from $38bn in 2025 — with Bank of America forecasting the market to reach $100bn by 2028, a ~40% CAGR from current levels

2TB/s: Per-stack bandwidth of HBM4 entering production in 2026 — the interface width doubled to 2,048 bits versus HBM3E, paired with a 5nm logic base die, powering the next generation of AI accelerators
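As a sanity check on the "~40% CAGR" figure, the growth rates implied by the three market-size data points quoted above ($38bn in 2025, $58bn in 2026, $100bn in 2028) work out as follows:

```python
# Growth figures implied by the HBM market forecasts quoted above.
start_2025, f_2026, f_2028 = 38.0, 58.0, 100.0  # market size, $bn

yoy_2026 = f_2026 / start_2025 - 1                # one-year growth, 2025 -> 2026
cagr_2028 = (f_2028 / start_2025) ** (1 / 3) - 1  # three compounding years to 2028

print(f"2026 year-on-year growth: {yoy_2026:.0%}")
print(f"2025-2028 CAGR:           {cagr_2028:.0%}")
```

The implied three-year CAGR comes to roughly 38%, consistent with the "~40%" shorthand, while the single 2025-to-2026 jump of over 50% shows the growth is front-loaded relative to that smooth path.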

Why HBM4 is not just the next version

The transition from HBM3E to HBM4 is not the routine generational upgrade that characterises most semiconductor roadmaps. It is an architectural inflection that raises the barriers to entry in ways that matter structurally. Previous HBM generations used memory-process base dies — the bottom layer of the stack that contains the logic interface. HBM4 moves the base die to a 5nm or 4nm logic process, the same advanced nodes used for GPUs and processors. This means that HBM4 manufacturers need not just memory manufacturing capability but leading-edge logic process access — either their own, or through a foundry relationship with TSMC.

SK Hynix has addressed this by partnering with TSMC for its HBM4 base die — a relationship that gives it access to TSMC's 5nm process without needing to develop leading-edge logic manufacturing in-house. Samsung's response is to leverage its own foundry operations, attempting to manufacture the HBM4 base die internally using its own 4nm process. The question of whose approach wins in the HBM4 era is not settled, but the structural implication is clear: the combination of advanced DRAM stacking capability and access to leading-edge logic process creates a moat around HBM production that is substantially higher than anything in the HBM3 era. Chinese entrants, who were already struggling to match HBM3-class performance, face an effectively insurmountable barrier in HBM4.5

The geopolitical dimension

Korea's HBM dominance creates a geopolitical exposure that is as significant as the commercial opportunity. The US-China technology bifurcation — which has accelerated through successive rounds of export controls targeting advanced semiconductors — places Korean memory manufacturers in an awkward position. They are major suppliers to US chip designers, embedded in the AI infrastructure that the United States considers strategically critical. They also have substantial manufacturing operations in China and, until recently, meaningful sales into the Chinese market.

The export control regime has progressively restricted what Korean manufacturers can supply to Chinese customers, including in DRAM. The Chinese response — accelerating indigenous memory development through companies like CXMT (ChangXin Memory Technologies) — has not yet produced HBM products competitive with Korean standards, but the direction of travel is unambiguous. China is applying to memory the same patient, resource-intensive approach it applied to display panels, solar cells, and standard DRAM — building capability over a decade rather than competing immediately. The risk is not that China displaces Korean HBM dominance in 2026 or 2027. It is that, by the early 2030s, Chinese manufacturers achieve HBM3-class capability at a cost structure that makes them competitive for less demanding applications, gradually eroding the commodity memory market while Korea retreats upmarket.

The United States creates a different kind of exposure. Korea's semiconductor industry sits at the intersection of US-China competition in a way that makes it simultaneously indispensable and potentially captive. Washington's interest in ensuring that the most advanced AI memory does not reach China — and its interest in reshoring semiconductor manufacturing through the CHIPS Act — creates pressure on Korean companies to make production location decisions that serve US strategic interests rather than purely commercial ones. The recent US-Korea semiconductor cooperation framework, and the involvement of Korean manufacturers in US-based production discussions, reflect this dynamic. Korea's memory dominance gives it leverage in these negotiations. It also makes it a target for the geopolitical demands of both superpowers simultaneously.

The long duration of the thesis

The conventional objection to Korean memory as a long-term investment thesis is the cyclicality argument: memory is a commodity, oversupply cycles are inevitable, and today's margin expansion will be tomorrow's price war. That argument has destroyed capital in Korean semiconductor investments many times in the past and deserves serious consideration rather than dismissal.

What is genuinely different about HBM is the demand side. Standard commodity DRAM serves a market — PCs, smartphones, servers — where supply can, over a two to three year horizon, be expanded to meet demand. HBM serves a market where demand is being driven by a structural transformation in computing — the shift to AI-centric infrastructure — that is operating on a timeline of decades rather than years. The hyperscalers building AI data centres are not buying HBM because of a temporary enthusiasm; they are building the physical infrastructure for a computing paradigm shift. The capex commitments from Microsoft, Google, Amazon, and Meta for AI infrastructure through 2027 and beyond are not cyclical purchasing decisions.

The supply side constraint is equally structural. Building new HBM capacity is not a matter of ordering more wafers. It requires advanced packaging infrastructure — the bonding, stacking, and interposer systems that enable the 3D architecture — that takes years to establish and cannot be replicated quickly by new entrants. The new mega-fabs being built by both Samsung and SK Hynix — the P5 facility in Pyeongtaek and the M15X facility — are not expected to reach volume production until 2027 and mid-2027 respectively. The supply constraint that has sold out all HBM capacity through 2026 is not a temporary aberration. It reflects the genuine difficulty of scaling a manufacturing process that has no historical precedent for the volume being demanded.

Korea's position in HBM is therefore best understood not as a cyclical trade on a commodity upcycle, but as a structural position in the infrastructure of the AI era. The companies that manufacture HBM are, in a meaningful sense, the companies that manufacture the substrate on which AI runs. The returns from that position will not be linear — there will be qualification cycles, competitive pressures, and moments when Samsung's HBM4 competitiveness relative to SK Hynix reshapes the market dynamics. But the underlying direction — Korea at the centre of the most strategically important memory market in history — is a thesis with a duration measured in years, not quarters.