Moore's Law

Moore's Law is the observation, first made by Intel co-founder Gordon Moore in 1965 and revised in 1975 to its now-familiar form, that the number of transistors on an integrated circuit doubles approximately every two years, with the cost per transistor halving on a similar schedule. For five decades, this exponential curve was the metronome of the technology industry: it set the pace of computing improvement, defined hardware product cycles, and created the economic conditions for the PC, the internet, mobile computing, and cloud infrastructure.

The Engine of the Digital Age

Moore's original paper predicted the trend would hold for at least a decade. It held for fifty years. The number of transistors on a leading-edge chip went from about 2,300 (Intel 4004, 1971) to over 100 billion (Apple M2 Ultra, 2023) — a roughly 40-million-fold increase. This exponential improvement didn't just make computers faster; it made entirely new categories of computing possible. Each order-of-magnitude increase in transistor density enabled new applications that couldn't have existed at the previous level: mainframes begat minicomputers, which begat PCs, which begat smartphones, which begat the cloud infrastructure running today's AI.
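
As a rough check on that fifty-year run, the sketch below computes the overall growth factor and the implied doubling time from the figures cited above; the only assumption is treating "over 100 billion" as a round 100 billion.

```python
import math

# Figures from the text: Intel 4004 (1971) vs. Apple M2 Ultra (2023).
transistors_1971 = 2_300
transistors_2023 = 100e9   # "over 100 billion", treated here as a round lower bound
years = 2023 - 1971

growth_factor = transistors_2023 / transistors_1971           # ~43 million-fold
doublings = math.log2(growth_factor)                          # ~25 doublings
implied_doubling_time = years / doublings                     # ~2.0 years

print(f"growth factor:         {growth_factor:,.0f}x")
print(f"number of doublings:   {doublings:.1f}")
print(f"implied doubling time: {implied_doubling_time:.2f} years")
```

The implied doubling time lands almost exactly on the canonical two years, which is the sense in which the trend "held" for half a century.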

Moore's Law is the archetypal exponential, and its history traces the classic Six Ds pattern. For decades, improvements were "deceptive," visible only to engineers tracking transistor counts. Then they became disruptive (the PC revolution), then demonetizing (computing became cheap enough for everyone), then democratizing (a smartphone in every pocket with more compute than a 1990s supercomputer). The entire trajectory of deflationary technology in computing rests on Moore's foundation.

The Slowdown and What Replaced It

Moore's Law in its classical form has slowed. Transistor dimensions are now measured in single-digit nanometers, approaching the scale where quantum effects such as tunneling leakage make further miniaturization unreliable. Dennard Scaling, the companion observation that power density stays roughly constant as transistors shrink because operating voltage and current scale down with their dimensions, broke down around 2006 once voltages could no longer be reduced; that is why clock speeds plateaued even as transistor counts continued to rise. The industry shifted from making individual cores faster to adding more cores, but general-purpose software couldn't easily exploit that parallelism.
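
A minimal sketch of why that breakdown mattered, assuming the textbook dynamic-power relation P = C * V^2 * f and the classic Dennard recipe of shrinking dimensions and voltage by the same factor each generation (the factor k = 1.4 below is a conventional illustration, not measured data):

```python
# Stylized arithmetic only: classic Dennard scaling vs. the post-2006 regime
# in which supply voltage stops shrinking. Normalized units, not real chips.

def dynamic_power(capacitance, voltage, frequency):
    """Switching power of a transistor: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

k = 1.4  # linear scaling factor per process generation (~0.7x dimensions)

# Ideal Dennard recipe: C and V shrink by 1/k, f rises by k, and k^2 more
# transistors fit in the same area. Per-transistor power falls by 1/k^2,
# so power per unit area (power density) stays constant.
ideal_per_transistor = dynamic_power(1 / k, 1 / k, k)                 # ~0.51
print("ideal power density:", ideal_per_transistor * k ** 2)          # ~1.0

# Post-2006: voltage is stuck near its floor. If frequency still rose by k,
# per-transistor power would stay flat and power density would grow by ~k^2
# (roughly 2x) every generation, the "power wall" that capped clock speeds.
stuck_per_transistor = dynamic_power(1 / k, 1.0, k)                   # ~1.0
print("power density with flat voltage:", stuck_per_transistor * k ** 2)  # ~2.0
```

With voltage pinned, keeping chips inside their power budget meant holding clock speeds flat, which is why the extra transistors went into more cores rather than faster ones.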

This is precisely what created the opening for NVIDIA and the GPU revolution. GPUs were already designed for massive parallelism (rendering millions of pixels simultaneously), which made them naturally suited for the matrix operations that neural networks require. As Moore's Law decelerated for general-purpose CPUs, Huang's Law accelerated for AI-optimized GPUs — delivering performance improvements of roughly 1,000× over a decade through a combination of architectural innovation, specialized silicon (tensor cores), new numerical formats, and software co-optimization. Moore's Law gave us the digital age; Huang's Law is giving us the AI age.
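
To put the two curves on a common footing, the quick comparison below converts each into an implied annual improvement rate; the roughly 1,000x-per-decade figure for Huang's Law and the two-year doubling for Moore's Law are the stylized rates cited in this section, not measured constants.

```python
# Stylized comparison of the annual improvement rates implied by the two "laws".
moore_annual = 2 ** (1 / 2)        # doubling every 2 years  -> ~1.41x per year
huang_annual = 1000 ** (1 / 10)    # ~1,000x per decade      -> ~2.0x per year

print(f"Moore's Law: ~{moore_annual:.2f}x per year")
print(f"Huang's Law: ~{huang_annual:.2f}x per year")

# Over a decade the difference compounds: AI-optimized compute pulls ahead of
# general-purpose compute by roughly this factor.
print(f"10-year divergence: ~{(huang_annual / moore_annual) ** 10:.0f}x")
```

Compounding the difference for ten years yields a divergence on the order of 30x, which is the widening gap the next paragraph describes.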

The relationship between the two laws matters for understanding the economics of the agentic economy. Moore's Law improvements were broadly distributed — every computer, phone, and device benefited. Huang's Law improvements are concentrated in AI workloads, which means the performance frontier is diverging: AI compute is advancing far faster than general compute, creating a growing gap between what AI systems can do and what traditional software can do. This gap is one reason the Scaling Hypothesis has held: AI capabilities have been improving on a faster exponential than the one the rest of the industry runs on.

Moore's Law as Economic Force

Beyond the technology, Moore's Law was an economic phenomenon. It functioned as a coordination mechanism for the entire semiconductor industry: chipmakers, equipment suppliers, software vendors, and device manufacturers all planned their roadmaps around the expected pace of improvement. Wright's Law operated underneath it — the learning curve effects of cumulative chip production drove the cost reductions that Moore's Law described. The two laws reinforced each other: Moore's Law set expectations, Wright's Law delivered the cost curves, and Jevons' Paradox ensured that cheaper compute created more demand, funding the next cycle of investment.
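
A minimal sketch of the Wright's Law mechanism referenced above, assuming the standard learning-curve form in which unit cost falls by a fixed fraction with every doubling of cumulative production; the 20% learning rate below is purely illustrative, not a semiconductor-specific estimate.

```python
import math

def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Wright's Law learning curve: each doubling of cumulative production
    cuts unit cost by `learning_rate` (20% here, purely illustrative).
    cost(n) = cost(1) * n ** -b, where b = -log2(1 - learning_rate)."""
    b = -math.log2(1 - learning_rate)
    return first_unit_cost * cumulative_units ** -b

# Cost of the 1st, 1,000th, and 1,000,000th unit under a 20% learning rate.
for n in (1, 1_000, 1_000_000):
    print(f"unit {n:>9,}: {wright_unit_cost(100.0, n):8.2f}")
```

Because the cost decline is indexed to cumulative volume rather than to calendar time, the demand growth that Jevons' Paradox describes feeds directly back into the cost curve, which is the reinforcing loop the paragraph above sketches.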

The compute capital markets that now define AI investment are a direct descendant of Moore's Law economics: the expectation of exponential improvement drives massive upfront capital expenditure, justified by the belief that the performance curve will continue. Whether that curve is now driven by Moore's Law, Huang's Law, or some combination of both, the economic logic — invest ahead of the curve, because the curve rewards scale — remains the same.