Back in 1965, Gordon Moore, co-founder of Fairchild Semiconductor and later co-founder and CEO of Intel, wrote a paper that predicted the course of chip manufacturing for the next 50 years.  The prediction in that paper became known as Moore's Law.

Moore predicted that the number of transistors on a computer chip would double every year.  He only claimed this would hold for the next decade, yet the prediction (later revised to a doubling roughly every two years) went on to hold for the next 50 years!
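To get a feel for how quickly that doubling compounds, here's a small Python sketch.  The starting year and transistor count are illustrative assumptions rather than figures from Moore's paper, and it uses the later, revised rate of one doubling every two years:

```python
# Rough sketch of Moore's Law as exponential growth.
# start_year and start_count are illustrative assumptions, not real figures.
start_year, start_count = 1971, 2300

for year in range(start_year, start_year + 51, 10):
    doublings = (year - start_year) / 2          # one doubling every two years
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```

Run it and you'll see the count climb from a few thousand to tens of billions over five decades, which is roughly the trajectory real chips followed.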

This doubling of transistors was only possible because manufacturing technology allowed component sizes to shrink at a similar rate.  In 1974 the Intel 8080, one of the earliest microprocessors, was built using a 6-micrometer process.  Compare that with modern 7-nanometer processes, which are nearly 1,000 times smaller.  To put that into perspective, a sheet of paper is around 100,000 nanometers thick!  That's pretty small!
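If you want to check that arithmetic yourself, here's a quick sketch (the paper-thickness figure is the rough one quoted above):

```python
# Comparing the 1974 process size with a modern 7 nm process.
process_1974_nm = 6 * 1000     # 6 micrometers, expressed in nanometers
process_modern_nm = 7
paper_thickness_nm = 100_000   # rough thickness of a sheet of paper

print(process_1974_nm / process_modern_nm)     # ~857, i.e. nearly 1,000x smaller
print(paper_thickness_nm / process_modern_nm)  # ~14,286 modern features per paper thickness
```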

It's this minuscule size which is bringing an end to the rule of Moore's Law.  The microscopic scale of transistors on modern chips is approaching physical limits that engineers can't easily get around.

Instead, chip makers like AMD and Intel are looking to other technologies to keep improving the speed of their chips.  After all, who's going to buy a new computer in a few years if it's no faster than the one they already own?

A little background on chips first though.

There are three major factors which dictate how fast a chip is.

  • The clock speed of the chip
  • The number of cores on a chip
  • The amount of memory on a chip

The clock speed is the number of processing steps a chip can take per second.  Think of a chip as a production line: instructions are passed along the line one at a time, and the faster the line runs, the faster the instructions are processed.  Modern chips are measured in gigahertz, and one gigahertz equates to one billion (1,000,000,000) cycles per second.  Imagine the chaos if a production line ran that fast!
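As a quick illustration of just how fast that is, this sketch works out how long a single cycle lasts on a hypothetical 3.5 GHz chip (the clock speed here is just an example figure):

```python
# How long one clock cycle lasts at a given clock speed.
clock_speed_hz = 3.5e9                  # 3.5 GHz, an illustrative figure
seconds_per_cycle = 1 / clock_speed_hz
print(f"{seconds_per_cycle * 1e9:.3f} nanoseconds per cycle")  # ~0.286 ns
```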

Using the same production-line analogy, think of processor cores as separate production lines.  A processor with 4 cores effectively has 4 production lines running in parallel with each other.  This isn't always strictly true, as not every problem can be split up this way, but modern computers are very good at making use of multiple cores.  If you're playing music on a computer while browsing the web, those two tasks can run on two different cores at the same time without slowing each other down.  Taken to the extreme, highly complex workloads such as nuclear simulations use thousands of processors, each with multiple cores, to run enormous numbers of calculations simultaneously.
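To make the idea concrete, here's a minimal Python sketch that farms work out to four worker processes, roughly one per core.  The worker function and the numbers are purely illustrative placeholders, not anything chip-specific:

```python
# A minimal sketch of spreading work across cores with multiprocessing.
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` the slow way, just to keep a core busy."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    jobs = [50_000] * 4                 # four chunks of work
    with Pool(processes=4) as pool:     # four "production lines"
        results = pool.map(count_primes, jobs)
    print(results)
```

On a 4-core machine the four chunks run side by side, so the whole job finishes in roughly the time one chunk takes on its own.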

Thirdly, there's on-chip memory.  This is different to the general memory (RAM) in your computer.  It takes a long time, relatively speaking, for a chip to read from and write to main memory, so if the same information is being used repeatedly it makes sense to keep hold of it on the chip itself.  This temporary on-chip memory is called a cache, and the more cache you have, the less time the chip spends waiting around for the slower main memory.
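The same principle shows up in software, and a tiny Python example makes it easy to see.  Here functools.lru_cache plays the role of the chip's cache, and the sleep stands in for a slow trip out to main memory (the function and timings are purely illustrative):

```python
# Caching in miniature: keep data you use repeatedly close at hand
# so you don't keep paying the cost of fetching it.
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def slow_lookup(address):
    time.sleep(0.01)      # pretend this is a slow read from main memory
    return address * 2

start = time.perf_counter()
for _ in range(100):
    slow_lookup(42)       # the first call is slow; the other 99 hit the cache
print(f"{time.perf_counter() - start:.3f} s for 100 reads of the same address")
```

Without the cache this loop would take about a second; with it, only the first read pays the full cost.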

Still with me?

The race for more cores, larger caches and faster clock speeds has led to chips that run so hot, and are so large, that they're at the limits of what a desktop computer can cope with (let alone a laptop with its tiny heatsinks and fans).  Smaller transistors mean cooler, smaller chips, but manufacturers are fast running out of options because transistors can't be shrunk much further.

Instead, they're looking at new ways to join smaller chips together.  AMD has developed a fast interconnect, which it calls "Infinity Fabric", so that cores and cache which previously had to sit on the same chip can now live on separate chips joined together.  This allowed AMD to leapfrog Intel by announcing a 32-core chip, a core count that previously seemed out of reach.

Intel is catching up with its own technology called "Embedded Multi-Die Interconnect Bridge" (or EMIB).  However, Intel has been struggling to reach the same 7-nanometer process size and has been stuck on 14 nanometers.

As a result, computer manufacturers have been unable to announce significantly faster machines this year, with speed improvements lagging behind those of previous years.

Intel's bet seems to be on a completely new technology it has been working on in the background.  Traditionally, transistors have been laid out on a flat die, side by side: the more transistors you need, the larger the die.  Intel's new approach is to "stack" dies on top of each other instead of next to each other.  This could deliver a big jump in speed, but it'll be a challenge to keep these chips cool.  A large, flat chip can sit in direct contact with a heat sink, whereas in a stacked design only the top layer of the stack touches it.  To cope, Intel is designing the stack so that the coolest layers sit at the bottom, with the hottest at the top.

It'll be interesting to watch the race between AMD and Intel over the next year as they both work to revolutionise computer chips as we know them!