Breaking the MegaHertz barrier

Tagged: cpu, Computer Hardware, Technology
Source: Ars Technica
Posted: 4 years 38 weeks ago

We're rapidly closing in on a decade since the first desktop processors cleared the 3GHz mark, but in a stunning break from earlier progress, the clock speeds of the top processors have stayed in roughly the same neighborhood ever since. Meanwhile, the feature-size shrinks that have at least allowed additional processing cores to be added to the hardware are edging up against the limits of photolithography. With that as a backdrop, today's issue of Science contains a series of perspectives that consider whether it's time to move beyond semiconductors and, if so, what we might move to.

The basic problem, as presented by IBM Research's Thomas Theis and Paul Solomon, is that scaling the frequency up has required scaling the switching voltage down as transistors shrink. Once that voltage gets small enough, the narrowing difference between on and off states causes problems through some combination of two factors: either the off state leaks (creating heat and power consumption problems), or the device switches slowly, meaning a lower clock speed. Faced with a "choose any two" among speed, size, and power, chipmakers have done pretty well by focusing on the latter two, but that trade-off now has physicists and materials scientists thinking it might be time to look elsewhere.
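
The numbers behind that trade-off aren't spelled out in the perspectives summarized here, but the textbook version is straightforward: dynamic switching power scales as CV²f, which is why lowering the voltage is so attractive, while Boltzmann statistics cap how sharply a conventional transistor can turn off at roughly 60 mV per decade of current at room temperature. A quick sketch using standard physical constants (the specific capacitance, voltage, and frequency values below are illustrative, not taken from the article):

```python
import math

# Boltzmann ("thermionic") subthreshold limit for a conventional MOSFET:
# S = (kT/q) * ln(10) volts per decade of drain current.
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

swing_mV_per_decade = (k * T / q) * math.log(10) * 1e3
print(f"subthreshold swing limit: {swing_mV_per_decade:.1f} mV/decade")

# Dynamic switching power scales as C * V^2 * f, so halving the supply
# voltage cuts switching power fourfold -- but only if the device still
# turns off cleanly at the lower voltage.
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts**2 * f_hertz

p_full = dynamic_power(1e-15, 1.0, 3e9)   # 1 fF node, 1 V, 3 GHz
p_half = dynamic_power(1e-15, 0.5, 3e9)   # same node at half the voltage
print(f"power ratio at half voltage: {p_full / p_half:.0f}x lower")
```

That roughly 60 mV/decade floor is what the switch alternatives discussed below, such as tunnel FETs, are trying to beat: get a sharper on/off transition than thermionic physics allows, and you can keep lowering the voltage without the off state leaking.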

With four individual perspectives loaded with technical information, it's not realistically possible to dive into the details of each, so what follows is a top-down overview of some of the arguments that are advanced by the various authors.

Forget clock speed entirely

It's not that the authors of this perspective think continued progress in the sort of electronics that appear in laptops is unimportant; they just suggest it will become progressively less interesting as we focus on small, flexible systems that can be put in portable devices like smartphones, integrated into things (like clothing) that don't currently contain electronics, and ultimately find their way into implantable medical devices. For all of these applications, flexing and stretching matter more than raw speed.

We've covered a variety of approaches to getting bits to bend, and the perspective breaks them down into two basic categories: either make the electronics themselves flexible, or make them small and connect them with flexible material. In the former category, the obvious choice would be some sort of organic transistor, but the authors suggest the need for this is overstated. Standard silicon, fashioned into a thin ribbon, is actually remarkably robust when flexed. The trick is to embed the ribbon in a stable, flexible substrate, and to accept that the device will never match the power of the complex, multi-layer chips we use today.

The alternative is to keep the electronics rigid, but make them extremely small and simple, so that they don't occupy much space. These mini-chips can then be embedded in a flexible material without changing its bulk properties. All that's left is wiring them up and providing them with power, and a number of materials—metals, silicon, and a carbon-nanotube derivative called "buckypaste"—can provide flexible, bendable wiring. Both approaches are already working in the lab, and the primary challenges tend to involve integrating materials that have very different properties in terms of hydrophobicity, heat dissipation, etc.

More bang for your volt

The IBM duo mentioned above reason that, if the problem is that we can't switch existing gates well with small voltage changes, it's time to find a switch that amplifies the impact of a small voltage change. So they consider two approaches that let a voltage change have nonlinear effects. The first is a device called an "interband tunnel FET." In the on state, electrons have easy access to a valence band they can tunnel into; a small change in voltage, however, makes that valence band inaccessible, creating a sharp, leakage-proof off state. The problem with this approach is that, right now, we can build these devices with carbon nanotubes, but not with silicon.

The alternative is to build some sort of gain device into the circuitry that amplifies a small input voltage. A sandwich of ferroelectric and dielectric layers apparently allows the ferroelectric layer to switch its bulk behavior between two polarization states, giving a small voltage input an all-or-nothing impact. Adding these devices would obviously increase the size of a gate but, at the moment, the real problem is switching speed: in theory, these things could switch in less than a picosecond, but actual implementations take 70 to 90 ps.

Forget silicon entirely

The remaining two perspectives focus on the promise of transition metal oxides. The unusual electronic properties of these materials were made famous by high-temperature superconductivity, but this is a very diverse group of materials with a huge range of properties. Bonds between oxygen and metals like titanium and lanthanum are extremely ionic in nature, which brings the large, electron-rich d-orbitals of the metals to the fore. Depending on the precise structure of the material and the additional metals present (Zn, Mg, and Sr appear to be common), the large collection of d-orbital electrons acts as a bulk material.

And, just like any bulk material, the electrons can have phases, including solids, liquids, gases, superfluids, and liquid crystals; there are also property-based phases, like spin and orbital liquids. Where there are phases, there are phase transitions, which can be induced by electric and magnetic fields, among other factors. So the potential is there for a small input to have a significant impact on a large collection of electrons. So far, the first demonstrations have come in the form of different types of RAM based on ferroelectric, magnetic, and resistive switching.

Things get even more interesting when the interfaces between different oxide layers are considered. We've covered one report in the past that described how the interface between two transition metal oxide insulators could host superconductivity, and a variety of other interesting effects are described here. Some of these have already been demonstrated to switch states at feature sizes below 10 nm; in a different material, an atomic force microscope has written conducting lines at 2 nm resolution.

A decade or more ago, the problem with these materials was exerting any control over their formation, but we've now gained the ability to deposit layers with a precision of a single unit cell of the crystal. The roadblock now is theory; as one perspective puts it, the large numbers of electrons involved create a many-body problem that we can't really solve. More generally, there are a lot of transition metals, and a lot of complex oxide combinations (LaAlO3-SrTiO3 and La2/3Ca1/3MnO3 are just two of the many mentioned). Right now, theory simply hasn't reached the point where we can accurately model the effect of bringing these materials together, which makes designing anything with specific properties very hit-or-miss.

The overall message is that we're a long way from seeing anything resembling these ideas in a device, with the possible exception of bendable circuitry. For the moment, this hasn't been a crisis, as the fab-makers have managed to stretch out photolithography, and multicore processors are being put to reasonably good use. Still, the payoff from additional cores is likely to shrink fast, and it's nice to think that there may be something on the horizon that could restart a megahertz race.