Google uses AI to design processors

Google engineers have tasked an AI with designing faster, more efficient processors, then used those chip designs to develop the next generation of specialized computers that run the same type of AI algorithms.

Google operates at such a large scale that it designs its own computer chips rather than buying commercial products. This lets the company optimize the chips for its own software, but the process is expensive and time-consuming: developing a custom chip usually takes two to three years.

One stage of chip design, known as floorplanning, takes the completed circuit design of a new chip and arranges its millions of components into an efficient layout for manufacturing. Although the chip's functional design is final at this stage, the layout can have a big impact on speed and power consumption. For smartphone chips, low power consumption to extend battery life may be the priority, while for a data center, maximum speed may matter more.

Until now, floorplanning has been a laborious and time-consuming task, says Anna Goldie of Google. Teams split larger chips into blocks and work on the pieces in parallel, hunting for small refinements, she says.

But Goldie and her colleagues have created software that turns the floorplanning problem into a task for a neural network. It treats an empty chip and its millions of components as a complex jigsaw puzzle with a vast number of possible solutions. The goal is to position all the components, and the connections between them, so as to optimize the parameters that engineers consider most important.
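To make the underlying optimization problem concrete, here is a deliberately tiny sketch, not Google's actual neural-network method: it places four hypothetical components on a 2x2 grid and uses random-swap hill climbing to minimize total wirelength (the sum of distances between connected components). The component names, connections, and grid size are all illustrative assumptions.

```python
import random

# Hypothetical netlist: pairs of components that must be wired together.
NETS = [("a", "b"), ("b", "c"), ("a", "d"), ("c", "d")]
COMPONENTS = ["a", "b", "c", "d"]
SLOTS = [(x, y) for x in range(2) for y in range(2)]  # 2x2 placement grid

def wirelength(placement):
    # Total Manhattan distance between connected components;
    # real tools optimize richer objectives (timing, power, congestion).
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1])
               for u, v in NETS)

def optimize(steps=1000, seed=0):
    rng = random.Random(seed)
    slots = SLOTS[:]
    rng.shuffle(slots)
    placement = dict(zip(COMPONENTS, slots))  # random initial layout
    best = wirelength(placement)
    for _ in range(steps):
        # Propose swapping the positions of two components.
        u, v = rng.sample(COMPONENTS, 2)
        placement[u], placement[v] = placement[v], placement[u]
        cost = wirelength(placement)
        if cost <= best:
            best = cost  # keep swaps that don't make things worse
        else:
            placement[u], placement[v] = placement[v], placement[u]  # undo
    return placement, best
```

A real chip has millions of components, so exhaustive or simple local search becomes impractical; that combinatorial explosion is what motivates learning a placement policy instead.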

In under six hours, the team’s software produced chip layouts that matched or beat layouts produced by human engineers over several months, as measured by power consumption, performance, and chip area. An existing software tool called RePlAce, which completes layouts at a similar speed, failed to match either the humans or the AI in any of the tests.

The chip design used in the experiments was the latest version of Google’s Tensor Processing Unit (TPU), which is built to run the same type of neural network algorithms used in the company’s search engine and machine-translation tools. It is conceivable that this new AI-designed chip will one day be used to design its successor, and that the successor will in turn be used to design its own replacement.

The team believes the same neural network approach can be applied to other time-consuming stages of chip design, cutting overall design time from years to days. Google plans to put the technique to use soon, since even small improvements in speed or power consumption make a big difference at the scale the company operates.

“There are high opportunity costs to not launching the next generation. Suppose the new one is much more energy-efficient: the impact machine learning can have on the environmental footprint of machine learning, given how widely it is used across data centers, is very valuable. Even launching a day earlier makes a big difference,” says Goldie.
