Google DeepMind says its artificial intelligence has helped design chips that are already being used in data centres and even smartphones. But some chip design experts are sceptical of the company’s claims that such AI can plan new chip layouts better than humans can.
The newly named AlphaChip method can design “superhuman chip layouts” in hours, rather than relying on weeks or months of human effort, said Anna Goldie and Azalia Mirhoseini, researchers at Google DeepMind, in a blog post. This AI approach uses reinforcement learning to figure out the relationships among chip components and gets rewarded based on the final layout quality. But independent researchers say the company has not yet proven such AI can outperform expert human chip designers or commercial software tools – and they want to see AlphaChip’s performance on public benchmarks involving current, state-of-the-art circuit designs.
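To make the reward structure concrete, here is a minimal sketch of the general idea only, not Google's actual AlphaChip system: a toy "agent" places a handful of components on a small grid and is scored solely on the finished layout, with negative total wirelength standing in for layout quality. Random search substitutes for the learned reinforcement-learning policy, and the grid size, component count and connections are all invented for illustration.

```python
import random

# Toy illustration of the reward structure described above, NOT Google's
# actual AlphaChip system. Components are placed on a small grid and the
# finished layout is scored by negative total wirelength; random search
# stands in for the learned placement policy.

GRID = 8                                  # hypothetical 8x8 placement grid
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]   # made-up connections between components

def wirelength(positions):
    # Sum of Manhattan distances between connected components (toy metric).
    return sum(abs(positions[a][0] - positions[b][0]) +
               abs(positions[a][1] - positions[b][1]) for a, b in NETS)

def random_layout():
    # Place components 0..3 on distinct grid cells, then score the result.
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], 4)
    positions = dict(enumerate(cells))
    return positions, -wirelength(positions)   # reward = negative wirelength

# Keep the best-scoring layout seen over many rollouts.
best_layout, best_reward = max((random_layout() for _ in range(2000)),
                               key=lambda item: item[1])
print("best reward (negative wirelength):", best_reward)
```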
“If Google would provide experimental results for these designs, we could have fair comparisons, and I expect that everyone would accept the results,” says Patrick Madden at Binghamton University in New York. “The experiments would take at most a day or two to run, and Google has near-infinite resources – that these results have not been offered speaks volumes to me.”
Google DeepMind’s blog post accompanies an update to the company’s 2021 paper in the journal Nature describing its AI process. Since then, Google DeepMind says, AlphaChip has helped design three generations of Google’s Tensor Processing Units (TPUs) – specialised chips used to train and run generative AI models for services such as Google’s Gemini chatbot.
The company also claims that the AI-assisted chip designs perform better than those designed by human experts and have been improving steadily. The AI achieves this by reducing the total length of wires required to connect chip components – a factor that can lower chip power consumption and potentially improve processing speed. And Google DeepMind says that AlphaChip has created layouts for general-purpose chips used in Google’s data centres, along with helping the company MediaTek develop a chip used in Samsung mobile phones.
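Placement tools commonly estimate total wirelength with a proxy such as half-perimeter wirelength (HPWL): for each net, the half-perimeter of the bounding box around its pins. The article does not say which metric AlphaChip optimises, so the sketch below, with made-up pin names and coordinates, is a generic illustration of why tighter placements score better, not a description of Google's method.

```python
# Generic half-perimeter wirelength (HPWL) estimate, a common proxy in chip
# placement. Pin names, nets and coordinates here are invented examples.

def hpwl(nets, pin_positions):
    """Sum of bounding-box half-perimeters over all nets.

    nets: list of lists of pin names, e.g. [["a", "b", "c"], ["b", "d"]]
    pin_positions: dict mapping pin name -> (x, y) coordinate
    """
    total = 0.0
    for net in nets:
        xs = [pin_positions[p][0] for p in net]
        ys = [pin_positions[p][1] for p in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Shorter bounding boxes around connected pins give a lower estimate.
pins = {"a": (0, 0), "b": (3, 1), "c": (1, 4), "d": (3, 3)}
print(hpwl([["a", "b", "c"], ["b", "d"]], pins))   # 7.0 + 2.0 = 9.0
```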
“We really don’t know what AlphaChip is today, what it does and what it doesn’t do,” says Igor Markov, a chip design researcher at a competing firm. “We do know that reinforcement learning takes two to three orders of magnitude greater compute resources than methods used in commercial tools and is usually behind [in terms of] results.”
Markov and Madden critiqued the original paper’s claims about AlphaChip outperforming unnamed human experts. “Comparisons to unnamed human designers are subjective, not reproducible, and very easy to game. The human designers may be applying low effort or be poorly qualified – there is no scientific result here,” says Markov. “Imagine if AlphaGo reported wins over unnamed Go players.” A Google DeepMind spokesperson described the experts as members of Google’s TPU chip design team using the best available commercial tools.
In 2023, an independent expert who had reviewed Google’s paper retracted his Nature commentary article that had originally praised Google’s work but had also urged replication. That expert, Andrew Kahng at the University of California, San Diego, also ran a public benchmarking effort that tried to replicate Google’s AI method and found it did not consistently outperform a human expert or conventional computer algorithms. The best-performing methods used for comparison were commercial software or internal research tools for chip design from companies such as Cadence and NVIDIA. In a 2023 statement, Goldie and Mirhoseini disputed Kahng’s benchmarking results. They said his tests had not pretrained the AI method on specific chip designs – a crucial factor in its performance – and relied upon “far fewer compute resources” than Google DeepMind’s team to train the AI.
“On every benchmark where there’s what I would consider a fair comparison, it seems like reinforcement learning lags behind the state of the art by a wide margin,” says Madden. “For circuit placement, I don’t believe that it’s a promising research direction.”