Today, we release V2 of the State of AI Report Compute Index in collaboration with the team at Zeta Alpha.
You'll now find live counts of AI research papers using chips from NVIDIA, TPUs, ASICs, FPGAs, and AI semi startups. You heard it here first 😉
NVIDIA is 2 orders of magnitude ahead of others
First, here's an updated count of papers using: any NVIDIA chip, Google’s TPU, ASICs, FPGAs, and chips from AI semiconductor challengers Graphcore, SambaNova Systems, Cerebras, Habana/Intel, and Cambricon. We also included Huawei’s Ascend 910.
You’ll notice that the 2022 full-year citation count (extrapolated from 4 Dec 2022) shows NVIDIA clocking over 21k papers using their technology. By contrast, all FPGAs sum to 740 papers, Google’s TPU comes in at 257, and the 5 AI startups together come to 172 papers.
This is an enormous gap (note the log scale).
NVIDIA’s most popular chip for AI research: the V100
In this graph, we show you NVIDIA-specific data. You can see that their most successful chip for AI research is the V100, released in Dec 2017. Rising fast are the RTX 3090 and the A100, a workhorse for AI workloads featured in our Compute Index focusing on private, public and national HPC clusters. Both the RTX 3090 and the A100 sit at roughly 50% of the V100's volume.
Since first publishing this data in mid-October 2022, we can now see the appearance of NVIDIA’s latest chip, the hotly awaited H100.
Graphcore leads its peer AI semiconductor startups
While overall counts are very low amongst AI semiconductor startups in AI research, the most usage in papers comes from Graphcore. Following them are Habana/Intel, Cambricon, Cerebras and SambaNova.
A few notes
- We take the view that usage of chips in AI research papers (early adopters) is a leading indicator of industry usage.
- Papers using AI semi startup chips almost all have authors from the startup.
- FY2022 is extrapolated from 4 Dec '22.
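The full-year extrapolation mentioned in the notes can be sketched as a simple pro-rata scaling of the year-to-date count. This is an assumption for illustration; the post doesn't state the exact method used.

```python
from datetime import date

def extrapolate_full_year(count_to_date: int, as_of: date) -> int:
    """Linearly scale a year-to-date paper count to a full-year estimate.

    Assumption: a simple pro-rata scaling by days elapsed; the report's
    actual extrapolation method is not specified.
    """
    year_start = date(as_of.year, 1, 1)
    days_elapsed = (as_of - year_start).days + 1
    days_in_year = (date(as_of.year, 12, 31) - year_start).days + 1
    return round(count_to_date * days_in_year / days_elapsed)

# E.g. a count observed on 4 Dec 2022 scales up by a factor of 365/338
print(extrapolate_full_year(19_500, date(2022, 12, 4)))
```

Under this scaling, a count observed on 4 Dec has roughly 8% added to reach the full-year figure.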
See the live charts here: www.stateof.ai/compute