

The State of AI Report 2022 is now live!
This year, new research collectives have open-sourced breakthrough AI models developed by large centralized labs at a never-before-seen pace. By contrast, the large-scale AI compute infrastructure that has enabled this acceleration remains firmly concentrated in the hands of NVIDIA, despite investments by Google, Amazon, Microsoft and a range of startups.
Produced in collaboration with my friend Ian Hogarth, this year’s State of AI Report also points to an increase in awareness among the AI community of the importance of AI safety research, with an estimated 300 safety researchers now working at large AI labs, compared to under 100 identified in last year’s report.
Small, previously unknown labs like Stability.ai and Midjourney have developed text-to-image models of similar capability to those released by OpenAI and Google earlier in the year, and made them available to the public via API access and open sourcing. Stability.ai’s model cost less than $600,000 to train, while Midjourney’s is already proving profitable and has become one of the leaders in the text-to-image market alongside OpenAI’s DALL-E 2. This marks a fundamental shift away from the previously accepted dynamic in which the largest labs, with the most resources, data, and talent, would continually produce the breakthrough research.
Meanwhile, AI continues to advance scientific research. This year saw the release of 200M protein structure predictions using AlphaFold, DeepMind’s advance in nuclear fusion by training a reinforcement learning system to adjust the magnetic coils of a tokamak, and the development of a machine learning algorithm to engineer an enzyme capable of degrading PET plastics. However, as more AI-enabled science companies enter the field, we also explore how methodological failures like data leakage, and the ongoing tension between the speed of AI development and the slower pace of scientific discovery, might shape this landscape.
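To make the data leakage failure mode concrete, here is a minimal, hypothetical sketch (not taken from the report or any specific paper) of how it typically arises: a pre-processing step is fit on the full dataset before cross-validation, so information from the evaluation folds leaks into training and inflates the reported score.

```python
# Hypothetical illustration of data leakage (assumed example, not from the report):
# selecting features on the FULL dataset before cross-validation leaks label
# information from the evaluation folds into training.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))   # pure-noise features
y = rng.integers(0, 2, size=200)   # random labels: chance accuracy is ~0.5

# Leaky protocol: feature selection sees every label, including evaluation folds.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

# Sound protocol: selection is refit inside each training fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky cross-validation accuracy:  {leaky:.2f}")   # optimistically inflated
print(f"honest cross-validation accuracy: {honest:.2f}")  # close to chance (~0.5)
```

With 1,000 noise features and only 200 samples, the leaky protocol typically reports accuracy well above chance, which is exactly the kind of over-optimistic result that makes AI-enabled scientific claims hard to reproduce.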
Key takeaways
We hope the report has something for everyone, from AI research to politics. Here are four key findings:
1. Independent research labs are rapidly open-sourcing the closed-source output of major labs. New independent research labs are producing research competitive with the output of major players. Despite the dogma that AI research would become increasingly centralized among a few large players, lower compute costs and broader access to compute have led to state-of-the-art research coming out of much smaller, previously unknown labs.
2. While software is rapidly diffusing through open source, the AI compute layer is still firmly dominated by NVIDIA. Usage of NVIDIA hardware in research outstrips TPUs and chips from AI startups by orders of magnitude.
3. Safety is gaining awareness among major AI research entities: an estimated 300 safety researchers now work at large AI labs, compared to under 100 in last year’s report. The increased recognition of leading AI safety academics is a promising sign that AI safety is becoming a mainstream discipline.
4. AI-driven scientific research continues to lead to breakthroughs, ranging from COVID-19 variant prediction, to the identification of small molecules from natural compounds, to the control of a nuclear fusion reactor.
The report is a collaborative project and we’re incredibly grateful to Othmane Sebbouh, who made significant contributions for a second year running, and Nitarshan Rajkumar, who supported us this year, particularly on A(G)I Safety. Thank you to our Reviewers and to the AI community who continue to create the breakthroughs that power this report.
We write this report to compile and analyze the most interesting things we’ve seen, with the aim of provoking an informed conversation about the state of AI. We would love to hear your thoughts on the report, your take on our predictions, or any suggested contributions for next year’s edition.
Enjoy reading!
Nathan and Ian