The 10 Hottest AI Chipmakers You Should Be Watching In 2021
Investors are putting hundreds of millions of dollars into AI chip startups that are developing new optimized architectures for training and inference while semiconductor giants Intel and Nvidia seek to protect their market-share dominance with expanding product portfolios.
The AI Chip Race Continues
The demand for differentiated, optimized AI chip architectures remains high if the recent massive funding rounds of several startups are any indication.
Just a handful of AI chip startups have raised nearly $1.5 billion in the past six or so months alone, and that doesn’t even cover all the AI hardware activity that has happened in the past few years. It also doesn’t include the investments semiconductor giants Intel and Nvidia have made in expanding their AI chip design efforts and product portfolios.
What follows are the 10 hottest AI chipmakers you should be watching in 2021, a list that includes several startups with fresh funding rounds as well as a couple of large, established companies.
Cerebras Systems
Top Executive: Andrew Feldman, CEO
Cerebras Systems is thinking big, real big, with its massive Wafer Scale Engine 2 chip, which it calls the “largest AI processor ever made.” The Los Altos, Calif.-based startup’s recently unveiled WSE-2 chip comes with 2.6 trillion transistors, 850,000 cores and 40 GB of on-chip memory, which it says gives the WSE-2 a massive advantage over Nvidia’s flagship A100 data center GPU. The WSE-2 powers the startup’s purpose-built CS-2 AI system, which Cerebras says can deliver “more compute performance at less space and less power than any other system.” The startup’s systems have been adopted by the University of Edinburgh’s EPCC supercomputing center, GlaxoSmithKline and Tokyo Electron Devices as well as the U.S. Department of Energy’s Argonne National Laboratory and Lawrence Livermore National Laboratory.
Graphcore
Top Executive: Nigel Toon, CEO
Graphcore is taking on Nvidia with its intelligence processing unit chips, which it says are built from the ground up for the “fine-grained parallelism” and high memory capacity required by AI applications, unlike GPUs. The Bristol, U.K.-based chip startup is making a big sales push this year for its Colossus MK2 IPUs along with the M2000 systems and IPU-POD clusters that use the IPUs. The startup received a major boost for sales, marketing and research and development with a $222 million funding round it raised late last year, which brought its valuation to $2.77 billion. Most recently, the startup expanded its sales presence in North America with new and existing channel partners, including Trace3.
Groq
Top Executive: Jonathan Ross, CEO
Groq wants to win the AI chip crown from Nvidia with a simplified chip architecture design that it says is “radically improved” over GPU architectures, allowing it to provide 10 times lower latency. The Mountain View, Calif.-based startup said in April that it had closed $300 million in new funding from investors, which it said reflected the “strong customer endorsement” it has received for its first generation of tensor streaming processors. The startup last fall started shipping TSP-packed Groq cards along with Groq server nodes, which it says offer “unprecedented” total cost of ownership benefits thanks to the systems’ high performance and low power.
Intel
Top Executive: Pat Gelsinger, CEO
Intel is taking a heterogeneous approach to AI computing that makes use of the chipmaker’s CPUs, discrete and integrated graphics products as well as specialty processors like the Gaudi and Goya accelerator chips from the company’s Habana business. The semiconductor giant’s Habana chips have been recently adopted by Amazon Web Services in new cloud instances—which Intel says provide 40 percent better price-performance than similar GPU-based instances—and by the San Diego Supercomputer Center at UC San Diego for a new supercomputer. The Santa Clara, Calif.-based company has also been building out its software capabilities with its oneAPI and OpenVINO tool sets as well as its recent acquisition of SigOpt, which provides an optimization platform for AI models.
Lightmatter
Top Executive: Nick Harris, CEO
Lightmatter is using the power of light to fuel new advances in AI computing. The Boston-based startup recently raised $80 million in a funding round from Hewlett Packard Enterprise, Google’s venture arm and other investors to commercialize its silicon photonics chip technology. The startup says a 4U blade server using 16 of its Envise AI accelerator chips can provide three times higher inferences per second than Nvidia’s DGX A100 system while also offering seven times greater inferences per second per watt. The startup has also devised a way to allow different kinds of computer chips to communicate through a wafer-scale photonic interconnect called Passage.
Mythic
Top Executive: Mike Henry, CEO
Mythic says it can overcome the speed, memory and manufacturing bottlenecks associated with digital inference solutions like CPUs and GPUs with its Analog Matrix Processor, which is targeted at edge AI applications. The Redwood City, Calif.-based company recently said that it has raised $70 million in a funding round from Hewlett Packard Enterprise and other investors to mass-produce its first-generation M1108 AMP chip, ramp up sales and marketing and develop next-generation hardware. The startup says the processor’s underlying Analog Compute Engine can “deliver the compute resources of a GPU at one-tenth the power consumption.”
Nvidia
Top Executive: Jensen Huang, CEO
Nvidia saw massive sales growth for AI chips last year, which resulted in a 124 percent increase in data center revenue in the chipmaker’s 2021 fiscal year. The Santa Clara, Calif.-based company is now making its biggest push yet to get enterprise customers to adopt GPU servers for AI applications and other kinds of GPU-accelerated software. The company recently released the new A10 and A30 GPUs for mainstream enterprise servers, and it revealed an Arm-based data center CPU named Grace that will target large AI model training when the processor launches in early 2023.
SambaNova Systems
Top Executive: Rodrigo Liang, CEO
Like other companies on the list, SambaNova Systems is taking a holistic approach to AI computing with hardware, software and services that take advantage of the startup’s Reconfigurable Dataflow Unit chip. The Palo Alto, Calif.-based startup recently said it had raised $676 million in a funding round led by SoftBank Group that also included the venture arms of Google and Intel. The startup is using the funding to grow market share against Nvidia and other competitors with its subscription-based Dataflow-as-a-Service AI platform, which relies on SambaNova’s RDU-based DataScale system to deliver what it said are “unmatched capabilities and efficiency” for AI.
SiMa.ai
Top Executive: Krishna Rangasayee, CEO
SiMa.ai says its Machine Learning System-on-Chip, or MLSoC for short, is the first chip to combine high performance, low power and hardware security for machine learning inference at the edge. The San Jose, Calif.-based startup recently raised $80 million in funding from Dell Technologies Capital and other investors to commercialize its first-generation chip and accelerate development of a second generation. The startup is taking a software-first approach to simplify machine learning integration for several application areas, including robotics, smart cities, autonomous vehicles and medical imaging.
Tenstorrent
Top Executive: Ljubisa Bajic, CEO
Tenstorrent is working on a new kind of AI processor, called Grayskull, that it says is the first to use a conditional execution architecture to dynamically eliminate unnecessary computation, allowing the processor to adapt to increasingly larger AI models. At the beginning of the year, the Toronto-based chipmaker said it had hired chip design legend and early investor Jim Keller—who previously worked at Intel, Tesla, Apple and AMD—as president and CTO. The startup has raised a total of $34 million in funding from investors.