VMware, Nvidia Bring GPUs To vSphere For Virtualized AI, HPC Workloads
'We live in a multi-cloud world, just like anybody else. We believe extending AI and deep learning into that multi-cloud world is pretty important,' an Nvidia and VMware partner says of the new vComputeServer platform, which allows enterprises to provision virtualized GPUs for emerging workloads.
Nvidia and VMware aim to accelerate artificial intelligence adoption in the enterprise by allowing businesses to run GPU-accelerated virtual machines for AI, high-performance computing and analytics workloads on the virtualization giant's vSphere platform.
This capability is made possible by the Santa Clara, Calif.-based chipmaker's new vComputeServer software, which was announced Monday at VMware's VMworld conference in San Francisco and will let IT administrators manage GPU resources as flexibly as they manage the rest of their data center.
[Related: 4 Bold Statements From Nvidia CEO Jensen Huang On The Future Of Computing]
John Fanelli, a vice president of product management at Nvidia, said the virtualization of the chipmaker's T4 and V100 GPUs for emerging workloads like machine learning and deep learning is an industry first and will make it easier for enterprises to adopt AI.
"What we're really doing with the Nvidia GPU infrastructure is we're accelerating the analytics and pipeline processing so customers can get to their insights much more rapidly," he said, adding that vCompute comes with container support through Nvidia NGC.
Beyond vSphere, Nvidia's vComputeServer platform also supports KVM-based hypervisors from Nutanix and Red Hat. OEMs and hardware vendors supporting vComputeServer include Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Super Micro.
In addition to making vComputeServer available for on-premises servers, Nvidia announced that the platform will be available for VMware Cloud on Amazon Web Services, which Fanelli said will give enterprises important flexibility to move workloads as they develop, train and tune AI models.
"We've now enabled between Nvidia and VMware a hybrid cloud for AI, machine learning and deep learning," Fanelli said.
Mike Trojecki, vice president of IoT and analytics for New York-based Logicalis, called the new partnership a "significant" development that will make AI more affordable and accessible for businesses. As an Nvidia and VMware partner, Logicalis is planning to sell vComputeServer in both on-premises and cloud deployments, he said.
"We live in a multi-cloud world, just like anybody else. We believe extending AI and deep learning into that multi-cloud world is pretty important," he said.
Fanelli said vComputeServer gives IT administrators flexibility in how they provision virtualized GPUs, whether that's a single GPU shared across multiple virtual machines or multiple GPUs dedicated to one virtual machine. Other features include the ability to migrate GPU-accelerated virtual machines with minimal disruption or downtime, increased security and multi-tenant isolation.
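For teams that script their vSphere environments, that provisioning step can be automated through the vSphere API. The following is a hedged sketch of what assigning a vGPU profile to a VM could look like with pyVmomi; the vCenter address, credentials, VM name and the "grid_v100-8c" profile string are placeholder assumptions, and the profiles actually available depend on the installed GPUs and the vComputeServer setup.

```python
# Hedged sketch: attaching an Nvidia vGPU profile to an existing, powered-off
# VM via the vSphere API with pyVmomi. Host, credentials, VM name and the
# profile string are placeholders for illustration only.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcenter.example.com",
                       user="administrator@vsphere.local",
                       pwd="password")
content = si.RetrieveContent()

# Look up the target VM by name (hypothetical name "ml-train-01").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "ml-train-01")
view.Destroy()

# A vGPU is presented to the VM as a shared PCI passthrough device whose
# backing names the vGPU profile (here, one slice of a V100).
backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(
    vgpu="grid_v100-8c")
dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

# Reconfigure the VM to add the virtual GPU, then disconnect.
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
Disconnect(si)
```

Adding several such device specs to one VM is how multiple GPUs would be dedicated to a single machine, while smaller profile slices let several VMs share one physical GPU.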
"The idea is it makes your GPU infrastructure much more flexible as a pool of resources, so it drives your [return on investment] up," he said.
Compared with CPU-only servers, vComputeServer combined with four Nvidia V100 GPUs can perform deep learning workloads 50 times faster, according to Nvidia.
Nvidia and VMware plan to sell vComputeServer through the channel in multiple ways. On Nvidia's side, the chipmaker plans to sell the software through its partners, as it does with its other virtualization solutions. As for vComputeServer on VMware Cloud on AWS, the cloud-based option will be sold through VMware's and AWS' partners.
VMware has expressed an increased interest in using accelerators such as GPUs to better support emerging workloads. In July, the company acquired Bitfusion, whose virtualization software allows enterprises to share GPUs, FPGAs and ASICs as a pool of network-accessible resources.
"Today's enterprises are running compute-intensive AI and data analytics workloads, and the need for hardware acceleration is becoming more prominent. With our Nvidia partnership and our recent acquisition of Bitfusion, we are delivering the next generation of efficient and performant AI infrastructure," said Krish Prasad, senior vice president and general Manager of VMware's cloud platform business, in a statement.
Nvidia isn't new to the virtualization game. The GPU powerhouse already provides virtualization software for high-performance virtual desktops, whether for general-purpose applications or for content creation tools such as SolidWorks and Autodesk Maya.
Unlike Nvidia's other virtualized GPU solutions, which are licensed based on the number of users, vComputeServer is licensed on a per-GPU basis, according to Fanelli. A virtual GPU licensing document on Nvidia's website from June said vComputeServer will cost $50 per GPU annually.
In Nvidia's most recent earnings call, CEO Jensen Huang said that while the chipmaker is seeing lower demand from cloud providers, the company's data center growth over the past couple of quarters has come from enterprises adopting AI for internal processes as well as for new products and services.
"We are building a broad base of customers across multiple industries as they adopt GPUs to harness the power of AI," he said in mid-August.