AWS launches next-gen GPU instances for machine learning
P4d adds Intel Cascade Lake CPUs and Nvidia GPUs for high-performance computing
AWS has launched its latest GPU-equipped instances aimed at machine learning and high-performance computing (HPC) workloads.
Called P4d, the new instances arrive ten years after AWS launched its first GPU instances. They feature Intel Cascade Lake processors and eight of Nvidia's A100 Tensor Core GPUs. The GPUs connect via NVLink with support for Nvidia GPUDirect, and offer 2.5 petaFLOPS of floating-point performance and 320GB of high-bandwidth GPU memory.
AWS claimed that the instances offer 2.5 times the deep learning performance of P3 instances, and up to 60% lower cost to train.
In addition, the P4d instances include 1.1TB of system memory and 8TB of NVMe-based SSD storage with up to 16GB/s of read throughput. More than 4,000 GPUs can be combined into an on-demand EC2 UltraCluster.
Use cases touted by AWS for the instances include supercomputer-scale machine learning and HPC workloads such as natural language processing, object detection and classification, scene understanding, seismic analysis, weather forecasting and financial modelling.
The P4d instances are available in a single size (p4d.24xlarge) and can be launched in the US East (N. Virginia) and US West (Oregon) regions with immediate effect.
Companies already working with the P4d instances include Toyota Research Institute (TRI), GE Healthcare and Aon.
"At TRI, we're working to build a future where everyone has the freedom to move," said Mike Garrison, technical lead, Infrastructure Engineering at TRI.
"The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed."
The p4d.24xlarge on-demand price is $32.77 per hour, falling to approximately $20 per hour with a one-year reserved instance and $11.57 per hour with a three-year reserved instance.
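Those hourly rates imply substantial discounts for reserved capacity. A quick sketch of the arithmetic (using the figures quoted above; the ~$20 one-year rate is approximate per the announcement):

```python
# Hourly prices for p4d.24xlarge as quoted in the article
ON_DEMAND = 32.77    # on-demand, $/hour
ONE_YEAR = 20.00     # one-year reserved (approximate), $/hour
THREE_YEAR = 11.57   # three-year reserved, $/hour

def discount_pct(reserved: float, on_demand: float = ON_DEMAND) -> float:
    """Percentage saved versus the on-demand hourly rate."""
    return round(100 * (1 - reserved / on_demand), 1)

print(f"One-year reserved saves {discount_pct(ONE_YEAR)}%")
print(f"Three-year reserved saves {discount_pct(THREE_YEAR)}%")
```

That works out to roughly a 39% saving on a one-year commitment and about 65% over three years, relative to on-demand.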
Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.