IBM unveils the world’s first quad-core AI accelerator chip
The chip can be scaled for commercial use in hybrid-cloud environments
IBM has unveiled the world’s first quad-core artificial intelligence (AI) accelerator chip, built with seven-nanometer (7nm) extreme ultraviolet (EUV) technology.
The company says it has optimized the novel chip for low-precision workloads with support for many AI models.
"In a new paper presented at the 2021 International Solid-State Circuits Virtual Conference (ISSCC), our team details the world’s first energy-efficient AI chip at the vanguard of low precision training and inference built with 7nm technology," said IBM researchers Ankur Agrawal and Kailash Gopalakrishnan.
"Through its novel design, the AI hardware accelerator chip supports a variety of model types while achieving leading edge power efficiency on all of them."
AI accelerators are specialized hardware designed to speed up AI workloads such as deep learning, machine learning, and neural networks. They typically rely on techniques like in-memory computing or low-precision arithmetic to execute large, complex AI algorithms faster.
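The low-precision point is easiest to see with a quick sketch. The Python snippet below (purely illustrative, unrelated to IBM's hardware) quantizes float32 weights to int8, showing the 4x cut in storage and memory traffic that low-precision arithmetic trades against a small rounding error.

```python
# Illustrative sketch: symmetric int8 quantization of float32 weights.
# Not IBM's design -- just a demonstration of the low-precision trade-off.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(1_000_000).astype(np.float32)

# Choose a scale so the largest magnitude maps to 127, then round to int8.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the smaller format.
recovered = weights_int8.astype(np.float32) * scale

print(f"fp32 storage: {weights_fp32.nbytes / 1e6:.1f} MB")
print(f"int8 storage: {weights_int8.nbytes / 1e6:.1f} MB")   # 4x smaller
print(f"mean abs rounding error: {np.abs(weights_fp32 - recovered).mean():.5f}")
```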
IBM claims its new AI accelerator is the first chip to implement an ultra-low-precision hybrid 8-bit floating-point (HFP8) format for training deep learning models in silicon, on a 7nm EUV-based node. The chip can also maximize its performance by slowing down during high-power computation phases, thanks to an integrated power-management feature.
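To make the HFP8 idea concrete, here is a rough Python simulation of rounding float32 values into an 8-bit floating-point layout. The (1, 4, 3) and (1, 5, 2) sign/exponent/mantissa splits follow IBM Research's earlier HFP8 work (more mantissa bits for the forward pass, a wider exponent range for gradients); the chip's exact formats and rounding behaviour are not described in this article, so treat this as a conceptual sketch that ignores denormals and overflow saturation.

```python
# Conceptual sketch of hybrid 8-bit floating point (HFP8), not IBM's silicon:
# values are rounded in software to a (sign, exponent, mantissa) layout.
import numpy as np

def simulate_fp8(x, exp_bits, man_bits):
    """Round float32 values to a simulated 1/exp_bits/man_bits float format.
    Simplified: denormals are flushed to zero and overflow is not saturated."""
    bias = 2 ** (exp_bits - 1) - 1
    x = np.asarray(x, dtype=np.float32)
    sign, mag = np.sign(x), np.abs(x)
    # Per-value exponent, clamped to the format's representable range.
    exp = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), 1 - bias, bias)
    # The quantization step leaves man_bits of fraction at that exponent.
    step = 2.0 ** (exp - man_bits)
    return np.where(mag > 0, sign * np.round(mag / step) * step, 0.0).astype(np.float32)

# Hybrid usage: more mantissa for activations/weights in the forward pass,
# more exponent range for gradients, which tend to be tiny.
activations = simulate_fp8(np.array([0.73, -1.91, 3.14]), exp_bits=4, man_bits=3)
gradients   = simulate_fp8(np.array([3.2e-4, -7.8e-5]),   exp_bits=5, man_bits=2)
print(activations, gradients)
```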
Furthermore, IBM said its AI chip “routinely achieved more than 80% utilization for training and more than 60% utilization for inference”, compared with mainstream GPUs, whose utilization is typically below 30%.
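Utilization here means sustained throughput as a fraction of the hardware's peak. The figures in the snippet below are hypothetical and only show how the comparison is computed; they are not measurements from IBM's paper.

```python
# Hypothetical numbers, for illustration only: utilization is the fraction of
# peak throughput that a workload actually sustains.
def utilization(sustained_tflops: float, peak_tflops: float) -> float:
    return sustained_tflops / peak_tflops

workloads = {
    "AI accelerator, training (claimed >80%)":  (20.5, 25.0),
    "AI accelerator, inference (claimed >60%)": (63.0, 100.0),
    "mainstream GPU (typically <30%)":          (28.0, 100.0),
}
for name, (sustained, peak) in workloads.items():
    print(f"{name}: {utilization(sustained, peak):.0%}")
```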
According to IBM, the chip technology can be scaled and deployed commercially to support large-scale deep learning models in the cloud.
“Our new AI core and chip can be used for many new cloud to edge applications across multiple industries. For instance, they can be used for cloud training of large-scale deep learning models in vision, speech and natural language processing using 8-bit formats (vs. the 16- and 32-bit formats currently used in the industry),” said IBM.
“They can also be used for cloud inference applications, such as for speech to text AI services, text to speech AI services, NLP services, financial transaction fraud detection and broader deployment of AI models in financial services.”
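For a sense of why 8-bit formats matter at cloud scale, the short sketch below compares the raw storage footprint of a model's parameters in 32-, 16-, and 8-bit formats. The parameter count is invented purely for illustration.

```python
# Storage footprint of one (hypothetical) model's parameters by numeric format.
PARAMS = 350_000_000  # invented parameter count, for illustration only

for fmt, bytes_per_param in [("fp32", 4), ("fp16 / bf16", 2), ("8-bit (e.g. HFP8)", 1)]:
    print(f"{fmt:>18}: {PARAMS * bytes_per_param / 1e9:.2f} GB")
```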