IBM unveils on-chip AI accelerator for fraud detection
The first system to use the 'Telum' processor could be ready in the first half of 2022
IBM has unveiled its long-awaited 'Telum' chip, built with on-chip AI inference acceleration that allows fraud to be detected while a transaction is still in progress.
The new processor was showcased at the annual Hot Chips conference with the first Telum-based system planned for 2022.
Telum is IBM's first processor to contain "on-chip" acceleration for artificial intelligence (AI) inference. The tech giant spent three years developing the "breakthrough" hardware, which is designed to help customers with workloads spanning banking, finance, trading, and insurance applications, as well as customer interactions.
The processor is designed to let applications run efficiently where their data resides, differentiating it from traditional enterprise AI approaches, which tend to require significant memory and data-movement capacity to handle inferencing. With the accelerator sitting close to mission-critical data and applications, IBM says enterprises can run high-volume inferencing on latency-sensitive transactions in real time without invoking off-platform AI solutions, which can hurt performance.
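The architectural point IBM is making is the difference between scoring a transaction synchronously, in the same place the transaction is processed, and shipping it off to a separate scoring service and waiting on a network round trip. The sketch below is purely illustrative and is not IBM's implementation: the toy model, the feature names, the 10ms latency budget and the simulated 20-50ms network delay are all assumptions chosen to show why the remote path tends to fall back to after-the-fact review.

```python
# Illustrative sketch only: contrasts in-transaction ("on-platform") fraud
# scoring with calling a remote scoring service. The model, thresholds and
# latency figures are hypothetical, not IBM's.
import random
import time


def local_fraud_score(txn: dict) -> float:
    """Stand-in for a model co-located with the transaction workload."""
    score = 0.0
    if txn["amount"] > 1000:                    # large purchases are riskier
        score += 0.4
    if txn["country"] != txn["card_country"]:   # cross-border mismatch
        score += 0.4
    if txn["merchant_risk"] > 0.5:              # risky merchant category
        score += 0.2
    return min(score, 1.0)


def remote_fraud_score(txn: dict) -> float:
    """Same logic, but behind a simulated network round trip."""
    time.sleep(random.uniform(0.02, 0.05))      # 20-50ms of assumed latency
    return local_fraud_score(txn)


def authorise(txn: dict, scorer, budget_ms: float = 10.0) -> str:
    """Approve or decline inside the latency budget; otherwise fall back
    to approving now and reviewing for fraud after the fact."""
    start = time.perf_counter()
    score = scorer(txn)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > budget_ms:
        return "approved (flagged for after-the-fact review)"
    return "declined" if score >= 0.7 else "approved"


txn = {"amount": 1500, "country": "GB", "card_country": "US", "merchant_risk": 0.6}
print("in-transaction scoring:", authorise(txn, local_fraud_score))
print("off-platform scoring:  ", authorise(txn, remote_fraud_score))
```

In this toy setup the co-located scorer finishes well inside the budget and the suspicious transaction is declined, while the remote call blows the budget and the transaction goes through pending later review, which is the scenario IBM describes below.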
"Today, businesses typically apply detection techniques to catch fraud after it occurs, a process that can be time consuming and compute-intensive due to the limitations of today's technology, particularly when fraud analysis and detection is conducted far away from mission critical transactions and data," IBM said.
"Due to latency requirements, complex fraud detection often cannot be completed in real-time - meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware fraud has taken place."
The chip was built on Samsung's 7nm extreme ultraviolet (EUV) process and features eight processor cores with a deep super-scalar, out-of-order instruction pipeline running at more than 5GHz. IBM said the cores are optimised for the demands of heterogeneous enterprise-class workloads.
The chip also features a completely redesigned cache and chip-interconnection infrastructure, providing 32MB of cache per core and scaling to as many as 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire across 17 metal layers.