Microsoft, in collaboration with OpenAI, unveils AI supercomputer
AI models and tools to be made available to developers through the AI at Scale initiative

During its Build 2020 developer conference on Tuesday, Microsoft unveiled a new supercomputer built in collaboration with OpenAI, the artificial intelligence startup co-founded by Elon Musk. Microsoft said the supercomputer, hosted in Azure, was developed exclusively to train OpenAI’s large-scale artificial intelligence models.
Microsoft partnered with OpenAI back in 2019 under a multiyear partnership that saw the tech giant invest $1 billion in the startup. The resulting supercomputer combines more than 285,000 CPU cores, 10,000 GPUs and 400 Gbps of network connectivity for each GPU server in a single system. According to Microsoft, that puts the machine built for OpenAI among the top five supercomputers in the world when measured against the TOP500 list.
Hosted in Azure, the supercomputer benefits from robust modern cloud infrastructure, sustainable data centers, rapid deployments and access to Azure services.
“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” Microsoft’s chief technical officer, Kevin Scott, explained.
Until now, most AI implementations have been dedicated to single tasks, such as translating between languages or recognizing specific objects in images. Modern research, however, focuses on building massive models that can perform multiple tasks at once. According to Microsoft, that could mean moderating game streams or generating code after analyzing repositories on GitHub. The company argues that such large-scale models will make AI more beneficial to consumers and developers alike.
As part of its ‘AI at Scale’ initiative, Microsoft has built a family of large AI models, the Microsoft Turing models, to improve language understanding across Bing, Dynamics, Office and other productivity products.
Microsoft intends to make its large AI models, optimization tools and supercomputing resources available to developers, data scientists and business customers through Azure AI services and GitHub to help them leverage the power of AI at Scale.
Microsoft also revealed a new version of DeepSpeed, an open-source deep-learning library for PyTorch that cuts the amount of computing power required for large distributed model training. The update is more efficient than the previous version, released just three months ago, and can train models 15 times larger and 10 times faster.
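For readers curious what the library looks like in practice, the sketch below shows a minimal PyTorch training loop wrapped with DeepSpeed. The toy model, the config values and the synthetic data are illustrative assumptions for this article, not settings published by Microsoft.

```python
# Minimal sketch of wrapping a PyTorch model with DeepSpeed.
# Illustrative only: the model, config values and data are toy assumptions.
import torch
import torch.nn as nn
import deepspeed

# A tiny stand-in model; a real large-scale model would have billions of parameters.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

# DeepSpeed is configured via a JSON-style dict; ZeRO stage 2 partitions optimizer
# state and gradients across workers to reduce per-GPU memory use.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles distributed data parallelism,
# mixed precision and optimizer sharding behind a familiar PyTorch-style interface.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

loss_fn = nn.CrossEntropyLoss()
for step in range(10):
    # Synthetic half-precision batch on the engine's device
    # (in real training this would come from a DataLoader).
    inputs = torch.randn(32, 1024, device=model_engine.device, dtype=torch.half)
    labels = torch.randint(0, 10, (32,), device=model_engine.device)

    logits = model_engine(inputs)
    loss = loss_fn(logits, labels)

    # The engine replaces loss.backward() and optimizer.step(); it scales the loss
    # for fp16 and keeps sharded optimizer state in sync across GPUs.
    model_engine.backward(loss)
    model_engine.step()

# Scripts like this are typically started with the DeepSpeed launcher, e.g.:
#   deepspeed train.py
```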
Susan Johnson is a content writer and a doctor in the making. She's on a mission to make boring content sparkle.