Gluon provides machine learning for AWS and Azure
Amazon and Microsoft unveil artificial intelligence ecosystem
Amazon and Microsoft have launched a new open-source deep learning interface called Gluon.
The deep learning interface has been jointly developed to let developers "prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps", the companies said.
Dr Matt Wood, general manager of Deep Learning and AI at AWS, said that Gluon provides a clear, concise API (application programming interface) for defining machine learning models using a collection of pre-built, optimised neural network components.
"Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure. More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed," he said.
Eric Boyd, CVP of AI Data and Infrastructure at Microsoft, said that Gluon could be used with either Apache MXNet or Microsoft Cognitive Toolkit, and will be supported in all Azure services, tools and infrastructure.
"Gluon offers an easy-to-use interface for developers, highly-scalable training, and efficient model evaluationall without sacrificing flexibility for more experienced researchers," he added.
Gluon will support both symbolic and imperative programming, something Microsoft claims is not found in other toolkits. It will also include fully symbolic automatic differentiation of code that has been executed procedurally, including control flow. This is achieved through hybridisation: a static compute graph is built on the first pass, then cached and reused for subsequent iterations. The compute graphs can also be exported, for example for serving on mobile devices, said Boyd.
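As a rough sketch of how hybridisation is exposed in practice, assuming the MXNet backend (the layer sizes and the exported file name are illustrative):

```python
# Hypothetical sketch of hybridisation: define imperatively, then trace and
# cache a symbolic graph that can be exported for deployment.
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(128, activation='relu'),
        nn.Dense(10))
net.initialize()

net.hybridize()                      # switch to cached, symbolic execution
x = mx.nd.random.uniform(shape=(1, 784))
net(x)                               # first call builds and caches the graph
net.export('gluon_model')            # write symbol and parameter files
```

The export call produces a symbol file and a parameters file that an inference runtime can load, which is the kind of artefact Boyd refers to when describing serving on mobile devices.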
There is also a built-in library of layers that simplifies the task of defining complex model architectures by reusing pre-built building blocks.
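A hedged illustration of that reuse, again assuming the MXNet backend, with hypothetical block names and sizes:

```python
# Hypothetical sketch: a small block of pre-built layers is defined once and
# stacked to form a deeper architecture.
from mxnet.gluon import nn

def dense_block(units):
    """Return a reusable pair of layers from Gluon's built-in layer library."""
    block = nn.Sequential()
    block.add(nn.Dense(units, activation='relu'),
              nn.Dropout(0.5))
    return block

net = nn.Sequential()
net.add(dense_block(256),
        dense_block(128),
        nn.Dense(10))
net.initialize()
```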
There is also native support for loops and ragged tensors (batching variable-length sequences), which translates into efficient execution of RNN and LSTM models. Gluon additionally supports sparse and quantised data and operations, both for computation and for communication, and provides advanced scheduling across multiple GPUs.
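For the recurrent case, a simple, hypothetical sketch of applying Gluon's built-in LSTM layer to a batch of sequences (it does not demonstrate the ragged-tensor or multi-GPU features described above; shapes and sizes are placeholders):

```python
# Hypothetical sketch: Gluon's built-in LSTM layer applied to a batch of sequences.
import mxnet as mx
from mxnet.gluon import rnn

lstm = rnn.LSTM(hidden_size=64, num_layers=1)
lstm.initialize()

# Default layout is (sequence length, batch size, feature size).
seq = mx.nd.random.uniform(shape=(20, 8, 32))
outputs = lstm(seq)    # output shape: (20, 8, 64)
print(outputs.shape)
```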
"This is another step in fostering an open AI ecosystem to accelerate innovation and democratisation of AI-making it more accessible and valuable to all," said Boyd. "With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with."
Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.