Microsoft challenges Google with Project Adam artificial intelligence
Microsoft-developed AI can tell its Alsatian from its Shih Tzu.
A team of Microsoft Research engineers has developed an artificial intelligence system that it claims beat Google's in accuracy tests.
Project Adam is Microsoft's attempt at building deep neural networks more efficiently. The company said the system is built using commodity hardware and is very good at telling breeds of dog apart.
The inspiration for Project Adam was a project carried out by Google in 2012, which used 16,000 machines to teach itself to identify cat pictures from YouTube.
The engineers said the project was 50 times faster than current image recognition systems and twice as accurate. It also uses 30 times fewer machines to carry out the task.
Project Adam is capable of distinguishing between different dog breeds even if two breeds are the same colour and virtually identical. It does this by scanning the parts of an image that differ and comparing them against its large dataset.
"The machine-learning models we have trained in the past have been very tiny, especially in comparison to the size of our brain in terms of connections between neurons," said Trishul Chilimbi, one of the Microsoft researchers who lead the Project Adam effort. "What the Google work had indicated is that if you train a larger model on more data, you do better on hard AI [artificial intelligence] tasks like image classification."
The project used 14 million images from ImageNet, an image database divided into 22,000 categories. According to Microsoft, the aim of Project Adam is to move computers beyond being simple number crunchers to teaching them to be pattern recognisers.
"Marrying these two things together will open a new world of applications that we couldn't imagine doing otherwise," said Chilimbi.
"Imagine if you could help blind people see by pointing a cell phone at a scene and having it describe the scene to them. We could do things like take a photograph of food we're eating and have it provide us with nutritional information. We can use that to make smarter choices."
Chilimbi said how the deep neural networks inside Project Adam work is still very much a mystery.
"How does a DNN, where all you're presenting it is an image, and you're saying, This is a Pembroke Welsh corgi'how does it figure out how to decompose the image into these levels of features?" he said.
"There's no instruction that we provide for that. You just have training algorithms saying, This is the image, this is the label.' It automatically figures out these hierarchical features. That's still a deep, mysterious, not well understood process. But then, nature has had several million years to work her magic in shaping the brain, so it shouldn't be surprising that we will need time to slowly unravel the mysteries."
Microsoft said the project is still in its early stages and looks unlikely to be released to the public. However, some of the technology could eventually find its way into its Bing search engine.
Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.