Microsoft’s new vision-language model outperforms humans at image captioning
The company will integrate the new model into Azure Cognitive Services to support vision-language tasks
Microsoft researchers have developed VinVL (Visual features in Vision-Language), a new object-attribute detection model for image encoding.
Vision-language (VL) systems make it possible to search for images relevant to a text query (or vice versa) and to describe the content of an image.
In most cases, these systems achieve VL understanding with two modules: an image encoding module that generates feature maps of an input image, and a vision-language fusion module that maps the encoded image and text into vectors in the same semantic space.
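That division of labor is easier to see in code. The sketch below is a minimal, hypothetical illustration of the two-module design in PyTorch, not Microsoft’s implementation; the class name, layer choices, and dimensions are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TwoModuleVLModel(nn.Module):
    """Hypothetical sketch of a two-module vision-language system:
    an image encoding step plus a vision-language fusion step."""

    def __init__(self, feat_dim=2048, vocab_size=30522, embed_dim=768):
        super().__init__()
        # Image encoding module: project per-region visual features
        # (e.g. from an object detector) into the shared space.
        self.image_proj = nn.Linear(feat_dim, embed_dim)
        # Text side: embed tokens into the same space.
        self.text_embed = nn.Embedding(vocab_size, embed_dim)
        # VL fusion module: a transformer over the concatenated
        # image-region and text-token sequences.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, region_feats, token_ids):
        # region_feats: (batch, num_regions, feat_dim) from the detector
        # token_ids:    (batch, num_tokens) from the tokenizer
        img = self.image_proj(region_feats)
        txt = self.text_embed(token_ids)
        joint = torch.cat([img, txt], dim=1)  # one sequence, one semantic space
        return self.fusion(joint)
```

VinVL targets the first of these two pieces: producing better region features for the fusion module to consume.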
Microsoft’s new research focuses on improving the image encoding module. When combined with VL fusion modules such as OSCAR and VIVO, the resulting VL system achieved top results on some of the most competitive artificial intelligence (AI) benchmarks, including visual question answering (VQA), Microsoft COCO Image Captioning, and novel object captioning (nocaps).
The tech giant also highlighted that VinVL significantly surpasses human performance on the nocaps leaderboard, as measured by the consensus-based image description evaluation (CIDEr) metric.
To achieve these results, Microsoft trained its VinVL object-attribute detection model on a large object detection dataset containing 2.49 million images annotated with 1,848 object classes and 524 attribute classes. It formed the dataset by merging four public object detection datasets: COCO, Open Images, Objects365, and Visual Genome (VG).
“We first pretrained an object detection model on the merged dataset, and then fine-tuned the model with an additional attribute branch on VG, making it capable of detecting both objects and attributes,” said Microsoft.
“Our object-attribute detection model can detect 1,594 object classes and 524 visual attributes. As a result, the model can detect and encode nearly all the semantically meaningful regions in an input image, according to our experiments.”
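Read literally, the two-step recipe Microsoft describes amounts to bolting a second classification branch onto a pretrained detector. The sketch below is a hypothetical illustration of that structure, not VinVL’s actual architecture; `backbone` stands in for the full detection model, and the head sizes simply echo the class counts quoted above.

```python
import torch.nn as nn

class ObjectAttributeDetector(nn.Module):
    """Hypothetical sketch: a detection backbone pretrained on the
    merged dataset, with an attribute branch added for fine-tuning
    on Visual Genome labels."""

    def __init__(self, backbone, feat_dim=2048,
                 num_objects=1848, num_attributes=524):
        super().__init__()
        self.backbone = backbone                                   # pretrained detector (stand-in)
        self.object_head = nn.Linear(feat_dim, num_objects)        # object classification
        self.attribute_head = nn.Linear(feat_dim, num_attributes)  # attribute branch added later

    def forward(self, images):
        region_feats = self.backbone(images)     # (batch, regions, feat_dim)
        return (
            self.object_head(region_feats),      # object logits per region
            self.attribute_head(region_feats),   # attribute logits per region
        )
```

Each detected region then carries both an object label and attribute labels alongside its feature vector, which is what lets the encoder cover so many semantically meaningful regions of an image.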
Despite the promising results, Microsoft said its model is by no means close to human-level VL understanding.
Microsoft also announced that VinVL would be made available to the public for general use. Additionally, it will integrate VinVL into Azure Cognitive Services to power a wide range of Microsoft services, including image captioning in Office and LinkedIn, and Seeing AI.
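For developers, the most direct way to try the captioning that Azure Cognitive Services exposes is the Computer Vision REST API. The snippet below is a hedged sketch of calling the v3.2 describe endpoint; the endpoint, key, and image URL are placeholders, and the exact API version available may differ for your resource.

```python
import requests

# Placeholders: substitute your own Cognitive Services resource and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

# Ask the Computer Vision "describe" operation for one caption candidate.
resp = requests.post(
    f"{endpoint}/vision/v3.2/describe",
    params={"maxCandidates": 1, "language": "en"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},
)
resp.raise_for_status()

# The response nests captions under "description".
print(resp.json()["description"]["captions"][0]["text"])
```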