ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes

Document Type

Conference Proceeding

Source of Publication

Computer Vision (ACCV 2024)

Publication Date

December 8, 2024

Abstract

Given the large-scale multi-modal training of recent vision-based models and their generalization capabilities, understanding the extent of their robustness is critical for their real-world deployment. In this work, our goal is to evaluate the resilience of current vision-based models against diverse object-to-background context variations. Most robustness evaluation methods have introduced synthetic datasets that alter object characteristics (viewpoint, scale, color) or applied image transformations (adversarial changes, common corruptions) to real images to simulate distribution shifts. Recent works have explored leveraging large language models and diffusion models to generate changes in the background. However, these methods either lack control over the changes to be made or distort the object semantics, making them unsuitable for the task. Our method, in contrast, can induce diverse object-to-background changes while preserving the original semantics and appearance of the object. To achieve this goal, we harness the generative capabilities of text-to-image, image-to-text, and image-to-segment models to automatically generate a broad spectrum of object-to-background changes. We induce both natural and adversarial background changes by either modifying the textual prompts or optimizing the latents and textual embeddings of text-to-image models. This allows us to quantify the role of background context in the robustness and generalization of deep neural networks. We produce various versions of standard vision datasets (ImageNet, COCO) that either incorporate diverse, realistic backgrounds into the images or introduce color, texture, and adversarial changes in the background. We conduct thorough experimentation and provide an in-depth analysis of the robustness of vision-based models against object-to-background context variations across different tasks. Our code and evaluation benchmark will be available at https://github.com/Muhammad-Huzaifaa/ObjectCompose.
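
As a rough illustration of the pipeline the abstract describes, the sketch below repaints only the background of an image with a text-to-image inpainting model while the object pixels are kept intact. It is a minimal sketch, assuming the Stable Diffusion 2 inpainting checkpoint from Hugging Face diffusers and a precomputed background mask (in practice produced by an image-to-segment model such as SAM); the model choice, file names, and prompts are illustrative assumptions, not the authors' exact configuration, which is given in the ObjectCompose repository.

# Hypothetical sketch of natural background editing, not the authors' exact setup.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image inpainting model: repaints only the masked (white) region,
# leaving the object pixels untouched.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Illustrative file names; the mask marks the background in white and would
# come from an image-to-segment model, inverted so the object is preserved.
image = Image.open("object.png").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("L").resize((512, 512))

# Natural background changes: vary the textual prompt per edit.
for prompt in ["a snowy forest", "a red brick wall", "an underwater scene"]:
    edited = pipe(prompt=prompt, image=image, mask_image=mask).images[0]
    edited.save(f"edited_{prompt.replace(' ', '_')}.png")

For the adversarial variant the abstract mentions, the prompt embeddings and denoising latents would instead be treated as optimization variables and updated by backpropagating a downstream loss through the pipeline; that optimization loop is omitted here for brevity.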

ISBN

978-981-96-0916-1, 978-981-96-0917-8

ISSN

0302-9743, 1611-3349

Publisher

Springer Nature Singapore

Volume

15476

First Page

400

Last Page

417

Disciplines

Computer Sciences

Indexed in Scopus

no

Open Access

no
