Learning to Learn with Variational Information Bottleneck for Domain Generalization

ORCID Identifiers

0000-0001-7537-6457

Document Type

Conference Proceeding

Source of Publication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Publication Date

1-1-2020

Abstract

Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift. In this paper, we address both problems. We introduce a probabilistic meta-learning model for domain generalization, in which classifier parameters shared across domains are modeled as distributions. This enables better handling of prediction uncertainty on unseen domains. To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB. MetaVIB is derived from novel variational bounds of mutual information by leveraging the meta-learning setting of domain generalization. Through episodic training, MetaVIB learns to gradually narrow domain gaps to establish domain-invariant representations, while simultaneously maximizing prediction accuracy. We conduct experiments on three benchmarks for cross-domain visual recognition. Comprehensive ablation studies validate the benefits of MetaVIB for domain generalization, and comparison results demonstrate that our method consistently outperforms previous approaches.
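To make the information bottleneck principle concrete, below is a minimal sketch of the standard variational bound that MetaVIB builds on: minimize E[-log q(y|z)] + beta * KL(p(z|x) || r(z)), where the KL term upper-bounds I(Z; X) and the cross-entropy term lower-bounds I(Z; Y). This is the generic deep VIB objective, not the authors' exact MetaVIB bounds, and all names here (VIBEncoder, vib_loss, beta) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    # Stochastic encoder p(z|x): a diagonal Gaussian over latent codes z.
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.fc_mu = nn.Linear(in_dim, z_dim)
        self.fc_logvar = nn.Linear(in_dim, z_dim)

    def forward(self, x):
        mu = self.fc_mu(x)
        logvar = self.fc_logvar(x)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # so gradients flow through the sampling step.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    # Variational IB bound: E[-log q(y|z)] + beta * KL(p(z|x) || N(0, I)).
    ce = F.cross_entropy(logits, labels)
    # Closed-form KL between the encoder Gaussian and a standard normal prior.
    kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return ce + beta * kl

In the episodic training the abstract describes, an objective of this form would be optimized per episode over meta-source domains, with the KL regularizer pushing the latent codes toward a shared prior and hence toward domain invariance; the paper's meta-learning-specific bounds refine this generic formulation.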

ISBN

9783030586065

ISSN

0302-9743

Publisher

Springer International Publishing

Volume

12355 LNCS

First Page

200

Last Page

216

Disciplines

Computer Sciences | Education

Keywords

Domain generalization, Information bottleneck, Meta learning, Variational inference

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Green: A manuscript of this publication is openly available in a repository
