AlignFlow: Learning from multiple domains via normalizing flows

Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon

Research output: Contribution to conference › Paper › peer-review


The goal of unpaired cross-domain translation is to learn useful mappings between two domains, given only unpaired sets of datapoints from these domains. While this formulation is highly underconstrained, recent work has shown that it is possible to learn mappings useful for downstream tasks by encouraging approximate cycle consistency in the mappings between the two domains (Zhu et al., 2017a). In this work, we propose AlignFlow, a framework for unpaired cross-domain translation that ensures exact cycle consistency in the learned mappings. Our framework uses a normalizing flow model to specify a single invertible mapping between the two domains. In contrast to prior works in cycle-consistent translations, we can learn AlignFlow via adversarial training, maximum likelihood estimation, or a hybrid of the two methods. Theoretically, we derive consistency results for AlignFlow which guarantee recovery of desirable mappings under suitable assumptions. Empirically, AlignFlow demonstrates significant improvements over relevant baselines on image-to-image translation and unsupervised domain adaptation tasks on benchmark datasets.
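The key idea in the abstract, that a single invertible mapping makes cycle consistency exact by construction rather than approximate via a penalty, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`f_a`, `f_b`), the affine flow form, and all parameter values below are illustrative assumptions standing in for learned flows into a shared latent space.

```python
import math

def make_affine_flow(scale, shift):
    """Return (forward, inverse) for the invertible affine map z = scale*x + shift.

    A stand-in for a learned normalizing flow; any bijection works the same way.
    """
    fwd = lambda x: scale * x + shift
    inv = lambda z: (z - shift) / scale
    return fwd, inv

# One invertible flow per domain, both mapping into a shared latent space Z
# (scales and shifts here are arbitrary illustrative values, not learned).
f_a, f_a_inv = make_affine_flow(2.0, 1.0)    # domain A <-> Z
f_b, f_b_inv = make_affine_flow(0.5, -3.0)   # domain B <-> Z

# Cross-domain translations are compositions through the shared latent space.
a_to_b = lambda x: f_b_inv(f_a(x))
b_to_a = lambda y: f_a_inv(f_b(y))

# Because each flow is exactly invertible, the round trip recovers the input
# exactly (up to floating point), with no cycle-consistency loss needed.
x = 0.7
assert math.isclose(b_to_a(a_to_b(x)), x)
```

The design point is that `b_to_a` is the exact functional inverse of `a_to_b`, so cycle consistency holds for every input by construction, which is the contrast the abstract draws with approaches that only encourage approximate cycle consistency through a training objective.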

Original language: English (US)
State: Published - Jan 1 2019
Externally published: Yes
Event: 2019 Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop - New Orleans, United States
Duration: May 6 2019 → …


Conference: 2019 Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop
Country/Territory: United States
City: New Orleans
Period: 5/6/19 → …

ASJC Scopus subject areas

  • Linguistics and Language
  • Language and Linguistics
  • Education
  • Computer Science Applications


