CollaGAN : Collaborative GAN for Missing Image Data Imputation - Dongwook Lee - CVPR 2019


Info

  • Title: CollaGAN : Collaborative GAN for Missing Image Data Imputation
  • Task: Image Generation
  • Author: Dongwook Lee
  • Arxiv: 1829
  • Published: CVPR 2019

Highlights

Many-to-One image generation: The underlying image manifold can be learned more synergistically from multiple input data sets that share the same manifold structure rather than from a single input. The estimation of missing data by CollaGAN is therefore more accurate.

Abstract

In many applications requiring multiple inputs to obtain a desired output, if any of the input data is missing, it often introduces a large amount of bias. Although many techniques have been developed for imputing missing data, image imputation remains difficult due to the complicated nature of natural images. To address this problem, here we propose a novel framework for missing image data imputation, called Collaborative Generative Adversarial Network (CollaGAN). CollaGAN converts an image imputation problem into a multi-domain images-to-image translation task so that a single generator and discriminator network can successfully estimate the missing data using the remaining clean data set. We demonstrate that CollaGAN produces images with higher visual quality than existing competing approaches in various image imputation tasks.

Motivation & Design

Image translation tasks using (a) cross-domain models, (b) StarGAN, and (c) the proposed Collaborative GAN (CollaGAN). The cross-domain model needs a large number of generators to handle multi-class data. StarGAN and CollaGAN each use a single generator, with one input and multiple inputs respectively, to synthesize the target-domain image.
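
To make the many-to-one idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the remaining N-1 domain images are concatenated along the channel axis together with a one-hot target-domain mask, and a single generator maps them to the missing domain. The class name `CollaGenerator`, the layer sizes, and the mask encoding are hypothetical choices for illustration only.

```python
# Minimal sketch of a single many-to-one generator (illustrative assumptions).
import torch
import torch.nn as nn

class CollaGenerator(nn.Module):
    def __init__(self, n_domains=4, img_channels=1, base=64):
        super().__init__()
        # Input: (N-1) available domain images + N-channel one-hot target-domain mask
        in_ch = (n_domains - 1) * img_channels + n_domains
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, img_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, inputs, target_idx, n_domains=4):
        # inputs: list of (B, C, H, W) tensors from the remaining domains
        b, _, h, w = inputs[0].shape
        mask = torch.zeros(b, n_domains, h, w, device=inputs[0].device)
        mask[:, target_idx] = 1.0              # broadcast target-domain code spatially
        x = torch.cat(inputs + [mask], dim=1)  # channel-wise concatenation of all inputs
        return self.net(x)

# Usage: impute domain 2 from the other three domains
g = CollaGenerator(n_domains=4, img_channels=1)
others = [torch.randn(2, 1, 64, 64) for _ in range(3)]
fake = g(others, target_idx=2)                 # (2, 1, 64, 64)
```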

D has two branches: domain classification Dclsf and source classification Dgan (real/fake). First, Dclsf is trained only on the loss computed from real samples (left). Then G reconstructs the target-domain image from the set of input images (middle). For cycle consistency, the generated fake image is fed back into G together with the input images, and G produces multiple reconstructed outputs in the original domains. Here, Dclsf and Dgan are trained simultaneously: Dclsf on the loss from real images only, and Dgan on both real and fake images (right).
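
The sketch below, under the same illustrative assumptions as the generator above, shows the two-branch discriminator and the multi-cycle consistency term: a shared backbone feeds a real/fake head (Dgan) and a domain-classification head (Dclsf), and the fake image is fed back through the generator to reconstruct each original input domain. The helper `cycle_loss`, the backbone layout, and the assumption that `inputs` are ordered by ascending domain index are hypothetical; the generator signature follows the earlier sketch.

```python
# Minimal sketch of the two-branch discriminator and multi-cycle consistency
# loss (illustrative, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaDiscriminator(nn.Module):
    def __init__(self, n_domains=4, img_channels=1, base=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(img_channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.d_gan = nn.Linear(base * 2, 1)           # real/fake score
        self.d_clsf = nn.Linear(base * 2, n_domains)  # domain logits

    def forward(self, x):
        h = self.backbone(x)
        return self.d_gan(h), self.d_clsf(h)

def cycle_loss(generator, fake, inputs, target_idx, n_domains=4):
    # Feed the fake target image back with the other inputs and ask the
    # generator to reconstruct each original input domain (L1 cycle consistency).
    # Assumes `inputs` are ordered by ascending domain index, excluding target_idx.
    loss = 0.0
    domains = [d for d in range(n_domains) if d != target_idx]
    for recon_idx, real in zip(domains, inputs):
        others = [fake if d == target_idx else inputs[domains.index(d)]
                  for d in range(n_domains) if d != recon_idx]
        recon = generator(others, target_idx=recon_idx, n_domains=n_domains)
        loss = loss + F.l1_loss(recon, real)
    return loss
```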