PyTorch Code for vid2vid

 

Info

Title: Video-to-Video Synthesis

Project | YouTube (short) | YouTube (full) | Paper | Note

Prerequisites

  • Linux or macOS
  • Python 3
  • NVIDIA GPU + CUDA cuDNN
  • PyTorch 0.4

Getting Started

Installation

  • Install the Python libraries dominate and requests.
    pip install dominate requests
    
  • If you plan to train with face datasets, please install dlib.
    pip install dlib
    
  • If you plan to train with pose datasets, please install DensePose and/or OpenPose.
  • Clone this repo:
    git clone https://github.com/NVIDIA/vid2vid
    cd vid2vid
    
  • Docker image: if you have difficulty building the repo, a Docker image can be found in the docker folder.

Testing

  • Please first download the example datasets by running python scripts/download_datasets.py.
  • Next, compile a snapshot of FlowNet2 by running python scripts/download_flownet2.py.
  • Cityscapes
    • Please download the pre-trained Cityscapes model by:
      python scripts/street/download_models.py
      
    • To test the model (bash ./scripts/street/test_2048.sh):
      #!./scripts/street/test_2048.sh
      python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
      

      The test results will be saved in: ./results/label2city_2048/test_latest/.

    • We also provide a smaller model trained with a single GPU, which produces slightly lower-quality results at 1024 x 512 resolution.
      • Please download the model by
        python scripts/street/download_models_g1.py
        
      • To test the model (bash ./scripts/street/test_g1_1024.sh):
        #!./scripts/street/test_g1_1024.sh
        python test.py --name label2city_1024_g1 --label_nc 35 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G
        
    • You can find more example scripts in the scripts/street/ directory.
  • Faces
    • Please download the pre-trained model by:
      python scripts/face/download_models.py
      
    • To test the model (bash ./scripts/face/test_512.sh):
      #!./scripts/face/test_512.sh
      python test.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --use_single_G
      

      The test results will be saved in: ./results/edge2face_512/test_latest/.

More Training/Test Details

  • We generate the frames in a video sequentially, where the generation of the current frame depends on previous frames. There are 3 different ways to generate the first frame for the model:
      1. Using another generator that was trained to generate single images (e.g., pix2pixHD), by specifying --use_single_G. This is the option used in the provided test scripts.
      2. Using the first frame in the real sequence, by specifying --use_real_img.
      3. Forcing the model to also synthesize the first frame, by specifying --no_first_img. The model must be trained with this flag for it to work at inference time.
  • The model is trained as follows: suppose we have 8 GPUs, 4 for generators and 4 for discriminators, and we want to train on 28 frames. Also assume each generator GPU can generate only one frame at a time. The first GPU generates the first frame and passes it to the next GPU, and so on. After the 4 frames are generated, they are passed to the 4 discriminator GPUs to compute the losses. The last generated frame then becomes the input for the next batch, and the next 4 frames in the training sequence are loaded into the GPUs. Repeating this 7 times (4 x 7 = 28) trains all 28 frames (see the sketch after this list).
  • Some important flags (an illustrative training command using several of these flags is shown after this list):
    • n_gpus_gen: the number of GPUs used for the generators (the remaining GPUs are used for the discriminators). We place generators and discriminators on different GPUs because, at high resolutions, even a single frame may not fit on one GPU. If this number is set to -1, there is no separation and all GPUs are used for both generators and discriminators (this only works for low-resolution images).
    • n_frames_G: the number of input frames fed into the generator network; i.e., n_frames_G - 1 is the number of past frames we condition on. The default is 3 (i.e., conditioned on the previous two frames).
    • n_frames_D: the number of frames to feed into the temporal discriminator. The default is 3.
    • n_scales_spatial: the number of scales in the spatial domain. We train from the coarsest scale all the way to the finest scale. The default is 3.
    • n_scales_temporal: the number of scales for the temporal discriminator. The finest scale takes in the sequence at the original frame rate. The coarser scales subsample the frames by a factor of n_frames_D before feeding them into the discriminator. For example, if n_frames_D = 3 and n_scales_temporal = 3, the discriminator effectively sees 27 frames. The default is 3.
    • max_frames_per_gpu: the number of frames in one GPU during training. If you run into an out-of-memory error, please first try reducing this number. If your GPU memory can fit more frames, try increasing this number to make training faster. The default is 1.
    • max_frames_backpropagate: the number of frames that loss backpropagates to previous frames. For example, if this number is 4, the loss on frame n will backpropagate to frame n-3. Increasing this number will slightly improve the performance, but also cause training to be less stable. The default is 1.
    • n_frames_total: the total number of frames in a sequence we want to train with. We gradually increase this number during training.
    • niter_step: the number of epochs after which we double n_frames_total. The default is 5.
    • niter_fix_global: if this number is not 0, only train the finest spatial scale for this number of epochs before starting to fine-tune all scales.
    • batchSize: the number of sequences to train on at a time. We normally set batchSize to 1, since one sequence is often enough to occupy all GPUs. If you want to use batchSize > 1, currently only batchSize == n_gpus_gen is supported.
    • no_first_img: if not specified, the model will assume the first frame is given and synthesize the successive frames. If specified, the model will also try to synthesize the first frame instead.
    • fg: if specified, use the foreground-background separation model as stated in the paper. The foreground labels must be specified by --fg_labels.
    • no_flow: if specified, do not use flow warping and directly synthesize frames. We found this usually still works reasonably well when the background is static, while saving memory and training time.
    • sparse_D: if specified, only apply the temporal discriminator to sparse frames in the sequence. This helps save memory while having little effect on performance.
  • For all other training flags, please see options/train_options.py and options/base_options.py; for all other test flags, see options/test_options.py and options/base_options.py.

  • Additional flags for edge2face examples:
    • no_canny_edge: do not use Canny edges for the background as input.
    • no_dist_map: by default, we use the distance transform of the face edge map as input. This flag makes the model use the edge maps directly instead.
  • Additional flags for pose2body examples:
    • densepose_only: use only the DensePose results as input. Please also remember to change input_nc to 3.
    • openpose_only: use only the OpenPose results as input. Please also remember to change input_nc to 3.
    • add_face_disc: add an additional discriminator that only works on the face region.
    • remove_face_labels: remove the DensePose results for the face, and add noise to the OpenPose face results, so the network becomes more robust to different face shapes. This is important if you plan to run inference on half-body videos (otherwise this flag is usually unnecessary).
    • random_drop_prob: the probability of randomly dropping each pose segment during training, so the network becomes more robust to missing poses at inference time. The default is 0.05.
    • basic_point_only: if specified, only use basic joint keypoints for OpenPose output, without using any hand or face keypoints.
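
To make the sequential training scheme described above concrete, the following is a minimal, self-contained sketch of the idea. It is an illustration only: the toy "generator" and "discriminator" below are placeholder tensor operations, not the networks in this repo, and the chunk size of 4 simply mirrors the 4 generator GPUs in the example above.

import torch

# Toy sketch of the sliding-window training scheme: generate a chunk of frames,
# score it, then carry the last generated frame into the next chunk.
n_frames_total  = 28     # frames to train on per sequence
frames_per_pass = 4      # frames the generator GPUs produce in one pass
C, H, W = 3, 64, 64      # toy frame size

video = torch.rand(n_frames_total, C, H, W)   # stand-in for a real training sequence
fake_prev = video[0:1]                        # first frame, e.g. from --use_single_G

for start in range(0, n_frames_total, frames_per_pass):   # 28 / 4 = 7 passes
    real_chunk = video[start:start + frames_per_pass]     # next 4 real frames
    # placeholder "generator": each frame is conditioned on the previous one
    fake_chunk, prev = [], fake_prev[-1]
    for _ in range(real_chunk.size(0)):
        prev = 0.5 * prev + 0.5 * torch.rand_like(prev)   # placeholder synthesis
        fake_chunk.append(prev)
    fake_chunk = torch.stack(fake_chunk)
    # placeholder "discriminator" loss; backward() and optimizer steps would go here
    loss = (fake_chunk - real_chunk).abs().mean()
    # the last generated frame seeds the next pass
    fake_prev = fake_chunk[-1:].detach()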
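
For reference, a hypothetical training invocation combining several of the flags above might look like the following. It assumes the repo's train.py entry point, and the experiment name and flag values are illustrative only, not the settings used in the paper; see the example scripts under the scripts/ subdirectories for the provided configurations.

python train.py --name label2city_512 --label_nc 35 --loadSize 512 --use_instance --fg --n_gpus_gen 4 --n_frames_total 12 --max_frames_per_gpu 2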

Core Design

Discriminator
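
Below is an excerpt of the discriminator model's forward pass (surrounding code omitted). When scale_T > 0 it computes the losses for a coarser temporal scale; otherwise it computes the spatial GAN, VGG, flow, and warping losses for the current spatial scale, plus the losses of an additional face discriminator when --add_face_disc is set.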

class Vid2VidModelD(BaseModel):
    def forward(self, scale_T, tensors_list, dummy_bs=0):
        lambda_feat = self.opt.lambda_feat
        lambda_F = self.opt.lambda_F
        lambda_T = self.opt.lambda_T
        scale_S = self.opt.n_scales_spatial
        tD = self.opt.n_frames_D
        if tensors_list[0].get_device() == self.gpu_ids[0]:
            tensors_list = util.remove_dummy_from_tensor(tensors_list, dummy_bs)
            if tensors_list[0].size(0) == 0:                
                return [self.Tensor(1, 1).fill_(0)] * (len(self.loss_names_T) if scale_T > 0 else len(self.loss_names))
        
        if scale_T > 0:
            real_B, fake_B, flow_ref, conf_ref = tensors_list
            _, _, _, self.height, self.width = real_B.size()
            loss_D_T_real, loss_D_T_fake, loss_G_T_GAN, loss_G_T_GAN_Feat = self.compute_loss_D_T(real_B, fake_B, 
                flow_ref/20, conf_ref, scale_T-1)            
            loss_G_T_Warp = torch.zeros_like(loss_G_T_GAN)

            loss_list = [loss_G_T_GAN, loss_G_T_GAN_Feat, loss_D_T_real, loss_D_T_fake, loss_G_T_Warp]
            loss_list = [loss.view(-1, 1) for loss in loss_list]
            return loss_list            

        real_B, fake_B, fake_B_raw, real_A, real_B_prev, fake_B_prev, flow, weight, flow_ref, conf_ref = tensors_list
        _, _, self.height, self.width = real_B.size()

        ################### Flow loss #################
        if flow is not None:
            # similar to flownet flow        
            loss_F_Flow = self.criterionFlow(flow, flow_ref, conf_ref) * lambda_F / (2 ** (scale_S-1))        
            # warped prev image should be close to current image            
            real_B_warp = self.resample(real_B_prev, flow)                
            loss_F_Warp = self.criterionFlow(real_B_warp, real_B, conf_ref) * lambda_T
            
            ################## weight loss ##################
            loss_W = torch.zeros_like(weight)
            if self.opt.no_first_img:
                dummy0 = torch.zeros_like(weight)
                loss_W = self.criterionFlow(weight, dummy0, conf_ref)
        else:
            loss_F_Flow = loss_F_Warp = loss_W = torch.zeros_like(conf_ref)

        #################### fake_B loss ####################        
        ### VGG + GAN loss 
        loss_G_VGG = (self.criterionVGG(fake_B, real_B) * lambda_feat) if not self.opt.no_vgg else torch.zeros_like(loss_W)
        loss_D_real, loss_D_fake, loss_G_GAN, loss_G_GAN_Feat = self.compute_loss_D(self.netD, real_A, real_B, fake_B)
        ### Warp loss
        fake_B_warp_ref = self.resample(fake_B_prev, flow_ref)
        loss_G_Warp = self.criterionWarp(fake_B, fake_B_warp_ref.detach(), conf_ref) * lambda_T
        
        if fake_B_raw is not None:
            if not self.opt.no_vgg:
                loss_G_VGG += self.criterionVGG(fake_B_raw, real_B) * lambda_feat        
            l_D_real, l_D_fake, l_G_GAN, l_G_GAN_Feat = self.compute_loss_D(self.netD, real_A, real_B, fake_B_raw)        
            loss_G_GAN += l_G_GAN; loss_G_GAN_Feat += l_G_GAN_Feat
            loss_D_real += l_D_real; loss_D_fake += l_D_fake

        if self.opt.add_face_disc:
            face_weight = 2
            ys, ye, xs, xe = self.get_face_region(real_A)
            if ys is not None:                
                loss_D_f_real, loss_D_f_fake, loss_G_f_GAN, loss_G_f_GAN_Feat = self.compute_loss_D(self.netD_f,
                    real_A[:,:,ys:ye,xs:xe], real_B[:,:,ys:ye,xs:xe], fake_B[:,:,ys:ye,xs:xe])  
                loss_G_f_GAN *= face_weight  
                loss_G_f_GAN_Feat *= face_weight                  
            else:
                loss_D_f_real = loss_D_f_fake = loss_G_f_GAN = loss_G_f_GAN_Feat = torch.zeros_like(loss_D_real)

        loss_list = [loss_G_VGG, loss_G_GAN, loss_G_GAN_Feat,
                     loss_D_real, loss_D_fake, 
                     loss_G_Warp, loss_F_Flow, loss_F_Warp, loss_W]
        if self.opt.add_face_disc:
            loss_list += [loss_G_f_GAN, loss_G_f_GAN_Feat, loss_D_f_real, loss_D_f_fake]   
        loss_list = [loss.view(-1, 1) for loss in loss_list]           
        return loss_list
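
As noted in the flag descriptions above, the coarser temporal scales see frames subsampled by a factor of n_frames_D per scale. The snippet below is a minimal, standalone sketch of that subsampling idea only, not the repo's actual data path (the forward function above already receives the selected frame tensors).

import torch

# Standalone sketch of multi-scale temporal subsampling: at scale s the
# discriminator looks at n_frames_D frames taken every n_frames_D ** s frames.
n_frames_D = 3
n_scales_temporal = 3
frames = torch.arange(27)                        # stand-in frame indices

for s in range(n_scales_temporal):
    step = n_frames_D ** s                       # 1, 3, 9
    window = frames[::step][:n_frames_D]         # the frames this scale sees
    print(f"scale {s}: every {step} frame(s) -> indices {window.tolist()}")
# scale 0 -> [0, 1, 2], scale 1 -> [0, 3, 6], scale 2 -> [0, 9, 18]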

Generator
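
Below is an excerpt of the generator model (surrounding code omitted). encode_input converts the integer label maps into one-hot tensors (and concatenates instance edge maps when --use_instance is set), while forward generates the frames for one training step, reusing the previously generated frames carried in fake_B_prev.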

class Vid2VidModelG(BaseModel):
    def encode_input(self, input_map, real_image, inst_map=None):        
        size = input_map.size()
        self.bs, tG, self.height, self.width = size[0], size[1], size[3], size[4]
        
        input_map = input_map.data.cuda()                
        if self.opt.label_nc != 0:                        
            # create one-hot vector for label map             
            oneHot_size = (self.bs, tG, self.opt.label_nc, self.height, self.width)
            input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_()
            input_label = input_label.scatter_(2, input_map.long(), 1.0)    
            input_map = input_label        
        input_map = Variable(input_map)
                
        if self.opt.use_instance:
            inst_map = inst_map.data.cuda()            
            edge_map = Variable(self.get_edges(inst_map))            
            input_map = torch.cat([input_map, edge_map], dim=2)
        
        pool_map = None
        if self.opt.dataset_mode == 'face':
            pool_map = inst_map.data.cuda()
        
        # real images for training
        if real_image is not None:
            real_image = Variable(real_image.data.cuda())   

        return input_map, real_image, pool_map

    def forward(self, input_A, input_B, inst_A, fake_B_prev, dummy_bs=0):
        tG = self.opt.n_frames_G           
        gpu_split_id = self.opt.n_gpus_gen + 1        
        if input_A.get_device() == self.gpu_ids[0]:
            input_A, input_B, inst_A, fake_B_prev = util.remove_dummy_from_tensor([input_A, input_B, inst_A, fake_B_prev], dummy_bs)
            if input_A.size(0) == 0: return self.return_dummy(input_A)
        real_A_all, real_B_all, _ = self.encode_input(input_A, input_B, inst_A)        

        is_first_frame = fake_B_prev is None
        if is_first_frame: # at the beginning of a sequence; needs to generate the first frame
            fake_B_prev = self.generate_first_frame(real_A_all, real_B_all)                    
                        
        netG = []
        for s in range(self.n_scales): # broadcast netG to all GPUs used for generator
            netG_s = getattr(self, 'netG'+str(s))                        
            netG_s = torch.nn.parallel.replicate(netG_s, self.opt.gpu_ids[:gpu_split_id]) if self.split_gpus else [netG_s]
            netG.append(netG_s)

        start_gpu = self.gpu_ids[1] if self.split_gpus else real_A_all.get_device()        
        fake_B, fake_B_raw, flow, weight = self.generate_frame_train(netG, real_A_all, fake_B_prev, start_gpu, is_first_frame)        
        fake_B_prev = [B[:, -tG+1:].detach() for B in fake_B]
        fake_B = [B[:, tG-1:] for B in fake_B]

        return fake_B[0], fake_B_raw, flow, weight, real_A_all[:,tG-1:], real_B_all[:,tG-2:], fake_B_prev
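
The scatter_ call in encode_input is what turns an integer label map into a one-hot tensor along the channel dimension. The snippet below is a standalone illustration of just that step, with toy sizes and on the CPU (the real code runs on the GPU inside the model above).

import torch

# Standalone illustration of the one-hot encoding in encode_input: an integer
# label map of shape (bs, tG, 1, H, W) is scattered into a one-hot tensor of
# shape (bs, tG, label_nc, H, W) along dim=2. Toy sizes, CPU only.
bs, tG, label_nc, H, W = 1, 3, 35, 4, 4
label_map = torch.randint(0, label_nc, (bs, tG, 1, H, W))       # integer class ids

one_hot = torch.zeros(bs, tG, label_nc, H, W)
one_hot.scatter_(2, label_map.long(), 1.0)                      # put a 1 at each pixel's class channel

assert one_hot.sum(dim=2).eq(1).all()                           # exactly one active channel per pixel
assert one_hot.argmax(dim=2, keepdim=True).eq(label_map).all()  # recovers the original labels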