Dataset Viewer
Columns (name: type, value range or string-length range shown by the viewer):

relative_rank: int64, range 0 to 16.8k
win_prob: float64, range 0.01 to 0.99
title: string, lengths 8 to 158
abstract: string, lengths 227 to 2.78k
full_text: string, lengths 76 to 184k
review: string, lengths 62 to 28.2k
source: string, 4 classes
review_src: string, 1 class
criteria_count: dict
dimension_scores: dict
overall_score: float64, range 0 to 6
thinking_trace: string, lengths 5 to 6.3k
prompt: string, lengths 636 to 184k
prompt_length: int64, range 120 to 82.1k
conversations: list, lengths 2 to 2
year: int64, range 2.02k to 2.02k
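A minimal sketch of loading and inspecting this dataset with the Hugging Face `datasets` library; the repository ID and the split name below are placeholders, since neither is shown on this page.

```python
from datasets import load_dataset

# Placeholder repository ID and split name -- substitute the actual values.
ds = load_dataset("username/reviews-with-win-prob", split="train")

print(ds.features)            # column names and feature types, as tabulated above
row = ds[0]                   # one example row, like the one shown below
print(row["relative_rank"], row["win_prob"], row["overall_score"])
print(row["title"])
print(row["review"][:300])    # first 300 characters of the review text
print(row["dimension_scores"])
```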
relative_rank: 0
win_prob: 0.992632
Representational Disentanglement for Multi-Domain Image Completion
Multi-domain data are widely leveraged in vision applications to take advantage of complementary information from each modality, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to different imaging protocol and data loss or corruption, the availability of images in all domains could vary amongst multiple data sources in practice, which makes it challenging to train and test a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the possible missing domain of the input data in a variety of application settings. Specifically, we develop a novel generative adversarial network (GAN) architecture that utilizes a representational disentanglement scheme for shared 'skeleton' encoding and separate 'flesh' encoding across multiple domains. We further illustrate that the learned representation in the multi-domain image translation could be leveraged for higher-level recognition, like segmentation. Specifically, we introduce a unified framework of image completion branch and segmentation branch with a shared content encoder. We demonstrate constant and significant performance improvement by integrating the proposed representation disentanglement scheme in both multi-domain image completion and image segmentation tasks using three evaluation datasets individually for brain tumor segmentation, prostate segmentation, and facial expression image completion.
Title: REPRESENTATIONAL DISENTANGLEMENT FOR MULTI-DOMAIN IMAGE COMPLETION Abstract: Multi-domain data are widely leveraged in vision applications to take advantage of complementary information from each modality, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to different imaging protocol and data loss or corruption, the availability of images in all domains could vary amongst multiple data sources in practice, which makes it challenging to train and test a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the possible missing domain of the input data in a variety of application settings. Specifically, we develop a novel generative adversarial network (GAN) architecture that utilizes a representational disentanglement scheme for shared 'skeleton' encoding and separate 'flesh' encoding across multiple domains. We further illustrate that the learned representation in the multi-domain image translation could be leveraged for higher-level recognition, like segmentation. Specifically, we introduce a unified framework of image completion branch and segmentation branch with a shared content encoder. We demonstrate constant and significant performance improvement by integrating the proposed representation disentanglement scheme in both multi-domain image completion and image segmentation tasks using three evaluation datasets individually for brain tumor segmentation, prostate segmentation, and facial expression image completion. 1. INTRODUCTION: Multi-domain images are often required as inputs due to the nature that different domains could provide complementary knowledge in various vision tasks. For example, four medical imaging modalities, MRI with T1, T1-weighted, T2-weighted, FLAIR (FLuid-Attenuated Inversion Recovery), are acquired as a standard protocol to segment the accurate tumor regions for each patient in the brain tumor segmentation task (Menze et al., 2014) . Each modality can provide distinct features to locate the true tumor boundaries from a differential diagnosis perspective. Additionally, when it comes to the natural image tasks, there are similar scenarios such as person re-identification across different cameras and different times (Zheng et al., 2015; 2019) . Here, the medical images in different contrast modalities or natural images with the person under different appearance or cameras can both be considered as different domains, which all contribute to depict the underlying object or scene from different aspects of view. However, some domains might be missing in practice. In large-scale datasets from multiple institutes, it is generally difficult or even infeasible to guarantee the availability of complete domains for all the data. For example, in some cases, the patients might lack some imaging scans due to different imaging protocol or data loss or corruption. In terms of taking the most advantage of all these rare and valuable data, it is costly to just throw away the incomplete samples during training, and even infeasible to test with missing domain input. Thus, it becomes necessary to effectively artificially generate the missing data. An intuitive approach is to substitute the missing domains with the nearest neighbor among other existing samples, but this might lack of semantic consistency among domains of the input sample since it only focuses on pixel-level similarity with existing data. 
Additionally, the recent success of GANs (Goodfellow et al., 2014; Mirza & Osindero, 2014; Isola et al., 2017; Zhu et al., 2017a; b; Kim et al., 2017; Liu et al., 2017; Choi et al., 2018; Yoon et al., 2018; Lee et al., 2019a) in image-to-image translation provides another possible solution for this challenge. By training a GAN-based model to generate images for missing domains, we are more likely to keep the semantic consistency with existing domains by learning the semantic representations. CycleGAN (Zhu et al., 2017a) shows an impressive performance in the image-to-image translation task via cycle-consistency constraints between real images and generated images. However, CycleGAN mainly focuses on the 1-to-1 mapping between two specific domains and assumes the corresponding images in two domains strictly share the same representation in latent space. This is limited in multi-domain applications since n(n -1) mapping functions are required if there are n domains. Following this, StarGAN (Choi et al., 2018) proposes a general method for multi-domain image translation using a mask vector in inputs to specify the desired target domain. Meantime, RadialGAN (Yoon et al., 2018) also deals with the multi-domain generation problem with the assumption that all the domains share the same latent space. Although StarGAN and RadialGAN make it possible to generate images in different target domains through the 1-to-n translation, the representation learning and image generation are always conditioned on the input with only one source domain. In order to take advantage of multiple available domains as inputs for representation learning, CollaGAN (Lee et al., 2019a) proposes a collaborative model to incorporate multiple available domains to generate one missing domain. Similar to StarGAN, CollaGAN only relies on the cycle-consistency constraints to preserve the contents in generated images, which are actually implicit constraints between real images and fake images in pixel level. Additionally, since the target domain is controlled by an one-hot mask vector in the input, CollaGAN is essentially doing n-to-1 translation in one inference. Our goal in this work is to propose a more general n-to-n image translation framework that could overcome the aforementioned limitations as illustrated in the Fig. 1 . Recently, learning disentangled representation is proposed to capture the full distribution of possible outputs by introducing a random style code (Chen et al., 2016; Higgins et al., 2017; Huang et al., 2018; Lee et al., 2018) . InfoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017) learn the disentangled representation without supervision. In image-to-image translation tasks, DIRT (Lee et al., 2018) learns disentangled content and attribute features by exchanging the features encoded from images of two domains respectively and then reverting back again. The image consistency during translation is constrained by code reconstruction and image reconstruction. With a similar code exchange framework, MUNIT (Huang et al., 2018) assumes a prior distribution on style code, which allows sampling different style codes to generate multiple images in target domain. However, both DIRT and MUNIT only deal with image translation between two domains, which is not efficient to train n(n -1) mapping functions in a n-domain task. Inspired by previous works, we propose a n-to-n multi-domain image translation approach based on the representational disentanglement scheme for multi-domain image completion (ReMIC). 
Specifically, our contributions are three-fold: (1) We propose a GAN architecture with representation disentanglement to learn domain-shared features and domain-specific features for the more general and flexible n-to-n image translation; (2) We illustrate the effectiveness of the representation learning for high-level tasks by building a unified framework for jointly learning the generation and segmentation with a shared content encoder; (3) Extensive experiments with three different datasets demonstrate that the proposed method achieves better performance than previous state-of-the-art methods on both generation and segmentation. 2. METHOD: As discussed above, images from different domains for the same data sample could present their exclusive features of the data subject. Nonetheless, they also inherit some global content structures Figure 2 : Overview of the proposed nto-n multi-domain completion and segmentation framework. n = 4 and two domain data are missing in this example. Our model contains a unified content encoder E c (red lines), domainspecific style encoders E s i (1 ≤ i ≤ n, orange lines) and generators G i (1 ≤ i ≤ n, blue lines). A variety of loss was adopted (burgundy lines), i.e., image consistency loss for visible domains, latent consistency loss, adversarial loss and reconstruction loss for the generated images. Furthermore, the representational learning framework is flexible to combine a segmentation generator G S following the content code for the unified image generation and segmentation. of it. Let us take the parametric MRI for brain tumors as an example. T2 and FLAIR MRI can highlight the differences in tissues' water relaxational properties, which will distinguish the tumor tissue from normal ones. Contrasted T1 MRI can examine the pathological intratumoral take-up of contrast agents so that the boundary between tumor core and the rest will be highlighted. However, the underline anatomical structure of the brain is shared by all these three modalities. With the availability of multiple domain data, it will be meaningful to decompose the images into the shared content structure (skeleton) and meanwhile distinguish and model their unique characteristics (flesh) through learning. Therefore, we will be able to reconstruct the missing image during the testing by using the shared skeleton (extracted from the available data domains) and a sampled flesh from the learned model. Without assuming a fixed number of missing domains during the model training, the learned framework could flexibly handle one or more missing domains. In addition, we further enforce the accuracy of the extracted content structure by connecting it to the segmentation tasks. In such manner, the disentangled representations of multiple domain images (both the skeleton and flesh) can help in both the image completion and segmentation tasks. Suppose there are n domains: {χ 1 , χ 2 , • • • , χ n }. Let x 1 ∈ χ 1 , x 2 ∈ χ 2 , • • • , x n ∈ χ n are images from n different domains respectively, which are grouped data describing the same subject x = {x 1 , • • • , x n }. In total, we assume the whole dataset contains M independent data samples. For each data sample, we assume one or more of the n images might be randomly missing. The goal of our first task is to complete all the missing domains for all the samples. To accomplish the completion of all missing domains from a random set of available domains, we assume the n domains share the latent representation of underline structure. 
We name the shared latent representation as content code and meanwhile each domain also contains the domain-specific latent representation, i.e., style code that is related to the various features or attributes in different domains. The missing domains can be reconstructed from these two-aspect of information through the learning of deep neural networks. Similar to the setting in MUNIT (Huang et al., 2018) , we also assume a prior distribution for style latent code as N (0, I) to capture the full distribution of possible outputs in each domain. However, MUNIT trains separate content encoder for each domain and enforce the disentanglement via coupled cross-domain translation during training while the proposed method employs one single content encoder to extract the true representation shared across all the domains. 2.1. UNIFIED IMAGE COMPLETION AND SEGMENTATION FRAMEWORK: As shown in Figure 2 , our model contains a unified content encoder E c and domain-specific style encoders E s i (1 ≤ i ≤ n). Content encoder E c extracts the content code c from all existing domains: E c (x 1 , x 2 , • • • , x n ) = c. For the missing domains, we use zero padding in corresponding input channels. For each domain, a style encoder E s i learns the domain-specific style code s i from the corresponding domain image x i respectively (1 ≤ i ≤ n): E s i (x i ) = s i . During the training, our model captures the content code c and style codes s i (1 ≤ i ≤ n) (assumed in a prior distribution of N (0, I) as shown in Figure 2 ) through the encoding process (denoted as red and orange arrows respectively) with a random set of input images (in green box). We only need to train a single ReMIC model to generate all these missing images from sets of available domains. In the generation process, our model samples style codes from prior distribution and recombines content code to generate images in n domains through generators G i (1 ≤ i ≤ n) (denoted as blue arrows). The generator G i for each domain generates the fake images in corresponding domain from the domain-shared content code and the domain-specific style code: G i (c, s i ) = xi . As discussed before, it is fairly common to find that there are missing domains for input data. Some straight-forward solutions to complete all the missing images include zero filling, average image, or generating images via image translation model, etc. Alternatively, based on the content code learned in our model, we could develop a unified model for multi-task learning of both generation and segmentation. Specifically, another branch of segmentation generator G S is added after content encoder to generate segmentation mask. By optimizing the generation loss and segmentation Dice loss (detailed in Section 2.2) simultaneously, the model could adaptively learn how to generate missing images to promote the segmentation performance. 2.2. TRAINING LOSS: In the training of GAN models, the setting of losses is of paramount importance to the final generation results. Our loss objective contains the cycle-consistency loss of images and codes within and across domains, adversarial loss and reconstruction loss on the generation and input images. Image Consistency Loss: For each sample, the content and style encoders are able to extract a domain-shared content code and domain-specific style codes respectively for each available domain. Then by recombining the content and style codes, the domain generators are expected to reconstruct the input images from each domain. 
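As an illustration of the encode/decode flow just described, below is a minimal PyTorch-style sketch: missing domains are zero-filled, a single shared content encoder produces c, and each domain has its own style encoder and generator. The layer sizes loosely follow the implementation details reported in Appendix D.2 of this paper (a 7 × 7 stride-1 convolution with 64 filters, two 4 × 4 stride-2 convolutions, instance normalization, an 8-dimensional style code); everything else, including the simplified generators and module names, is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ReMICSketch(nn.Module):
    """Illustrative skeleton of the encode/decode flow: a shared content
    encoder E^c over all domains, plus per-domain style encoders E^s_i and
    generators G_i. Not the authors' implementation."""

    def __init__(self, n_domains: int, style_dim: int = 8):
        super().__init__()
        self.style_dim = style_dim
        # Shared content encoder: all n domains stacked as input channels
        # (missing domains are zero-filled before encoding).
        self.content_enc = nn.Sequential(
            nn.Conv2d(n_domains, 64, 7, 1, 3), nn.InstanceNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.InstanceNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 256, 4, 2, 1), nn.InstanceNorm2d(256), nn.ReLU(),
        )
        # One style encoder per domain, mapping an image to a style vector.
        self.style_encs = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 64, 4, 2, 1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(64, style_dim))
            for _ in range(n_domains))
        # One generator per domain, decoding (content code, style code) to an image.
        self.generators = nn.ModuleList(
            nn.Sequential(nn.Conv2d(256 + style_dim, 256, 3, 1, 1), nn.ReLU(),
                          nn.Upsample(scale_factor=4, mode="nearest"),
                          nn.Conv2d(256, 1, 7, 1, 3), nn.Tanh())
            for _ in range(n_domains))

    def forward(self, images, present):
        # images: (B, n, H, W); present: (n,) bool flags of available domains.
        x = images * present.float()[None, :, None, None]   # zero-fill missing ones
        c = self.content_enc(x)                              # shared content code
        outputs = []
        for i, gen in enumerate(self.generators):
            if bool(present[i]):
                s = self.style_encs[i](x[:, i:i + 1])        # encode style of x_i
            else:
                s = torch.randn(x.size(0), self.style_dim,
                                device=x.device)             # sample style ~ N(0, I)
            s_map = s[:, :, None, None].expand(-1, -1, c.size(2), c.size(3))
            outputs.append(gen(torch.cat([c, s_map], dim=1)))  # G_i(c, s_i)
        return c, torch.cat(outputs, dim=1)                  # content code, n images
```

For example, for four-modality brain MRI slices with the last modality missing, one could call `model(images, torch.tensor([True, True, True, False]))` to obtain the shared content code and completed images for all four domains.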
The image consistency loss constrains the reconstructed images to match the real images along the "Image → Code → Image" path:

$$L^{x_i}_{cyc} = \mathbb{E}_{x_i \sim p(x_i)} \left[ \| G_i(E^c(x_1, x_2, \cdots, x_n), E^s_i(x_i)) - x_i \|_1 \right] \quad (1)$$

where $p(x_i)$ is the data distribution in domain $\chi_i$ ($1 \le i \le n$). Latent Consistency Loss: Next, the generated fake images can also be encoded into content and style codes by the same encoders. The latent consistency loss constrains the codes before decoding and after re-encoding along the "Code → Image → Code" path:

$$L^{c}_{cyc} = \mathbb{E}_{c \sim p(c),\, s_i \sim p(s_i)} \left[ \| E^c(G_1(c, s_1), \cdots, G_n(c, s_n)) - c \|_1 \right] \quad (2)$$

$$L^{s_i}_{cyc} = \mathbb{E}_{c \sim p(c),\, s_i \sim p(s_i)} \left[ \| E^s_i(G_i(c, s_i)) - s_i \|_1 \right] \quad (3)$$

where $p(s_i)$ is the prior distribution $\mathcal{N}(0, I)$, and $p(c)$ is given by $c = E^c(x_1, x_2, \cdots, x_n)$ with $x_i \sim p(x_i)$ ($1 \le i \le n$). Adversarial Loss: Adversarial learning between the generators and discriminators forces the distribution of the generated fake images to approach the real image distribution of each domain:

$$L^{x_i}_{adv} = \mathbb{E}_{x_i \sim p(x_i)} \left[ \log D_i(x_i) \right] + \mathbb{E}_{c \sim p(c),\, s_i \sim p(s_i)} \left[ \log \left( 1 - D_i(G_i(c, s_i)) \right) \right] \quad (4)$$

where $D_i$ is the discriminator for domain $i$ that distinguishes the generated fake images $\hat{x}_i$ from the real images $x_i \in \chi_i$. Reconstruction Loss: In addition to the feature-level consistency above, which relates generated fake images and real images across domains, we also constrain the pixel-level similarity between generated images and ground-truth images in the same domain during training:

$$L^{x_i}_{recon} = \mathbb{E}_{x_i \sim p(x_i),\, s_i \sim p(s_i)} \left[ \| G_i(E^c(x_1, x_2, \cdots, x_n), s_i) - x_i \|_1 \right] \quad (5)$$

Total Loss: The encoders, decoders, and discriminators are jointly trained to optimize the total objective:

$$\min_{E^c, E^s_1, \cdots, E^s_n} \max_{D_1, \cdots, D_n} L(E^c, E^s_1, \cdots, E^s_n, D_1, \cdots, D_n) = \lambda_{adv} \sum_{i=1}^{n} L^{x_i}_{adv} + \lambda^{x}_{cyc} \sum_{i=1}^{n} L^{x_i}_{cyc} + \lambda^{c}_{cyc} L^{c}_{cyc} + \lambda^{s}_{cyc} \sum_{i=1}^{n} L^{s_i}_{cyc} + \lambda_{recon} \sum_{i=1}^{n} L^{x_i}_{recon} \quad (6)$$

where $\lambda_{adv}$, $\lambda^{x}_{cyc}$, $\lambda^{c}_{cyc}$, $\lambda^{s}_{cyc}$, and $\lambda_{recon}$ are weights. In the n-to-n image translation task, the model learns a complementary representation of multiple domains, which can also facilitate high-level recognition tasks. For example, the extracted content code (containing the underlying anatomical structures) can largely benefit the segmentation of organs and lesions in medical image analysis. Conversely, the segmentation task enforces the learning of a more representative content encoder. Therefore, we train a multi-task network for both segmentation and generation. In the proposed framework, we construct a unified generation and segmentation model by adding a segmentation generator $G_S$ on top of the content code extracted from the generated fake images, as shown in Fig. 2. We utilize the Dice loss (Salehi et al., 2017) for accurate segmentation of multiple input images:

$$L_{seg} = 1 - \frac{1}{L} \sum_{l=1}^{L} \frac{\sum_{p} \hat{y}_p(l) \, y_p(l)}{\sum_{p} \left( \hat{y}_p(l) + y_p(l) - \hat{y}_p(l) \, y_p(l) \right)} \quad (7)$$

where $L$ is the total number of classes, $p$ is the pixel position in the image, $\hat{y}$ is the predicted segmentation probability map from $G_S$, and $y$ is the ground-truth segmentation mask. The segmentation loss can be added to the total loss in Equation 6 for end-to-end joint learning. 3. EXPERIMENTAL RESULTS AND DISCUSSION: We first show the advantage of our proposed method in the n-to-n image completion task given a random set of available domains. Moreover, we illustrate that the proposed model (a variation with two branches) provides an efficient solution to multi-domain segmentation with missing image inputs. To demonstrate the generalizability of the proposed algorithm, we evaluate it on a natural image dataset as well as two medical image datasets. In total, three datasets are employed in our experiments, i.e., BraTS, ProstateX, and RaFD.
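Returning to the training objective of Section 2.2, the following is a minimal PyTorch-style sketch of the generator-side losses in Eqs. (1)-(6) and the Dice loss in Eq. (7). The function signatures and tensor layout are assumptions; the default loss weights mirror the values reported in Appendix D.1 of this paper (λ_adv = 1, λ^x_cyc = 10, λ^c_cyc = 1, λ^s_cyc = 1, λ_recon = 20).

```python
import torch
import torch.nn.functional as F

def dice_loss(pred_probs, target_onehot, eps=1e-7):
    """Soft Dice loss in the spirit of Eq. (7): one minus the mean per-class
    soft Dice, with sums taken over batch and pixel dimensions."""
    dims = (0, 2, 3)
    inter = (pred_probs * target_onehot).sum(dims)
    union = (pred_probs + target_onehot - pred_probs * target_onehot).sum(dims)
    return 1.0 - (inter / (union + eps)).mean()

def remic_generator_loss(real, rec_from_enc_style, gen_from_prior,
                         c, c_rec, s_list, s_rec_list, d_fake_logits,
                         w_adv=1.0, w_xcyc=10.0, w_ccyc=1.0, w_scyc=1.0,
                         w_recon=20.0):
    """Generator-side objective sketched from Eqs. (1)-(6).

    rec_from_enc_style : G_i(c, E^s_i(x_i)), reconstruction with encoded styles
    gen_from_prior     : G_i(c, s_i), with s_i sampled from the N(0, I) prior
    c, c_rec           : content code and its re-encoding from generated images
    s_list, s_rec_list : sampled style codes and their re-encodings
    d_fake_logits      : discriminator logits on generated images
    """
    l_img_cyc = F.l1_loss(rec_from_enc_style, real)                       # Eq. (1)
    l_c_cyc = F.l1_loss(c_rec, c)                                         # Eq. (2)
    l_s_cyc = sum(F.l1_loss(sr, s) for sr, s in zip(s_rec_list, s_list))  # Eq. (3)
    l_adv = F.binary_cross_entropy_with_logits(                           # Eq. (4),
        d_fake_logits, torch.ones_like(d_fake_logits))                    # G side
    l_recon = F.l1_loss(gen_from_prior, real)                             # Eq. (5)
    return (w_adv * l_adv + w_xcyc * l_img_cyc + w_ccyc * l_c_cyc +
            w_scyc * l_s_cyc + w_recon * l_recon)                         # Eq. (6)
```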
ProstateX: The ProstateX dataset (Litjens et al., 2014) contains multi-parametric prostate MR scans for 98 subjects. Here, we use the three modalities of each sample: 1) T2-weighted (T2), 2) Apparent Diffusion Coefficient (ADC), and 3) high b-value DWI images (HighB). We randomly divide the dataset into 78 and 20 subjects for training and testing respectively. Similar to BraTS, 2D slices are extracted from the 3D volumes for each modality. In total, the training and testing sets contain 3,540 and 840 images respectively. We resize the images from 384 × 384 to 256 × 256. Prostate regions are manually labeled as the whole prostate (WP) by board-certified radiologists. BraTS: The BraTS dataset is described under "Figures and Tables" below. RaFD: The Radboud Faces Database (RaFD) (Langner et al., 2010) contains eight facial expressions collected from 67 participants: neutral, angry, contemptuous, disgusted, fearful, happy, sad, and surprised. We adopt images from three different camera angles (45°, 90°, 135°) along with three different gaze directions (left, frontal, right), 4,824 images in total. We treat the eight paired images in the eight expression domains as one data sample. 54 participants (3,888 images) and 13 participants (936 images) are randomly divided into the training and testing sets. Following the setting in StarGAN (Choi et al., 2018), we crop the images with the face in the center and then resize them to 128 × 128. 3.1. RESULTS OF MULTI-DOMAIN IMAGE COMPLETION: For comparison purposes, we first assume that exactly one domain is missing for each sample, and the algorithm is evaluated multiple times with a different domain missing each time. We then investigate a more general scenario with more than one missing domain and show that our proposed method is capable of handling general random n-to-n image completion. Multiple quantitative metrics are used to evaluate the similarity between the generated image and the original one, i.e., normalized root mean-squared error (NRMSE) and mean structural similarity index (SSIM) between generated images and target images. We compare our results with prior art on all three datasets for this task. The results of the proposed method ("ReMIC"), ReMIC without the reconstruction loss ("ReMIC w/o Recon"), and ReMIC with cross-domain translation ("ReMIC+Multi-Sample", more details in Appendix C.2) are reported. Note that, by leveraging the unified content code and sampling the style code for each domain respectively, the proposed model can handle any number of missing domains, which makes it more general and flexible for random n-to-n image completion. Besides, the compared methods have their own limitations. MUNIT (Huang et al., 2018) conducts 1-to-1 image translation between two domains through representational disentanglement, as shown in Fig. 1(a). In experiments, we train and test MUNIT models between every pair of two domains. Without loss of generality, we select "neutral" images to generate all the other domains following the StarGAN setting, and the "angry" image is used to generate the "neutral" one. In BraTS, "T1" is selected to generate the other domains since it is the most common modality, while "T1" itself is generated from "T1Gd". Similarly, "T2" is selected to generate the other domains in ProstateX, while "T2" itself is generated from "ADC". StarGAN (Choi et al., 2018) adopts a mask vector to generate an image in the specified target domain. Thus, different target domains can be generated from one source domain in multiple inference passes while only a single model is trained.
This is actually a 1-to-n image translation, since only one domain can be used as input in StarGAN, we use the same domain pair match as MUNIT, which is also the same as the setting in the StarGAN (Choi et al., 2018) . CollaGAN (Lee et al., 2019a; b) carries out the n-to-1 image translation as shown in Fig. 1(b) , where multiple source domains collaboratively generate the only one target domain which is assumed missing in inputs. But CollaGAN cannot deal with multiple missing domains. In CollaGAN experiments, we use the same domain generation setting as ours, that is, all the existing domains are used to complete the one missing domain in sequence. Results of Medical Image Completion: Fig. 3 and Fig. 5 in appendix show some sample results of image completion on BraTS and ProstateX, and corresponding quantitative results are in Table 1 . Each row shows the target and generated images of one domain with the assumption of that domain is missing in inputs. In comparison, our model achieves higher similarity to the target and also produce qualitatively more accurate images, e.g., a more accurate outstanding tumor region in BraTS and prostate regions are well-preserved in ProstateX. This is achieved by learning a better content code through factorized latent space in our method, which is essential in preserving the anatomical structures in medical images. Extended quantitative evaluation metrics are in Appendix. Results of Facial Expression Image Translation: Fig. 4 shows a sample result of facial expression image completion on RaFD dataset. In each column, we show the target and generated images in one domain (expression), where we assume the target domain is missing in the inputs and need to be generated from one or more available domains. In this way, we evaluate the missing domains one by one sequentially. Compared with MUNIT and StarGAN results, our method could generate missing images with a better quality, especially in the generating details like teeth, mouth and eyes. This is paritally due to the fact that our method can incorporate complementary information from multiple domains, while MUNIT and StarGAN can only accept one domain as input source. For example, in the generation of "happy" and "disgusted" expressions, either MUNIT or StarGAN could not generate a good teeth and mouth region, since their source domain "neutral" does not contain the teeth. Compared with CollaGAN, our method could generate images with a better content through explicit disentangled representation learning in feature level instead of implicit cycle-consistency constraints in pixel level. The superior performance could also be observed in the NRMSE and SSIM value across all testing samples in Table 2 with all eight expressions. 3.2. RESULTS OF IMAGE SEGMENTATION WITH MISSING DOMAIN: Based on the missing-domain image completion as above, we show that our proposed method could go beyond image translation to solve the missing-domain image segmentation problem. Specifically, our model learns factorized representations by disentangling latent space, which could be efficiently leveraged for high-level tasks. As shown in Fig. 2 , a segmentation branch is added after the learned content codes to generate segmentation prediction mask. We evaluate the segmentation accuracy with Dice coefficient on both BraTS and ProstateX datasets. The results of unified image completion and segmentation model as shown in Table 8 achieve the best dice score in both BraTS and ProstateX datasets. 
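The comparisons above and in the appendix rely on NRMSE, SSIM, MAE, PSNR, and the Dice coefficient. Below is a small sketch of how these could be computed with scikit-image and NumPy; the metric functions are the standard `skimage.metrics` APIs, while the [0, 1] intensity scaling and the binary-mask format are assumptions.

```python
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)

def completion_metrics(target, generated):
    """NRMSE, SSIM, MAE, and PSNR between a target and a generated image.
    Inputs are assumed to be 2D float arrays scaled to [0, 1]."""
    return {
        "nrmse": normalized_root_mse(target, generated),
        "ssim": structural_similarity(target, generated, data_range=1.0),
        "mae": float(np.abs(target - generated).mean()),
        "psnr": peak_signal_noise_ratio(target, generated, data_range=1.0),
    }

def dice_score(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient between two binary masks (e.g., WT/TC/ET or WP)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```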
We train a fully supervised 2D U-shaped segmentation network (a U-net variation, Ronneberger et al. (2015) ) without missing images as the "Oracle". "Oracle+X" means that the results are computed by testing the missing images generated from the "X" method with the pretrained "Oracle" model. "All" represents the full testing set without any missing domains. "ReMIC+Seg" stands for using separate content encoders for generation and segmentation tasks in our proposed unified framework, while "ReMIC+Joint" indicates sharing the weights of content encoder for the two tasks. All methods obtain similar segmentation performances in ProstateX, but ReMIC is still relative closer to the Oracle results. In a more difficult segmentation task (like ET segmentation in BraTS), our proposed method shows significant improvement over other compared methods and even superior to the Oracle in some cases. 4. CONCLUSION: In this work, we propose a general framework for multi-domain image completion, given that one or more input images are missing. We learn shared content and domain-specific style encodings across multiple domains via the representational disentanglement. We also design several loss functions for accurate image generation, including image consistency loss, latent consistency loss, adversarial loss, and reconstruction loss. Our framework is flexible and can be easily extended to a unified generation and segmentation network. Extensive experiments validate the proposed method surpasses baseline methods and previous state-of-the-art methods on both multi-domain image generation and segmentation with missing domains. A EXTENDED RESULTS FOR MULTI-DOMAIN IMAGE COMPLETION: A.1 QUALITATIVE RESULTS OF IMAGE COMPLETION FOR PROSTATEX Due to the page limit, we put the quantitative results of image completion for ProstateX dataset here. As shown in Fig. 5 , our method could generate images much closer to target images in all the three domains compared with other methods. We demonstrate the quantitative results for multi-domain image completion with two more evaluation metrics: Mean Absolute Error (MAE), and Peak Signal-to-Noise Ratio (PSNR) for all three datasets. As shown in this section, Table 4 , Table 5 , and Table 6 are the extended full tables for Table 1 and Table 2 in the main text. The best results are in bold for each domain. It is shown that our method is able to generate images not only closer to the target images (lower MAE, NRMSE, and higher SSIM), but also with higher image quality (higher PSNR), from the results of three datasets. In addition, for BraTS and ProstateX data, the generated images in the jointly trained generationsegmentation model are evaluated using the same metrics as shown in Table 4 and Table 5 . The results indicate that adding segmentation branch does not bring an obvious benefit for image generation. This is because the segmentation sub-module mainly focuses on the tumor region which takes up only a small part among the whole slice image. Besides, we use dice loss as the segmentation training objective which might not be consistent with the metrics used to evaluate generated images that emphasize the pixel-level similarity. B EXTENDED RESULTS FOR IMAGE SEGMENTATION WITH MISSING DOMAIN: Based on the missing-domain image completion, we show that our proposed method could go beyond image translation to solve the missing-domain segmentation problem. 
Specifically, our model learns factorized representations by disentangling latent space, which could be efficiently leveraged the eight domains, we assume that each sample contains at least one existing domain randomly. In other words, there are possibly 0∼7 missing domains for each training sample. This is a very challenging setting where the number and the choice of missing domains are totally random. This difficult problem cannot be solved by existing works, and to our best knowledge, our proposed method is the first one that could achieve the random n-to-n image translation. In testing, we evaluate the model with different number of input domains. We show the results of three testing samples shown in Figs. 6∼ 8. The top half of each column shows the input domain(s), where the missing domains are filled with zeros. The bottom half of each column shows the generation results for all the eight domains no matter if it exists in input domains or not. Firstly, in our method, no matter how many and which domains are available, the model could generate images for all the domains including missing ones in an one-time inference. Comparing the results in each row, we could see that the domain-specific style and domain-shared content are all preserved well even when we push the limit of existing domain(s) to be only one. In addition, when the number of visible domains increase, the content in each domain image is enhanced gradually and gets closer to the target image. This illustrates that our model is efficiently learning a better content code through representation disentanglement as the source domains provide more complementary information. C.2 MULTI-SAMPLE LEARNING: Based on the proposed model in Fig. 2 , we further propose a more advanced model and training strategy when multiple samples are inputted at one time. Generally, based on the assumption of partially shared latent space, we assume that the factorized latent code can independently represent the corresponding content and style information in the input image. Then by exchanging the content codes from two independent samples, it should be able to reconstruct the original input image by recombining the style code extracted from the other sample. Based on this idea, we build a comprehensive model with cross-domain training between two samples. Similarly as the framework in Fig. 2 , the image and code consistency loss and image reconstruction loss are also constrained through the encoding and decoding procedure. The results of multi-sample learning are shown in Table 1 named as "ReMIC+Multi-Sample". C.3 ANALYSIS OF MISSING-DOMAIN SEGMENTATION RESULTS: To better understand why our method is a better solution in missing-domain imputation for multidomain recognition tasks like segmentation, we demonstrate three randomly selected testing samples in BraTS dataset as shown in Fig. 9 . Rows 1∼3 shows the first sample, and the left two samples are shown in the same format. In the first sample, the first row shows real images in four domains and its ground truth segmentation labels. When some domain for this sample is randomly missing, a straightforward solution is to search through all the available training data and find the nearest neighbor (NN) to complete the missing image. We search the nearest neighbor according to the Euclidean distance in 2D image space. The second row shows the images in four domains and the segmentation map for the NN sample, which actually looks very similar to the target sample visually. 
However, we note that the tumor region is seriously different between the target sample and its NN sample, which shows the NN image is not a good missing imputation in semantics. To cope with this issue, our proposed method could generate images for missing domains with not only pixel-level similarity but also similar tumor regions, which are the most significant parts in tumor segmentation task. As shown in the third row, the generated images in four domains are very close to the target images. The segmentation map shows the prediction results when the generated T1 image is gives a segmentation mask very close to the ground truth. These results illustrate the superiority of our method, which results from the disentangled representation learning in feature level. D IMPLEMENTATION DETAILS: Here, we describe the implementation details of our method. We will open source all the source codes and models if get accepted. D.1 HYPERPARAMETERS: In our algorithm, we use the Adam optimizer (Kingma & Ba, 2014) with β 1 = 0.5, β 2 = 0.999. The learning rate is 0.0001. We set the loss weights in Equation 6 as λ adv = 1, λ x cyc = 10, λ c cyc = 1, λ s cyc = 1, λ recon = 20. For comparison purpose, we train the model to 150,000 / 100,000 / 100,000 iterations for BraTS, ProstateX, and RaFD datasets respectively, and compare the results across MUNIT, StarGAN, CollaGAN, and ours ReMIC in all datasets. In ReMIC, we set the dimension of the style code as 8 for comparison purpose with MUNIT. For image generation during testing, we use a fixed style code of 0.5 in each dimension for both MUNIT and ReMIC to compute quantitative results. D.2 NETWORK ARCHITECTURES: The network structures of ReMIC is developed on the backbone of MUNIT model. We describe the details of each module here. Unified Content Encoder: consists of a down-sampling module and residual blocks to extract contexture knowledge from all available domain images in inputs. The down-sampling module contains a 7 × 7 convolutional block with stride 1 and 64 filters, and two 4 × 4 convolutional blocks with stride 2 and, 128 and 256 filters respectively. The strided convolutional layers downsample the input to features maps of size W/4 × H/4 × 256, where W and H are the width and height of input image. Next, there are four residual blocks, each of which contains two 3 × 3 convolutional blocks with 256 filters and stride 1. We apply Instance Normalization (IN) (Ulyanov et al., 2017) after all the convolutional layers. Note that the proposed unified content encoder accepts images of all domains as input (missing domains are filled with zeros in initialization), and learns a universe content code complementarily and collaboratively, which are different from MUNIT. Style Encoder: contains a similar down-sampling module and several residual blocks, which is followed by a global average pooling layer and a fully connected layer to learn the style vector. The down-sampling module is developped using the same structure as that in the unified content encoder above, and two more 4 × 4 convolutional blocks with stride 2 and 256 filters are followed. The final fully connected layer generates style code as a 8-dim vector. There is no IN applied to the Figure 6 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. 
Each column demonstrates the images in one domain in the order of "neutral", "angry", "contemptuous", "disgusted", "fearful", "happy", "sad", "surprised". Figure 7 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images in one domain in the order of "neutral", "angry", "contemptuous", "disgusted", "fearful", "happy", "sad", "surprised". Figure 8 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images in one domain in the order of "neutral", "angry", "contemptuous", "disgusted", "fearful", "happy", "sad", "surprised". : Figures and Tables: Figure 1: Image translation tasks using (a) Star-GAN and MUNIT (1-to-1), (b) CollaGAN (n-to-1), (c) ReMIC (1-to-n), and (d) ReMIC (n-ton). ReMIC completes the missing domain images that can be randomly distributed (N -to-n, N ∈ {1, ..., n}) in the input set. It makes our approach a more general and flexible framework for image translation tasks. The Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018(Menze et al., 2014;Bakas et al., 2017; 2018) provides a set of multi-institutional multimodal brain MRI scans with four modalities: a) native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR). Following the setting in(Lee et al., 2019b), 218 and 28 subjects are randomly selected for training and test sets. The 2D slices at the same location are extracted from 3D MRI volumes for each of four modalities as one independent data sample in our experiments. In total, the training and testing datasets contain 40,148 and 5,340 images respectively. We resize the images of 240 × 240 to 256 × 256. Three labels are given for the brain tumor segmentation, i.e., enhancing tumor (ET), tumor core (TC), and whole tumor (WT). Figure 3: BraTS image completion results. Rows: 4 modalities. Columns: compared methods. Figure 5: ProstateX generation results. Rows: 1) T2, 2) ADC, 3) HighB. 
Columns: 1) Ground truth, 2) MUNIT, 3) StarGAN, 4) CollaGAN, 5) ReMIC.

Table 1: Multi-domain medical image completion results (NRMSE / SSIM).

(a) BraTS

| Methods | T1 | T1Gd | T2 | FLAIR |
|---|---|---|---|---|
| MUNIT | 0.4054 / 0.8904 | 0.3469 / 0.9084 | 0.4185 / 0.8702 | 0.4563 / 0.8728 |
| StarGAN | 0.3028 / 0.9346 | 0.2795 / 0.9351 | 0.5137 / 0.8473 | 0.4417 / 0.8931 |
| CollaGAN | 0.4729 / 0.8951 | 0.4952 / 0.8689 | 0.5243 / 0.8890 | 0.4299 / 0.8616 |
| ReMIC w/o Recon | 0.3502 / 0.9328 | 0.2349 / 0.9448 | 0.4485 / 0.8681 | 0.4214 / 0.8810 |
| ReMIC | 0.1939 / 0.9629 | 0.2143 / 0.9546 | 0.2410 / 0.9469 | 0.2639 / 0.9369 |
| ReMIC+Multi-Sample | 0.2031 / 0.9637 | 0.2096 / 0.9563 | 0.2369 / 0.9490 | 0.2348 / 0.9395 |

(b) ProstateX

| Methods | T2 | ADC | HighB |
|---|---|---|---|
| MUNIT | 0.6904 / 0.4428 | 0.9208 / 0.4297 | 0.9325 / 0.5383 |
| StarGAN | 0.6638 / 0.4229 | 0.9157 / 0.3665 | 0.9188 / 0.4350 |
| CollaGAN | 0.8070 / 0.2667 | 0.7621 / 0.4875 | 0.7722 / 0.6824 |
| ReMIC w/o Recon | 0.8567 / 0.3330 | 0.7289 / 0.5377 | 0.8469 / 0.7818 |
| ReMIC | 0.4908 / 0.5427 | 0.2179 / 0.9232 | 0.3894 / 0.9150 |
| ReMIC+Multi-Sample | 0.4742 / 0.5493 | 0.2171 / 0.9263 | 0.3945 / 0.9116 |

Table 2: RaFD multi-expression translation results (NRMSE / SSIM).

| Methods | Neutral | Angry | Contemptuous | Disgusted |
|---|---|---|---|---|
| MUNIT | 0.1589 / 0.8177 | 0.1637 / 0.8156 | 0.1518 / 0.8319 | 0.1563 / 0.8114 |
| StarGAN | 0.1726 / 0.8206 | 0.1722 / 0.8245 | 0.1459 / 0.8506 | 0.1556 / 0.8243 |
| CollaGAN | 0.1867 / 0.7934 | 0.1761 / 0.7736 | 0.1856 / 0.7928 | 0.1823 / 0.7812 |
| ReMIC w/o Recon | 0.1215 / 0.8776 | 0.1335 / 0.8556 | 0.1192 / 0.8740 | 0.1206 / 0.8559 |
| ReMIC | 0.1225 / 0.8794 | 0.1290 / 0.8598 | 0.1217 / 0.8725 | 0.1177 / 0.8668 |

| Methods | Fearful | Happy | Sad | Surprised |
|---|---|---|---|---|
| MUNIT | 0.1714 / 0.7792 | 0.1623 / 0.8073 | 0.1677 / 0.7998 | 0.1694 / 0.7884 |
| StarGAN | 0.1685 / 0.7943 | 0.1522 / 0.8288 | 0.1620 / 0.8227 | 0.1634 / 0.7974 |
| CollaGAN | 0.1907 / 0.7442 | 0.1829 / 0.7601 | 0.1783 / 0.7766 | 0.1888 / 0.7495 |
| ReMIC w/o Recon | 0.1321 / 0.8384 | 0.1399 / 0.8332 | 0.1284 / 0.8597 | 0.1333 / 0.8347 |
| ReMIC | 0.1316 / 0.8395 | 0.1383 / 0.8406 | 0.1301 / 0.8581 | 0.1276 / 0.8484 |

Missing-domain segmentation: Dice scores are reported here.

BraTS results: Missing-domain imputation for the pretrained segmentation model (full table). Dice scores (WT / TC / ET) are reported for each missing modality.

2D (Oracle with all modalities: 0.910 / 0.849 / 0.708)

| Methods | T1 | T1Gd | T2 | FLAIR |
|---|---|---|---|---|
| Zero | 0.771 / 0.609 / 0.572 | 0.872 / 0.539 / 0.008 | 0.755 / 0.690 / 0.677 | 0.458 / 0.468 / 0.435 |
| Average | 0.870 / 0.744 / 0.674 | 0.882 / 0.603 / 0.303 | 0.849 / 0.732 / 0.686 | 0.655 / 0.710 / 0.648 |
| NN | 0.883 / 0.765 / 0.660 | 0.871 / 0.564 / 0.186 | 0.811 / 0.720 / 0.642 | 0.534 / 0.669 / 0.614 |
| MUNIT | 0.886 / 0.785 / 0.679 | 0.872 / 0.552 / 0.187 | 0.882 / 0.781 / 0.682 | 0.408 / 0.541 / 0.527 |
| StarGAN | 0.897 / 0.795 / 0.704 | 0.886 / 0.588 / 0.184 | 0.851 / 0.725 / 0.661 | 0.570 / 0.664 / 0.604 |
| CollaGAN | 0.860 / 0.747 / 0.651 | 0.864 / 0.576 / 0.252 | 0.882 / 0.811 / 0.700 | 0.663 / 0.697 / 0.663 |
| ReMIC | 0.909 / 0.834 / 0.714 | 0.899 / 0.669 / 0.354 | 0.905 / 0.855 / 0.709 | 0.853 / 0.807 / 0.691 |

3D (Oracle with all modalities: 0.909 / 0.867 / 0.733)

| Methods | T1 | T1Gd | T2 | FLAIR |
|---|---|---|---|---|
| Zero | 0.876 / 0.826 / 0.694 | 0.884 / 0.574 / 0.020 | 0.901 / 0.865 / 0.728 | 0.661 / 0.730 / 0.643 |
| Average | 0.880 / 0.814 / 0.640 | 0.854 / 0.618 / 0.282 | 0.838 / 0.801 / 0.695 | 0.713 / 0.732 / 0.675 |
| NN | 0.890 / 0.829 / 0.703 | 0.859 / 0.538 / 0.081 | 0.790 / 0.799 / 0.704 | 0.472 / 0.686 / 0.607 |
| ReMIC | 0.905 / 0.864 / 0.722 | 0.888 / 0.614 / 0.273 | 0.902 / 0.871 / 0.734 | 0.855 / 0.850 / 0.724 |

BraTS results: Missing-domain segmentation (full table). Dice scores (WT / TC / ET); Oracle with all modalities: 0.910 / 0.849 / 0.708.

| Methods | T1 | T1CE | T2 | FLAIR |
|---|---|---|---|---|
| Zero | 0.904 / 0.818 / 0.710 | 0.888 / 0.687 / 0.394 | 0.907 / 0.842 / 0.720 | 0.841 / 0.793 / 0.692 |
| Average | 0.905 / 0.798 / 0.685 | 0.898 / 0.603 / 0.312 | 0.897 / 0.803 / 0.663 | 0.846 / 0.768 / 0.663 |
| ReMIC | 0.908 / 0.783 / 0.676 | 0.897 / 0.685 / 0.382 | 0.901 / 0.815 / 0.698 | 0.851 / 0.779 / 0.665 |
| ReMIC+Seg | 0.911 / 0.819 / 0.687 | 0.902 / 0.708 / 0.411 | 0.910 / 0.839 / 0.716 | 0.850 / 0.792 / 0.671 |
| ReMIC+Joint | 0.915 / 0.859 / 0.710 | 0.910 / 0.740 / 0.430 | 0.909 / 0.858 / 0.716 | 0.860 / 0.823 / 0.691 |
[Summary] This paper proposes an image translation method, ReMIC, that generates images of missing domains from images in multiple available domains. To achieve this, the authors employ separate latent codes for content and style on top of a shared representation. They compare the proposed model with StarGAN, MUNIT, and CollaGAN on two medical datasets (BraTS, ProstateX) and the RaFD dataset, using NRMSE, SSIM, and oracle GT.

[Pros]
- The task is novel and important, in particular in medical domains.
- Experiments are extensive and the results on medical data look promising.
- Well written.

[Cons]
- The technical novelty of the method is incremental. Most components are from previous work.
- Even if the authors argue that their model uses disentanglement, it is overclaiming to call the separation of style and content disentanglement. There exist several previous studies that separate style and content. To claim disentanglement, more specific information should be separated from styles, such as hair style and lip color.
- The experimental setting is not consistent with the task definition. The task definition assumes k missing domains among n domains, i.e., (n-k)-to-k. However, the experiments cover the k=1 case only: (n-1)-to-1. This setting is similar to that of CollaGAN. I think it is not trivial to extend k to multiple missing domains.
- Figure 1 can be improved. StarGAN performs 1-to-n mapping with target-style one-hot encoding (the authors also refer to that on page 6), unlike Figure 1.
- Experimental results on RaFD are not impressive. Could you evaluate the proposed model on high-resolution data such as CelebA-HQ? Also, for RaFD, other conventional metrics can be used, such as FID.
- A figure of the detailed architecture would help to understand the method.
source: iclr2020
review_src: OpenReview
criteria_count: { "criticism": 3, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 3, "presentation_and_reporting": 6, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 23 }
dimension_scores: { "criticism": 0.13043478260869565, "example": 0.08695652173913043, "importance_and_relevance": 0.043478260869565216, "materials_and_methods": 0.6086956521739131, "praise": 0.13043478260869565, "presentation_and_reporting": 0.2608695652173913, "results_and_discussion": 0.08695652173913043, "suggestion_and_solution": 0.21739130434782608 }
overall_score: 1.565217
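The dimension_scores entries above appear to be the criteria_count entries divided by total (e.g., 3/23 ≈ 0.1304), and overall_score appears to be their sum (36/23 ≈ 1.565217). A small Python sketch of that relationship, assuming it holds in general for this dataset:

```python
# Hypothetical reconstruction of the derived fields for this row.
criteria_count = {
    "criticism": 3, "example": 2, "importance_and_relevance": 1,
    "materials_and_methods": 14, "praise": 3,
    "presentation_and_reporting": 6, "results_and_discussion": 2,
    "suggestion_and_solution": 5, "total": 23,
}

total = criteria_count["total"]
dimension_scores = {k: v / total for k, v in criteria_count.items() if k != "total"}
overall_score = sum(dimension_scores.values())

print(dimension_scores["materials_and_methods"])  # 0.6086956521739131
print(round(overall_score, 6))                    # 1.565217
```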
Let me review this paper on "Representational Disentanglement for Multi-Domain Image Completion" to provide a comprehensive peer review. First, I'll summarize the paper: - The paper proposes a generative adversarial network (GAN) architecture called ReMIC that addresses the problem of multi-domain image completion (handling missing modalities/domains in datasets) - It focuses on disentangling representations into shared "skeleton" content code and domain-specific "flesh" style code - The approach can perform n-to-n image translation, rather than just 1-to-1 or n-to-1 like previous methods - The authors demonstrate how this framework can be extended to segmentation tasks - The method is tested on both medical images (BraTS and ProstateX datasets) and natural images (RaFD dataset) Now, I'll evaluate the scientific contributions and merit of this paper: Strengths: 1. Addressing an important problem: The problem of missing modalities in medical imaging is significant and their solution has real practical value. 2. Novelty: The representational disentanglement scheme with unified content encoder is a novel approach to n-to-n image translation. 3. Comprehensive evaluation: They test on three datasets across both medical and facial image domains. 4. Extension to segmentation: The authors show that their approach extends beyond just image completion to improve segmentation performance. 5. Quantitative improvement: The method shows clear numerical improvements over baseline approaches. 6. Flexibility: The method can handle any arbitrary number of missing modalities, making it highly practical. Potential weaknesses: 1. Model complexity: The architecture involves multiple components and loss functions, which might affect reproducibility and implementation for others. 2. Theoretical foundation: While the paper has strong empirical results, the theoretical justification for why representational disentanglement works better could be expanded. 3. Computational requirements: The paper doesn't discuss the computational efficiency or training time relative to other approaches. 4. Clinical validation: For the medical applications, there's no discussion of clinical validation by medical experts beyond quantitative metrics. The paper seems well-written with clear organization, and the methods and experiments are thoroughly described. The results show significant improvements over state-of-the-art methods in both image completion and segmentation tasks.
You are a member of the scientific community tasked with peer review. Review the following paper content. ### Paper Content Title: REPRESENTATIONAL DISENTANGLEMENT FOR MULTI-DOMAIN IMAGE COMPLETION Abstract: Multi-domain data are widely leveraged in vision applications to take advantage of complementary information from each modality, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to different imaging protocol and data loss or corruption, the availability of images in all domains could vary amongst multiple data sources in practice, which makes it challenging to train and test a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the possible missing domain of the input data in a variety of application settings. Specifically, we develop a novel generative adversarial network (GAN) architecture that utilizes a representational disentanglement scheme for shared 'skeleton' encoding and separate 'flesh' encoding across multiple domains. We further illustrate that the learned representation in the multi-domain image translation could be leveraged for higher-level recognition, like segmentation. Specifically, we introduce a unified framework of image completion branch and segmentation branch with a shared content encoder. We demonstrate constant and significant performance improvement by integrating the proposed representation disentanglement scheme in both multi-domain image completion and image segmentation tasks using three evaluation datasets individually for brain tumor segmentation, prostate segmentation, and facial expression image completion. 1. INTRODUCTION: Multi-domain images are often required as inputs due to the nature that different domains could provide complementary knowledge in various vision tasks. For example, four medical imaging modalities, MRI with T1, T1-weighted, T2-weighted, FLAIR (FLuid-Attenuated Inversion Recovery), are acquired as a standard protocol to segment the accurate tumor regions for each patient in the brain tumor segmentation task (Menze et al., 2014) . Each modality can provide distinct features to locate the true tumor boundaries from a differential diagnosis perspective. Additionally, when it comes to the natural image tasks, there are similar scenarios such as person re-identification across different cameras and different times (Zheng et al., 2015; 2019) . Here, the medical images in different contrast modalities or natural images with the person under different appearance or cameras can both be considered as different domains, which all contribute to depict the underlying object or scene from different aspects of view. However, some domains might be missing in practice. In large-scale datasets from multiple institutes, it is generally difficult or even infeasible to guarantee the availability of complete domains for all the data. For example, in some cases, the patients might lack some imaging scans due to different imaging protocol or data loss or corruption. In terms of taking the most advantage of all these rare and valuable data, it is costly to just throw away the incomplete samples during training, and even infeasible to test with missing domain input. Thus, it becomes necessary to effectively artificially generate the missing data. 
An intuitive approach is to substitute the missing domains with the nearest neighbor among other existing samples, but this might lack of semantic consistency among domains of the input sample since it only focuses on pixel-level similarity with existing data. Additionally, the recent success of GANs (Goodfellow et al., 2014; Mirza & Osindero, 2014; Isola et al., 2017; Zhu et al., 2017a; b; Kim et al., 2017; Liu et al., 2017; Choi et al., 2018; Yoon et al., 2018; Lee et al., 2019a) in image-to-image translation provides another possible solution for this challenge. By training a GAN-based model to generate images for missing domains, we are more likely to keep the semantic consistency with existing domains by learning the semantic representations. CycleGAN (Zhu et al., 2017a) shows an impressive performance in the image-to-image translation task via cycle-consistency constraints between real images and generated images. However, CycleGAN mainly focuses on the 1-to-1 mapping between two specific domains and assumes the corresponding images in two domains strictly share the same representation in latent space. This is limited in multi-domain applications since n(n -1) mapping functions are required if there are n domains. Following this, StarGAN (Choi et al., 2018) proposes a general method for multi-domain image translation using a mask vector in inputs to specify the desired target domain. Meantime, RadialGAN (Yoon et al., 2018) also deals with the multi-domain generation problem with the assumption that all the domains share the same latent space. Although StarGAN and RadialGAN make it possible to generate images in different target domains through the 1-to-n translation, the representation learning and image generation are always conditioned on the input with only one source domain. In order to take advantage of multiple available domains as inputs for representation learning, CollaGAN (Lee et al., 2019a) proposes a collaborative model to incorporate multiple available domains to generate one missing domain. Similar to StarGAN, CollaGAN only relies on the cycle-consistency constraints to preserve the contents in generated images, which are actually implicit constraints between real images and fake images in pixel level. Additionally, since the target domain is controlled by an one-hot mask vector in the input, CollaGAN is essentially doing n-to-1 translation in one inference. Our goal in this work is to propose a more general n-to-n image translation framework that could overcome the aforementioned limitations as illustrated in the Fig. 1 . Recently, learning disentangled representation is proposed to capture the full distribution of possible outputs by introducing a random style code (Chen et al., 2016; Higgins et al., 2017; Huang et al., 2018; Lee et al., 2018) . InfoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017) learn the disentangled representation without supervision. In image-to-image translation tasks, DIRT (Lee et al., 2018) learns disentangled content and attribute features by exchanging the features encoded from images of two domains respectively and then reverting back again. The image consistency during translation is constrained by code reconstruction and image reconstruction. With a similar code exchange framework, MUNIT (Huang et al., 2018) assumes a prior distribution on style code, which allows sampling different style codes to generate multiple images in target domain. 
However, both DIRT and MUNIT only deal with image translation between two domains, which requires training n(n-1) mapping functions in an n-domain task and is therefore inefficient.

Inspired by previous works, we propose an n-to-n multi-domain image translation approach based on a representational disentanglement scheme for multi-domain image completion (ReMIC). Specifically, our contributions are three-fold: (1) We propose a GAN architecture with representation disentanglement to learn domain-shared features and domain-specific features for more general and flexible n-to-n image translation; (2) We illustrate the effectiveness of the learned representation for high-level tasks by building a unified framework for jointly learning generation and segmentation with a shared content encoder; (3) Extensive experiments on three different datasets demonstrate that the proposed method achieves better performance than previous state-of-the-art methods on both generation and segmentation.

2. METHOD:

As discussed above, images from different domains of the same data sample present their exclusive features of the data subject. Nonetheless, they also inherit some global content structures of the underlying subject. Let us take parametric MRI for brain tumors as an example. T2 and FLAIR MRI highlight differences in the water relaxational properties of tissues, which distinguish tumor tissue from normal tissue. Contrast-enhanced T1 MRI can reveal the pathological intratumoral uptake of contrast agents, so the boundary between the tumor core and the rest is highlighted. However, the underlying anatomical structure of the brain is shared by all these modalities. With the availability of multi-domain data, it is therefore meaningful to decompose the images into the shared content structure (skeleton) while distinguishing and modeling their unique characteristics (flesh) through learning. In this way, we are able to reconstruct a missing image at test time by using the shared skeleton (extracted from the available data domains) and a flesh code sampled from the learned model. Without assuming a fixed number of missing domains during model training, the learned framework can flexibly handle one or more missing domains. In addition, we further enforce the accuracy of the extracted content structure by connecting it to the segmentation task. In such a manner, the disentangled representations of the multi-domain images (both the skeleton and the flesh) help in both the image completion and segmentation tasks.

Figure 2: Overview of the proposed n-to-n multi-domain completion and segmentation framework. n = 4 and two domains are missing in this example. Our model contains a unified content encoder E^c (red lines), domain-specific style encoders E_i^s (1 ≤ i ≤ n, orange lines), and generators G_i (1 ≤ i ≤ n, blue lines). A variety of losses are adopted (burgundy lines), i.e., an image consistency loss for visible domains, a latent consistency loss, an adversarial loss, and a reconstruction loss for the generated images. Furthermore, the representational learning framework can flexibly incorporate a segmentation generator G_S following the content code for unified image generation and segmentation.

Suppose there are n domains: {χ_1, χ_2, ..., χ_n}. Let x_1 ∈ χ_1, x_2 ∈ χ_2, ..., x_n ∈ χ_n be images from the n different domains, which form grouped data describing the same subject x = {x_1, ..., x_n}. In total, we assume the whole dataset contains M independent data samples.
For each data sample, we assume one or more of the n images might be randomly missing. The goal of our first task is to complete all the missing domains for all the samples.

To complete the missing domains from a random set of available domains, we assume that the n domains share a latent representation of the underlying structure. We name this shared latent representation the content code, while each domain also carries a domain-specific latent representation, i.e., a style code related to the various features or attributes of that domain. The missing domains can be reconstructed from these two aspects of information through the learning of deep neural networks. Similar to the setting in MUNIT (Huang et al., 2018), we assume a prior distribution N(0, I) for the style latent code to capture the full distribution of possible outputs in each domain. However, MUNIT trains a separate content encoder for each domain and enforces the disentanglement via coupled cross-domain translation during training, while the proposed method employs a single content encoder to extract the representation shared across all the domains.

2.1. UNIFIED IMAGE COMPLETION AND SEGMENTATION FRAMEWORK:

As shown in Figure 2, our model contains a unified content encoder E^c and domain-specific style encoders E_i^s (1 ≤ i ≤ n). The content encoder E^c extracts the content code c from all existing domains: E^c(x_1, x_2, ..., x_n) = c. For the missing domains, we use zero padding in the corresponding input channels. For each domain, a style encoder E_i^s learns the domain-specific style code s_i from the corresponding domain image x_i (1 ≤ i ≤ n): E_i^s(x_i) = s_i. During training, our model captures the content code c and the style codes s_i (1 ≤ i ≤ n) (assumed to follow a prior distribution N(0, I), as shown in Figure 2) through the encoding process (red and orange arrows, respectively) with a random set of input images (green box). We only need to train a single ReMIC model to generate all the missing images from any set of available domains.

In the generation process, our model samples style codes from the prior distribution and recombines them with the content code to generate images in the n domains through generators G_i (1 ≤ i ≤ n) (blue arrows). The generator G_i of each domain produces the fake image of that domain from the domain-shared content code and the domain-specific style code: G_i(c, s_i) = x̂_i.

As discussed before, it is fairly common to encounter missing domains in the input data. Straightforward solutions to complete the missing images include zero filling, average images, or generating images via an image translation model. Alternatively, based on the content code learned in our model, we can develop a unified model for multi-task learning of both generation and segmentation. Specifically, a segmentation generator G_S is added after the content encoder to generate the segmentation mask. By optimizing the generation losses and the segmentation Dice loss (detailed in Section 2.2) simultaneously, the model can adaptively learn how to generate missing images that promote the segmentation performance.
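To make the encoding and generation process above concrete, a minimal PyTorch-style sketch is given below. The toy module definitions, tensor shapes, and variable names are illustrative assumptions, not the authors' implementation; the point is only the handling of zero-filled missing inputs and the combination of a shared content code with per-domain style codes.

```python
# Minimal sketch of a ReMIC-style forward pass (illustrative, not the authors' code).
import torch
import torch.nn as nn

n_domains, style_dim = 4, 8

# Toy stand-ins for the real encoders/decoders described in the paper.
content_encoder = nn.Conv2d(n_domains, 256, kernel_size=3, padding=1)        # E^c: all domains -> content code
style_encoders = nn.ModuleList(
    [nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, style_dim))
     for _ in range(n_domains)])                                              # E^s_i: one domain -> style code
generators = nn.ModuleList(
    [nn.Conv2d(256 + style_dim, 1, kernel_size=3, padding=1)
     for _ in range(n_domains)])                                              # G_i: (content, style) -> image

def forward_pass(images, available):
    """images: list of n tensors (B,1,H,W) or None; available: list of bools."""
    B, _, H, W = next(x for x in images if x is not None).shape
    # Missing domains are zero-filled before being stacked as input channels of E^c.
    stacked = torch.cat(
        [x if ok else torch.zeros(B, 1, H, W) for x, ok in zip(images, available)], dim=1)
    c = content_encoder(stacked)                                              # shared content code
    outputs = []
    for i in range(n_domains):
        if available[i]:
            s_i = style_encoders[i](images[i])                                # encode style from the real image
        else:
            s_i = torch.randn(B, style_dim)                                   # sample style from N(0, I)
        s_map = s_i.view(B, style_dim, 1, 1).expand(B, style_dim, H, W)       # naive style injection for this sketch
        outputs.append(generators[i](torch.cat([c, s_map], dim=1)))
    return c, outputs

imgs = [torch.randn(2, 1, 64, 64), None, torch.randn(2, 1, 64, 64), None]
content, fakes = forward_pass(imgs, [True, False, True, False])
```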
2.2. TRAINING LOSS:

In training GAN models, the design of the losses is of paramount importance for the final generation results. Our objective contains image and latent (code) consistency losses within and across domains, an adversarial loss, and a reconstruction loss on the generated images.

Image Consistency Loss: For each sample, the content and style encoders extract a domain-shared content code and a domain-specific style code for each available domain. By recombining the content and style codes, the domain generators are expected to reconstruct the input image of each domain. The image consistency loss constrains the reconstructed and real images along the "Image → Code → Image" flow chart:

L_{cyc}^{x_i} = \mathbb{E}_{x_i \sim p(x_i)} \left[ \left\| G_i\big(E^c(x_1, x_2, \dots, x_n), E_i^s(x_i)\big) - x_i \right\|_1 \right]   (1)

where p(x_i) is the data distribution in domain χ_i (1 ≤ i ≤ n).

Latent Consistency Loss: Next, the generated fake images can also be encoded into content and style codes with the same encoders. The latent consistency loss constrains the codes before decoding and after re-encoding along the "Code → Image → Code" direction:

L_{cyc}^{c} = \mathbb{E}_{c \sim p(c),\, s_1 \sim p(s_1), \dots, s_n \sim p(s_n)} \left[ \left\| E^c\big(G_1(c, s_1), \dots, G_n(c, s_n)\big) - c \right\|_1 \right]   (2)

L_{cyc}^{s_i} = \mathbb{E}_{c \sim p(c),\, s_i \sim p(s_i)} \left[ \left\| E_i^s\big(G_i(c, s_i)\big) - s_i \right\|_1 \right]   (3)

where p(s_i) is the prior distribution N(0, I), and p(c) is given by c = E^c(x_1, x_2, \dots, x_n) with x_i ∼ p(x_i) (1 ≤ i ≤ n).

Adversarial Loss: Adversarial learning between the generators and discriminators forces the distribution of the generated fake images to be close to the distribution of the real images in each domain:

L_{adv}^{x_i} = \mathbb{E}_{x_i \sim p(x_i)} \left[ \log D_i(x_i) \right] + \mathbb{E}_{c \sim p(c),\, s_i \sim p(s_i)} \left[ \log\big(1 - D_i(G_i(c, s_i))\big) \right]   (4)

where D_i is the discriminator for domain i that distinguishes the generated fake images x̂_i from the real images x_i ∈ χ_i.

Reconstruction Loss: In addition to the feature-level consistency above, which relates the generated fake images and the real images across domains, we also constrain the pixel-level similarity between the generated images and the ground-truth images of the same domain during training:

L_{recon}^{x_i} = \mathbb{E}_{x_i \sim p(x_i),\, s_i \sim p(s_i)} \left[ \left\| G_i\big(E^c(x_1, x_2, \dots, x_n), s_i\big) - x_i \right\|_1 \right]   (5)

Total Loss: The encoders, generators, and discriminators are jointly trained to optimize the total objective

\min_{E^c, E_1^s, \dots, E_n^s, G_1, \dots, G_n}\ \max_{D_1, \dots, D_n}\ \mathcal{L}(E^c, E_1^s, \dots, E_n^s, D_1, \dots, D_n) = \lambda_{adv} \sum_{i=1}^{n} L_{adv}^{x_i} + \lambda_{cyc}^{x} \sum_{i=1}^{n} L_{cyc}^{x_i} + \lambda_{cyc}^{c} L_{cyc}^{c} + \lambda_{cyc}^{s} \sum_{i=1}^{n} L_{cyc}^{s_i} + \lambda_{recon} \sum_{i=1}^{n} L_{recon}^{x_i}   (6)

where \lambda_{adv}, \lambda_{cyc}^{x}, \lambda_{cyc}^{c}, \lambda_{cyc}^{s}, and \lambda_{recon} are loss weights.

In the n-to-n image translation task, the model learns a complementary representation of multiple domains, which can also facilitate high-level recognition tasks. For example, the extracted content code (containing the underlying anatomical structures) can largely benefit the segmentation of organs and lesions in medical image analysis. On the other side, the segmentation task enforces the learning of a more representative content encoder. Therefore, we train a multi-task network for both segmentation and generation. In the proposed framework, we construct a unified generation and segmentation model by adding a segmentation generator G_S following the content code of the generated images, as shown in Fig. 2. We utilize the Dice loss (Salehi et al., 2017) for accurate segmentation of multiple input images:

L_{seg} = 1 - \frac{1}{L} \sum_{l=1}^{L} \frac{\sum_{p} \hat{y}_p(l)\, y_p(l)}{\sum_{p} \big( \hat{y}_p(l) + y_p(l) - \hat{y}_p(l)\, y_p(l) \big)}   (7)

where L is the total number of classes, p indexes pixel positions in the image, ŷ is the predicted segmentation probability map from G_S, and y is the ground-truth segmentation mask. The segmentation loss can be added to the total loss in Equation 6 for end-to-end joint learning.
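As a reference, the multi-class soft Dice loss of Eq. 7 could be implemented roughly as follows. This is a sketch: the variable names, the additional summation over the batch, and the smoothing constant `eps` are our own additions, not part of Eq. 7.

```python
import torch

def soft_dice_loss(y_pred, y_true, eps=1e-6):
    """Soft Dice loss in the spirit of Eq. 7.
    y_pred: predicted probabilities, shape (B, L, H, W); y_true: one-hot masks, same shape."""
    dims = (0, 2, 3)                                   # sum over batch and spatial positions
    intersection = (y_pred * y_true).sum(dims)
    union = (y_pred + y_true - y_pred * y_true).sum(dims)
    dice_per_class = (intersection + eps) / (union + eps)
    return 1.0 - dice_per_class.mean()                 # average over the L classes

# Toy example: 3 tumor classes on a batch of two 128x128 slices.
probs = torch.softmax(torch.randn(2, 3, 128, 128), dim=1)
target = torch.nn.functional.one_hot(torch.randint(0, 3, (2, 128, 128)), 3).permute(0, 3, 1, 2).float()
print(soft_dice_loss(probs, target))
```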
3. EXPERIMENTAL RESULTS AND DISCUSSION:

We first show the advantage of our proposed method in the n-to-n image completion task given a random set of available domains. Moreover, we illustrate that the proposed model (a variation with two branches) provides an efficient solution to multi-domain segmentation with missing image inputs. To demonstrate the generalizability of the proposed algorithm, we evaluate it on a natural image dataset as well as two medical image datasets. Three datasets are employed in total, i.e., BraTS, ProstateX, and RaFD.

ProstateX: The ProstateX dataset (Litjens et al., 2014) contains multi-parametric prostate MR scans from 98 subjects. Here, we use three modalities of each sample: 1) T2-weighted (T2), 2) Apparent Diffusion Coefficient (ADC), and 3) high b-value DWI (HighB) images. We randomly divide the dataset into 78 and 20 subjects for training and testing, respectively. Similar to BraTS, 2D slices are extracted from the 3D volumes of each modality. In total, the training and testing sets contain 3,540 and 840 images, respectively. We resize the images from 384 × 384 to 256 × 256. Prostate regions are manually labeled as the whole prostate (WP) by board-certified radiologists.

BraTS: The Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018 (Menze et al., 2014; Bakas et al., 2017; 2018) provides a set of multi-institutional multimodal brain MRI scans with four modalities: a) native (T1), b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR). Following the setting in (Lee et al., 2019b), 218 and 28 subjects are randomly selected for the training and test sets. The 2D slices at the same location are extracted from the 3D MRI volumes of the four modalities as one data sample in our experiments. In total, the training and testing sets contain 40,148 and 5,340 images, respectively. We resize the images from 240 × 240 to 256 × 256. Three labels are given for brain tumor segmentation, i.e., enhancing tumor (ET), tumor core (TC), and whole tumor (WT).

RaFD: The Radboud Faces Database (RaFD) (Langner et al., 2010) contains eight facial expressions collected from 67 participants: neutral, angry, contemptuous, disgusted, fearful, happy, sad, and surprised. We adopt images from three camera angles (45°, 90°, 135°) along with three gaze directions (left, frontal, right), 4,824 images in total. We treat the eight paired images in the eight expression domains as one data sample. 54 participants (3,888 images) and 13 participants (936 images) are randomly divided into the training and testing sets. Following the setting in StarGAN (Choi et al., 2018), we crop each image with the face in the center and then resize it to 128 × 128.

3.1. RESULTS OF MULTI-DOMAIN IMAGE COMPLETION:

For comparison purposes, we first assume there is always exactly one domain missing for each sample, and the algorithm is evaluated multiple times with a different domain missing each time. Then we investigate a more general scenario with more than one missing domain and show that our proposed method is capable of handling general random n-to-n image completion. Multiple quantitative metrics are used to evaluate the similarity between the generated image and the original one, i.e., the normalized root mean-squared error (NRMSE) and the mean structural similarity index (SSIM) between generated and target images. We compare our results with prior methods on all three datasets for this task. The results of the proposed method ("ReMIC"), ReMIC without the reconstruction loss ("ReMIC w/o Recon"), and ReMIC with cross-domain translation ("ReMIC+Multi-Sample", more details in Appendix C.2) are reported. Note that, by leveraging the unified content code and sampling the style code for each domain, the proposed model can handle any number of missing domains, which makes it more general and flexible for random n-to-n image completion. The compared methods, in contrast, have their own limitations.

MUNIT (Huang et al., 2018) conducts 1-to-1 image translation between two domains through representational disentanglement, as shown in Fig. 1(a). In our experiments, we train and test MUNIT models between every pair of domains. Without loss of generality, we select "neutral" images to generate all the other domains following the StarGAN setting, and the "angry" image is used to generate the "neutral" one.
In BraTS, "T1" is selected to generate the other domains since it is the most common modality, while "T1" itself is generated from "T1Gd". Similarly, "T2" is selected to generate the other domains in ProstateX, while "T2" is generated from "ADC".

StarGAN (Choi et al., 2018) adopts a mask vector to generate an image in the specified target domain. Thus, different target domains can be generated from one source domain in multiple inference passes while only a single model is trained. This is effectively 1-to-n image translation. Since only one domain can be used as input in StarGAN, we use the same domain pairing as for MUNIT, which also matches the setting in the StarGAN paper (Choi et al., 2018).

CollaGAN (Lee et al., 2019a; b) carries out n-to-1 image translation as shown in Fig. 1(b), where multiple source domains collaboratively generate the single target domain that is assumed missing in the inputs. However, CollaGAN cannot deal with multiple missing domains. In the CollaGAN experiments, we use the same domain generation setting as ours, that is, all the existing domains are used to complete the one missing domain in sequence.

Results of Medical Image Completion: Fig. 3 and Fig. 5 in the appendix show sample image completion results on BraTS and ProstateX, and the corresponding quantitative results are in Table 1. Each row shows the target and generated images of one domain under the assumption that this domain is missing in the inputs. In comparison, our model achieves higher similarity to the targets and also produces qualitatively more accurate images, e.g., more accurately delineated tumor regions in BraTS and well-preserved prostate regions in ProstateX. This is achieved by learning a better content code through the factorized latent space in our method, which is essential for preserving the anatomical structures in medical images. Extended quantitative evaluation metrics are provided in the Appendix.

Results of Facial Expression Image Translation: Fig. 4 shows a sample result of facial expression image completion on the RaFD dataset. In each column, we show the target and generated images of one domain (expression), where we assume the target domain is missing in the inputs and needs to be generated from one or more available domains. In this way, we evaluate the missing domains one by one. Compared with the MUNIT and StarGAN results, our method generates missing images with better quality, especially in details like teeth, mouth, and eyes. This is partially because our method can incorporate complementary information from multiple domains, while MUNIT and StarGAN can only accept one source domain as input. For example, when generating the "happy" and "disgusted" expressions, neither MUNIT nor StarGAN can produce good teeth and mouth regions, since their source domain "neutral" does not show the teeth. Compared with CollaGAN, our method generates images with better content through explicit disentangled representation learning at the feature level instead of implicit cycle-consistency constraints at the pixel level. The superior performance can also be observed in the NRMSE and SSIM values across all testing samples with all eight expressions in Table 2.
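For reference, the NRMSE and SSIM values reported above can be computed per image with scikit-image; the snippet below is only a sketch, with hypothetical array names and a data range derived from the target image.

```python
import numpy as np
from skimage.metrics import normalized_root_mse, structural_similarity

def completion_metrics(generated, target):
    """generated, target: 2D float arrays of the same shape, e.g. a completed MRI slice and its ground truth."""
    nrmse = normalized_root_mse(target, generated)                            # lower is better
    data_range = target.max() - target.min()
    ssim = structural_similarity(target, generated, data_range=data_range)    # higher is better
    return nrmse, ssim

# Toy example with random images standing in for a generated/target pair.
rng = np.random.default_rng(0)
fake, real = rng.random((256, 256)), rng.random((256, 256))
print(completion_metrics(fake, real))
```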
Specifically, our model learns factorized representations by disentangling latent space, which could be efficiently leveraged for high-level tasks. As shown in Fig. 2 , a segmentation branch is added after the learned content codes to generate segmentation prediction mask. We evaluate the segmentation accuracy with Dice coefficient on both BraTS and ProstateX datasets. The results of unified image completion and segmentation model as shown in Table 8 achieve the best dice score in both BraTS and ProstateX datasets. We train a fully supervised 2D U-shaped segmentation network (a U-net variation, Ronneberger et al. (2015) ) without missing images as the "Oracle". "Oracle+X" means that the results are computed by testing the missing images generated from the "X" method with the pretrained "Oracle" model. "All" represents the full testing set without any missing domains. "ReMIC+Seg" stands for using separate content encoders for generation and segmentation tasks in our proposed unified framework, while "ReMIC+Joint" indicates sharing the weights of content encoder for the two tasks. All methods obtain similar segmentation performances in ProstateX, but ReMIC is still relative closer to the Oracle results. In a more difficult segmentation task (like ET segmentation in BraTS), our proposed method shows significant improvement over other compared methods and even superior to the Oracle in some cases. 4. CONCLUSION: In this work, we propose a general framework for multi-domain image completion, given that one or more input images are missing. We learn shared content and domain-specific style encodings across multiple domains via the representational disentanglement. We also design several loss functions for accurate image generation, including image consistency loss, latent consistency loss, adversarial loss, and reconstruction loss. Our framework is flexible and can be easily extended to a unified generation and segmentation network. Extensive experiments validate the proposed method surpasses baseline methods and previous state-of-the-art methods on both multi-domain image generation and segmentation with missing domains. A EXTENDED RESULTS FOR MULTI-DOMAIN IMAGE COMPLETION: A.1 QUALITATIVE RESULTS OF IMAGE COMPLETION FOR PROSTATEX Due to the page limit, we put the quantitative results of image completion for ProstateX dataset here. As shown in Fig. 5 , our method could generate images much closer to target images in all the three domains compared with other methods. We demonstrate the quantitative results for multi-domain image completion with two more evaluation metrics: Mean Absolute Error (MAE), and Peak Signal-to-Noise Ratio (PSNR) for all three datasets. As shown in this section, Table 4 , Table 5 , and Table 6 are the extended full tables for Table 1 and Table 2 in the main text. The best results are in bold for each domain. It is shown that our method is able to generate images not only closer to the target images (lower MAE, NRMSE, and higher SSIM), but also with higher image quality (higher PSNR), from the results of three datasets. In addition, for BraTS and ProstateX data, the generated images in the jointly trained generationsegmentation model are evaluated using the same metrics as shown in Table 4 and Table 5 . The results indicate that adding segmentation branch does not bring an obvious benefit for image generation. This is because the segmentation sub-module mainly focuses on the tumor region which takes up only a small part among the whole slice image. 
Besides, we use the Dice loss as the segmentation training objective, which may not be consistent with the metrics used to evaluate the generated images, since those emphasize pixel-level similarity.

B EXTENDED RESULTS FOR IMAGE SEGMENTATION WITH MISSING DOMAIN:

Based on the missing-domain image completion, we show that our proposed method can go beyond image translation to solve the missing-domain segmentation problem. Specifically, our model learns factorized representations by disentangling the latent space, which can be efficiently leveraged for high-level tasks.

C EXTENDED RESULTS FOR RANDOM MULTI-DOMAIN IMAGE COMPLETION:

C.1 RANDOM N-TO-N IMAGE COMPLETION ON RAFD: Among the eight domains, we assume that each sample contains at least one existing domain at random. In other words, there are possibly 0∼7 missing domains for each training sample. This is a very challenging setting where the number and the choice of missing domains are totally random. This difficult problem cannot be solved by existing works, and to the best of our knowledge, our proposed method is the first that can achieve random n-to-n image translation. In testing, we evaluate the model with different numbers of input domains. We show the results of three testing samples in Figs. 6∼8. The top half of each column shows the input domain(s), where the missing domains are filled with zeros. The bottom half of each column shows the generation results for all eight domains, regardless of whether a domain exists in the inputs or not. Firstly, in our method, no matter how many and which domains are available, the model can generate images for all the domains, including missing ones, in a single inference pass. Comparing the results in each row, we can see that the domain-specific style and the domain-shared content are preserved well even when only one domain is available. In addition, as the number of visible domains increases, the content of each domain image is gradually enhanced and gets closer to the target image. This illustrates that our model efficiently learns a better content code through representation disentanglement as the source domains provide more complementary information.

C.2 MULTI-SAMPLE LEARNING:

Based on the proposed model in Fig. 2, we further propose a more advanced model and training strategy when multiple samples are input at one time. Generally, based on the assumption of a partially shared latent space, we assume that the factorized latent codes can independently represent the corresponding content and style information of the input images. Then, after exchanging the content codes of two independent samples, the original input images should still be reconstructable by recombining the exchanged content codes with the corresponding style codes. Based on this idea, we build a comprehensive model with cross-domain training between two samples. Similar to the framework in Fig. 2, the image and code consistency losses and the image reconstruction loss are also imposed through the encoding and decoding procedure. The results of multi-sample learning are reported in Table 1 as "ReMIC+Multi-Sample".

C.3 ANALYSIS OF MISSING-DOMAIN SEGMENTATION RESULTS:

To better understand why our method is a better solution for missing-domain imputation in multi-domain recognition tasks like segmentation, we demonstrate three randomly selected testing samples from the BraTS dataset in Fig. 9. Rows 1∼3 show the first sample, and the other two samples are shown in the same format. In the first sample, the first row shows the real images in four domains and the ground-truth segmentation labels.
When some domain of this sample is randomly missing, a straightforward solution is to search through all the available training data and find the nearest neighbor (NN) to complete the missing image. We search for the nearest neighbor according to the Euclidean distance in the 2D image space. The second row shows the images in four domains and the segmentation map of the NN sample, which actually looks very similar to the target sample visually. However, the tumor region differs substantially between the target sample and its NN sample, which shows that the NN image is not a good imputation of the missing domain in terms of semantics. In contrast, our proposed method generates images for the missing domains with not only pixel-level similarity but also similar tumor regions, which are the most significant parts of the tumor segmentation task. As shown in the third row, the generated images in the four domains are very close to the target images. The segmentation map shows the prediction results when the generated T1 image is used, which gives a segmentation mask very close to the ground truth. These results illustrate the superiority of our method, which stems from the disentangled representation learning at the feature level.

D IMPLEMENTATION DETAILS:

Here, we describe the implementation details of our method. We will open-source all the code and models upon acceptance.

D.1 HYPERPARAMETERS:

In our algorithm, we use the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.5 and β2 = 0.999. The learning rate is 0.0001. We set the loss weights in Equation 6 as λ_adv = 1, λ^x_cyc = 10, λ^c_cyc = 1, λ^s_cyc = 1, and λ_recon = 20. For comparison purposes, we train the models for 150,000 / 100,000 / 100,000 iterations on the BraTS, ProstateX, and RaFD datasets, respectively, and compare the results of MUNIT, StarGAN, CollaGAN, and our ReMIC on all datasets. In ReMIC, we set the dimension of the style code to 8 for comparison with MUNIT. For image generation during testing, we use a fixed style code of 0.5 in each dimension for both MUNIT and ReMIC to compute the quantitative results.

D.2 NETWORK ARCHITECTURES:

The network structure of ReMIC is developed on the backbone of the MUNIT model. We describe the details of each module here.

Unified Content Encoder: consists of a down-sampling module and residual blocks to extract contextual knowledge from all available domain images in the inputs. The down-sampling module contains a 7 × 7 convolutional block with stride 1 and 64 filters, and two 4 × 4 convolutional blocks with stride 2 and 128 and 256 filters, respectively. The strided convolutional layers downsample the input to feature maps of size W/4 × H/4 × 256, where W and H are the width and height of the input image. Next, there are four residual blocks, each of which contains two 3 × 3 convolutional blocks with 256 filters and stride 1. We apply Instance Normalization (IN) (Ulyanov et al., 2017) after all the convolutional layers. Note that the proposed unified content encoder accepts images of all domains as input (missing domains are filled with zeros at initialization) and learns a universal content code complementarily and collaboratively, which is different from MUNIT.
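Read literally, the unified content encoder described above could be sketched in PyTorch as follows. The padding choices, activation functions, and residual-block details are assumptions where the text is silent; only the filter counts, kernel sizes, and strides come from the description.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 3x3 convs with 256 filters, stride 1, and Instance Norm (skip/activation assumed)."""
    def __init__(self, ch=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class UnifiedContentEncoder(nn.Module):
    """E^c: takes all n domains stacked as channels (missing ones zero-filled)."""
    def __init__(self, n_domains=4):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(n_domains, 64, 7, stride=1, padding=3), nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.InstanceNorm2d(256), nn.ReLU(inplace=True))
        self.res = nn.Sequential(*[ResBlock(256) for _ in range(4)])
    def forward(self, x):
        return self.res(self.down(x))        # content code of size (B, 256, W/4, H/4)

enc = UnifiedContentEncoder(n_domains=4)
print(enc(torch.randn(1, 4, 256, 256)).shape)    # torch.Size([1, 256, 64, 64])
```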
Style Encoder: contains a similar down-sampling module and several residual blocks, followed by a global average pooling layer and a fully connected layer to learn the style vector. The down-sampling module is developed with the same structure as that of the unified content encoder above, followed by two more 4 × 4 convolutional blocks with stride 2 and 256 filters. The final fully connected layer generates the style code as an 8-dimensional vector. No IN is applied in the style encoder.
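Putting the hyperparameters of D.1 together, an optimizer and loss-weight configuration might look like the sketch below. The split into generator and discriminator parameter groups, the placeholder parameters, and the discriminator learning rate are assumptions; the weights, Adam settings, iteration counts, and test-time style code come from D.1.

```python
import torch

# Loss weights of Equation 6, as reported in D.1.
loss_weights = dict(adv=1.0, x_cyc=10.0, c_cyc=1.0, s_cyc=1.0, recon=20.0)

# Adam settings from D.1; placeholder parameters stand in for the real encoder/generator/discriminator weights.
gen_params = [torch.nn.Parameter(torch.randn(2, 2))]
dis_params = [torch.nn.Parameter(torch.randn(2, 2))]
gen_opt = torch.optim.Adam(gen_params, lr=1e-4, betas=(0.5, 0.999))
dis_opt = torch.optim.Adam(dis_params, lr=1e-4, betas=(0.5, 0.999))

max_iters = {"BraTS": 150_000, "ProstateX": 100_000, "RaFD": 100_000}

style_dim = 8                                  # style code dimension used for comparison with MUNIT
test_style = torch.full((1, style_dim), 0.5)   # fixed style code of 0.5 per dimension at test time
```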
Figure 6: Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images of one domain in the order of "neutral", "angry", "contemptuous", "disgusted", "fearful", "happy", "sad", "surprised".
Figure 7: Random multi-domain image completion results for a second sample, in the same format as Figure 6.
Figure 8: Random multi-domain image completion results for a third sample, in the same format as Figure 6.

Figures and Tables:

Figure 1: Image translation tasks using (a) StarGAN and MUNIT (1-to-1), (b) CollaGAN (n-to-1), (c) ReMIC (1-to-n), and (d) ReMIC (n-to-n). ReMIC completes the missing domain images, which can be randomly distributed (N-to-n, N ∈ {1, ..., n}) in the input set. This makes our approach a more general and flexible framework for image translation tasks.
Figure 3: BraTS image completion results. Rows: 4 modalities. Columns: compared methods.
Figure 5: ProstateX generation results. Rows: 1) T2, 2) ADC, 3) HighB. Columns: 1) Ground truth, 2) MUNIT, 3) StarGAN, 4) CollaGAN, 5) ReMIC.

Table 1: Multi-domain medical image completion results. Each cell reports NRMSE / SSIM.

(a) BraTS

| Methods | T1 | T1Gd | T2 | FLAIR |
| --- | --- | --- | --- | --- |
| MUNIT | 0.4054 / 0.8904 | 0.3469 / 0.9084 | 0.4185 / 0.8702 | 0.4563 / 0.8728 |
| StarGAN | 0.3028 / 0.9346 | 0.2795 / 0.9351 | 0.5137 / 0.8473 | 0.4417 / 0.8931 |
| CollaGAN | 0.4729 / 0.8951 | 0.4952 / 0.8689 | 0.5243 / 0.8890 | 0.4299 / 0.8616 |
| ReMIC w/o Recon | 0.3502 / 0.9328 | 0.2349 / 0.9448 | 0.4485 / 0.8681 | 0.4214 / 0.8810 |
| ReMIC | 0.1939 / 0.9629 | 0.2143 / 0.9546 | 0.2410 / 0.9469 | 0.2639 / 0.9369 |
| ReMIC+Multi-Sample | 0.2031 / 0.9637 | 0.2096 / 0.9563 | 0.2369 / 0.9490 | 0.2348 / 0.9395 |

(b) ProstateX

| Methods | T2 | ADC | HighB |
| --- | --- | --- | --- |
| MUNIT | 0.6904 / 0.4428 | 0.9208 / 0.4297 | 0.9325 / 0.5383 |
| StarGAN | 0.6638 / 0.4229 | 0.9157 / 0.3665 | 0.9188 / 0.4350 |
| CollaGAN | 0.8070 / 0.2667 | 0.7621 / 0.4875 | 0.7722 / 0.6824 |
| ReMIC w/o Recon | 0.8567 / 0.3330 | 0.7289 / 0.5377 | 0.8469 / 0.7818 |
| ReMIC | 0.4908 / 0.5427 | 0.2179 / 0.9232 | 0.3894 / 0.9150 |
| ReMIC+Multi-Sample | 0.4742 / 0.5493 | 0.2171 / 0.9263 | 0.3945 / 0.9116 |
Table 2: RaFD multi-expression translation results. Each cell reports NRMSE / SSIM.

| Methods | Neutral | Angry | Contemptuous | Disgusted |
| --- | --- | --- | --- | --- |
| MUNIT | 0.1589 / 0.8177 | 0.1637 / 0.8156 | 0.1518 / 0.8319 | 0.1563 / 0.8114 |
| StarGAN | 0.1726 / 0.8206 | 0.1722 / 0.8245 | 0.1459 / 0.8506 | 0.1556 / 0.8243 |
| CollaGAN | 0.1867 / 0.7934 | 0.1761 / 0.7736 | 0.1856 / 0.7928 | 0.1823 / 0.7812 |
| ReMIC w/o Recon | 0.1215 / 0.8776 | 0.1335 / 0.8556 | 0.1192 / 0.8740 | 0.1206 / 0.8559 |
| ReMIC | 0.1225 / 0.8794 | 0.1290 / 0.8598 | 0.1217 / 0.8725 | 0.1177 / 0.8668 |

| Methods | Fearful | Happy | Sad | Surprised |
| --- | --- | --- | --- | --- |
| MUNIT | 0.1714 / 0.7792 | 0.1623 / 0.8073 | 0.1677 / 0.7998 | 0.1694 / 0.7884 |
| StarGAN | 0.1685 / 0.7943 | 0.1522 / 0.8288 | 0.1620 / 0.8227 | 0.1634 / 0.7974 |
| CollaGAN | 0.1907 / 0.7442 | 0.1829 / 0.7601 | 0.1783 / 0.7766 | 0.1888 / 0.7495 |
| ReMIC w/o Recon | 0.1321 / 0.8384 | 0.1399 / 0.8332 | 0.1284 / 0.8597 | 0.1333 / 0.8347 |
| ReMIC | 0.1316 / 0.8395 | 0.1383 / 0.8406 | 0.1301 / 0.8581 | 0.1276 / 0.8484 |

Missing-domain segmentation: Dice scores are reported here.

BraTS results: Missing-domain imputation for the pretrained segmentation model (full table). Each cell reports Dice scores as WT / TC / ET.

2D (Oracle, no missing domains: 0.910 / 0.849 / 0.708)

| Methods | T1 | T1Gd | T2 | FLAIR |
| --- | --- | --- | --- | --- |
| Zero | 0.771 / 0.609 / 0.572 | 0.872 / 0.539 / 0.008 | 0.755 / 0.690 / 0.677 | 0.458 / 0.468 / 0.435 |
| Average | 0.870 / 0.744 / 0.674 | 0.882 / 0.603 / 0.303 | 0.849 / 0.732 / 0.686 | 0.655 / 0.710 / 0.648 |
| NN | 0.883 / 0.765 / 0.660 | 0.871 / 0.564 / 0.186 | 0.811 / 0.720 / 0.642 | 0.534 / 0.669 / 0.614 |
| MUNIT | 0.886 / 0.785 / 0.679 | 0.872 / 0.552 / 0.187 | 0.882 / 0.781 / 0.682 | 0.408 / 0.541 / 0.527 |
| StarGAN | 0.897 / 0.795 / 0.704 | 0.886 / 0.588 / 0.184 | 0.851 / 0.725 / 0.661 | 0.570 / 0.664 / 0.604 |
| CollaGAN | 0.860 / 0.747 / 0.651 | 0.864 / 0.576 / 0.252 | 0.882 / 0.811 / 0.700 | 0.663 / 0.697 / 0.663 |
| ReMIC | 0.909 / 0.834 / 0.714 | 0.899 / 0.669 / 0.354 | 0.905 / 0.855 / 0.709 | 0.853 / 0.807 / 0.691 |

3D (Oracle, no missing domains: 0.909 / 0.867 / 0.733)

| Methods | T1 | T1Gd | T2 | FLAIR |
| --- | --- | --- | --- | --- |
| Zero | 0.876 / 0.826 / 0.694 | 0.884 / 0.574 / 0.020 | 0.901 / 0.865 / 0.728 | 0.661 / 0.730 / 0.643 |
| Average | 0.880 / 0.814 / 0.640 | 0.854 / 0.618 / 0.282 | 0.838 / 0.801 / 0.695 | 0.713 / 0.732 / 0.675 |
| NN | 0.890 / 0.829 / 0.703 | 0.859 / 0.538 / 0.081 | 0.790 / 0.799 / 0.704 | 0.472 / 0.686 / 0.607 |
| ReMIC | 0.905 / 0.864 / 0.722 | 0.888 / 0.614 / 0.273 | 0.902 / 0.871 / 0.734 | 0.855 / 0.850 / 0.724 |

BraTS results: Missing-domain segmentation (full table). Each cell reports Dice scores as WT / TC / ET. Oracle (no missing domains): 0.910 / 0.849 / 0.708.

| Methods | T1 | T1CE | T2 | FLAIR |
| --- | --- | --- | --- | --- |
| Zero | 0.904 / 0.818 / 0.710 | 0.888 / 0.687 / 0.394 | 0.907 / 0.842 / 0.720 | 0.841 / 0.793 / 0.692 |
| Average | 0.905 / 0.798 / 0.685 | 0.898 / 0.603 / 0.312 | 0.897 / 0.803 / 0.663 | 0.846 / 0.768 / 0.663 |
| ReMIC | 0.908 / 0.783 / 0.676 | 0.897 / 0.685 / 0.382 | 0.901 / 0.815 / 0.698 | 0.851 / 0.779 / 0.665 |
| ReMIC+Seg | 0.911 / 0.819 / 0.687 | 0.902 / 0.708 / 0.411 | 0.910 / 0.839 / 0.716 | 0.850 / 0.792 / 0.671 |
| ReMIC+Joint | 0.915 / 0.859 / 0.710 | 0.910 / 0.740 / 0.430 | 0.909 / 0.858 / 0.716 | 0.860 / 0.823 / 0.691 |
The results indicate that adding segmentation branch does not bring an obvious benefit for image generation. This is because the segmentation sub-module mainly focuses on the tumor region which takes up only a small part among the whole slice image. Besides, we use dice loss as the segmentation training objective which might not be consistent with the metrics used to evaluate generated images that emphasize the pixel-level similarity.\n\nB EXTENDED RESULTS FOR IMAGE SEGMENTATION WITH MISSING DOMAIN:\n\nBased on the missing-domain image completion, we show that our proposed method could go beyond image translation to solve the missing-domain segmentation problem. Specifically, our model learns factorized representations by disentangling latent space, which could be efficiently leveraged the eight domains, we assume that each sample contains at least one existing domain randomly. In other words, there are possibly 0∼7 missing domains for each training sample. This is a very challenging setting where the number and the choice of missing domains are totally random. This difficult problem cannot be solved by existing works, and to our best knowledge, our proposed method is the first one that could achieve the random n-to-n image translation. In testing, we evaluate the model with different number of input domains. We show the results of three testing samples shown in Figs. 6∼ 8. The top half of each column shows the input domain(s), where the missing domains are filled with zeros. The bottom half of each column shows the generation results for all the eight domains no matter if it exists in input domains or not. Firstly, in our method, no matter how many and which domains are available, the model could generate images for all the domains including missing ones in an one-time inference. Comparing the results in each row, we could see that the domain-specific style and domain-shared content are all preserved well even when we push the limit of existing domain(s) to be only one. In addition, when the number of visible domains increase, the content in each domain image is enhanced gradually and gets closer to the target image. This illustrates that our model is efficiently learning a better content code through representation disentanglement as the source domains provide more complementary information.\n\nC.2 MULTI-SAMPLE LEARNING:\n\nBased on the proposed model in Fig. 2 , we further propose a more advanced model and training strategy when multiple samples are inputted at one time. Generally, based on the assumption of partially shared latent space, we assume that the factorized latent code can independently represent the corresponding content and style information in the input image. Then by exchanging the content codes from two independent samples, it should be able to reconstruct the original input image by recombining the style code extracted from the other sample. Based on this idea, we build a comprehensive model with cross-domain training between two samples. Similarly as the framework in Fig. 2 , the image and code consistency loss and image reconstruction loss are also constrained through the encoding and decoding procedure. 
The results of multi-sample learning are shown in Table 1 named as \"ReMIC+Multi-Sample\".\n\nC.3 ANALYSIS OF MISSING-DOMAIN SEGMENTATION RESULTS:\n\nTo better understand why our method is a better solution in missing-domain imputation for multidomain recognition tasks like segmentation, we demonstrate three randomly selected testing samples in BraTS dataset as shown in Fig. 9 . Rows 1∼3 shows the first sample, and the left two samples are shown in the same format. In the first sample, the first row shows real images in four domains and its ground truth segmentation labels. When some domain for this sample is randomly missing, a straightforward solution is to search through all the available training data and find the nearest neighbor (NN) to complete the missing image. We search the nearest neighbor according to the Euclidean distance in 2D image space. The second row shows the images in four domains and the segmentation map for the NN sample, which actually looks very similar to the target sample visually. However, we note that the tumor region is seriously different between the target sample and its NN sample, which shows the NN image is not a good missing imputation in semantics. To cope with this issue, our proposed method could generate images for missing domains with not only pixel-level similarity but also similar tumor regions, which are the most significant parts in tumor segmentation task. As shown in the third row, the generated images in four domains are very close to the target images. The segmentation map shows the prediction results when the generated T1 image is gives a segmentation mask very close to the ground truth. These results illustrate the superiority of our method, which results from the disentangled representation learning in feature level.\n\nD IMPLEMENTATION DETAILS:\n\nHere, we describe the implementation details of our method. We will open source all the source codes and models if get accepted.\n\nD.1 HYPERPARAMETERS:\n\nIn our algorithm, we use the Adam optimizer (Kingma & Ba, 2014) with β 1 = 0.5, β 2 = 0.999. The learning rate is 0.0001. We set the loss weights in Equation 6 as λ adv = 1, λ x cyc = 10, λ c cyc = 1, λ s cyc = 1, λ recon = 20. For comparison purpose, we train the model to 150,000 / 100,000 / 100,000 iterations for BraTS, ProstateX, and RaFD datasets respectively, and compare the results across MUNIT, StarGAN, CollaGAN, and ours ReMIC in all datasets. In ReMIC, we set the dimension of the style code as 8 for comparison purpose with MUNIT. For image generation during testing, we use a fixed style code of 0.5 in each dimension for both MUNIT and ReMIC to compute quantitative results.\n\nD.2 NETWORK ARCHITECTURES:\n\nThe network structures of ReMIC is developed on the backbone of MUNIT model. We describe the details of each module here.\n\nUnified Content Encoder: consists of a down-sampling module and residual blocks to extract contexture knowledge from all available domain images in inputs. The down-sampling module contains a 7 × 7 convolutional block with stride 1 and 64 filters, and two 4 × 4 convolutional blocks with stride 2 and, 128 and 256 filters respectively. The strided convolutional layers downsample the input to features maps of size W/4 × H/4 × 256, where W and H are the width and height of input image. Next, there are four residual blocks, each of which contains two 3 × 3 convolutional blocks with 256 filters and stride 1. We apply Instance Normalization (IN) (Ulyanov et al., 2017) after all the convolutional layers. 
Note that the proposed unified content encoder accepts images of all domains as input (missing domains are filled with zeros in initialization), and learns a universe content code complementarily and collaboratively, which are different from MUNIT.\n\nStyle Encoder: contains a similar down-sampling module and several residual blocks, which is followed by a global average pooling layer and a fully connected layer to learn the style vector. The down-sampling module is developped using the same structure as that in the unified content encoder above, and two more 4 × 4 convolutional blocks with stride 2 and 256 filters are followed. The final fully connected layer generates style code as a 8-dim vector. There is no IN applied to the Figure 6 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images in one domain in the order of \"neutral\", \"angry\", \"contemptuous\", \"disgusted\", \"fearful\", \"happy\", \"sad\", \"surprised\".\n\nFigure 7 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images in one domain in the order of \"neutral\", \"angry\", \"contemptuous\", \"disgusted\", \"fearful\", \"happy\", \"sad\", \"surprised\".\n\nFigure 8 : Random multi-domain image completion results. Rows 1-8 are input images, and rows 9-16 are generated images when different numbers of input images are given. Each column demonstrates the images in one domain in the order of \"neutral\", \"angry\", \"contemptuous\", \"disgusted\", \"fearful\", \"happy\", \"sad\", \"surprised\".\n\n:\n\n\n\nFigures and Tables:\n\nFigure 1: Image translation tasks using (a) Star-GAN and MUNIT (1-to-1), (b) CollaGAN (n-to-1), (c) ReMIC (1-to-n), and (d) ReMIC (n-ton). ReMIC completes the missing domain images that can be randomly distributed (N -to-n, N ∈ {1, ..., n}) in the input set. It makes our approach a more general and flexible framework for image translation tasks.\n\nThe Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018(Menze et al., 2014;Bakas et al., 2017; 2018) provides a set of multi-institutional multimodal brain MRI scans with four modalities: a) native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR). Following the setting in(Lee et al., 2019b), 218 and 28 subjects are randomly selected for training and test sets. The 2D slices at the same location are extracted from 3D MRI volumes for each of four modalities as one independent data sample in our experiments. In total, the training and testing datasets contain 40,148 and 5,340 images respectively. We resize the images of 240 × 240 to 256 × 256. Three labels are given for the brain tumor segmentation, i.e., enhancing tumor (ET), tumor core (TC), and whole tumor (WT).\n\nFigure 3: BraTS image completion results. Rows: 4 modalities. Columns: compared methods.\n\nFigure 5: ProstateX generation results. Rows: 1) T2, 2) ADC, 3) HighB. 
Columns: 1) Ground truth, 2) MUNIT, 3) StarGAN, 4) CollaGAN, 5) ReMIC\n\nMulti-domain Medical Image Completion Results\n\nTable content: <table><tr><td/><td/><td/><td colspan=\"2\">(a) BraTS</td></tr><tr><td>Methods</td><td colspan=\"5\">T1 NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM T1Gd T2 FLAIR</td></tr><tr><td>MUNIT</td><td colspan=\"2\">0.4054 / 0.8904</td><td colspan=\"2\">0.3469 / 0.9084</td><td>0.4185 / 0.8702</td><td>0.4563 / 0.8728</td></tr><tr><td>StarGAN</td><td colspan=\"2\">0.3028 / 0.9346</td><td colspan=\"2\">0.2795 / 0.9351</td><td>0.5137 / 0.8473</td><td>0.4417 / 0.8931</td></tr><tr><td>CollaGAN</td><td colspan=\"2\">0.4729 / 0.8951</td><td colspan=\"2\">0.4952 / 0.8689</td><td>0.5243 / 0.8890</td><td>0.4299 / 0.8616</td></tr><tr><td>ReMIC w/o Recon</td><td colspan=\"2\">0.3502 / 0.9328</td><td colspan=\"2\">0.2349 / 0.9448</td><td>0.4485 / 0.8681</td><td>0.4214 / 0.8810</td></tr><tr><td>ReMIC</td><td colspan=\"2\">0.1939 / 0.9629</td><td colspan=\"2\">0.2143 / 0.9546</td><td>0.2410 / 0.9469</td><td>0.2639 / 0.9369</td></tr><tr><td colspan=\"3\">ReMIC+Multi-Sample 0.2031 / 0.9637</td><td colspan=\"2\">0.2096 / 0.9563</td><td>0.2369 / 0.9490</td><td>0.2348 / 0.9395</td></tr><tr><td/><td/><td/><td colspan=\"2\">(b) ProstateX</td></tr><tr><td>Methods</td><td/><td colspan=\"4\">T2 NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM ADC HighB</td></tr><tr><td>MUNIT</td><td/><td colspan=\"2\">0.6904 / 0.4428</td><td colspan=\"2\">0.9208 / 0.4297</td><td>0.9325 / 0.5383</td></tr><tr><td>StarGAN</td><td/><td colspan=\"2\">0.6638 / 0.4229</td><td colspan=\"2\">0.9157 / 0.3665</td><td>0.9188 / 0.4350</td></tr><tr><td>CollaGAN</td><td/><td colspan=\"2\">0.8070 / 0.2667</td><td colspan=\"2\">0.7621 / 0.4875</td><td>0.7722 / 0.6824</td></tr><tr><td colspan=\"2\">ReMIC w/o Recon</td><td colspan=\"2\">0.8567 / 0.3330</td><td colspan=\"2\">0.7289 / 0.5377</td><td>0.8469 / 0.7818</td></tr><tr><td>ReMIC</td><td/><td colspan=\"2\">0.4908 / 0.5427</td><td colspan=\"2\">0.2179 / 0.9232</td><td>0.3894 / 0.9150</td></tr><tr><td colspan=\"4\">ReMIC+Multi-Sample 0.4742 / 0.5493</td><td colspan=\"2\">0.2171 / 0.9263</td><td>0.3945 / 0.9116</td></tr></table>\n\nRaFD multi-expression translation results.\n\nTable content: <table><tr><td>Methods</td><td colspan=\"4\">Neutral NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM Angry Contemptuous Disgusted</td></tr><tr><td>MUNIT</td><td>0.1589 / 0.8177</td><td>0.1637 / 0.8156</td><td>0.1518 / 0.8319</td><td>0.1563 / 0.8114</td></tr><tr><td>StarGAN</td><td>0.1726 / 0.8206</td><td>0.1722 / 0.8245</td><td>0.1459 / 0.8506</td><td>0.1556 / 0.8243</td></tr><tr><td>CollaGAN</td><td>0.1867 / 0.7934</td><td>0.1761 / 0.7736</td><td>0.1856 / 0.7928</td><td>0.1823 / 0.7812</td></tr><tr><td colspan=\"2\">ReMIC w/o Recon 0.1215 / 0.8776</td><td>0.1335 / 0.8556</td><td>0.1192 / 0.8740</td><td>0.1206 / 0.8559</td></tr><tr><td>ReMIC</td><td>0.1225 / 0.8794</td><td>0.1290 / 0.8598</td><td>0.1217 / 0.8725</td><td>0.1177 / 0.8668</td></tr><tr><td>Methods</td><td colspan=\"4\">Fearful NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM NRMSE / SSIM Happy Sad Surprised</td></tr><tr><td>MUNIT</td><td>0.1714 / 0.7792</td><td>0.1623 / 0.8073</td><td>0.1677 / 0.7998</td><td>0.1694 / 0.7884</td></tr><tr><td>StarGAN</td><td>0.1685 / 0.7943</td><td>0.1522 / 0.8288</td><td>0.1620 / 0.8227</td><td>0.1634 / 0.7974</td></tr><tr><td>CollaGAN</td><td>0.1907 / 0.7442</td><td>0.1829 / 0.7601</td><td>0.1783 / 0.7766</td><td>0.1888 / 0.7495</td></tr><tr><td colspan=\"2\">ReMIC w/o Recon 0.1321 / 
0.8384</td><td>0.1399 / 0.8332</td><td>0.1284 / 0.8597</td><td>0.1333 / 0.8347</td></tr><tr><td>ReMIC</td><td>0.1316 / 0.8395</td><td>0.1383 / 0.8406</td><td>0.1301 / 0.8581</td><td>0.1276 / 0.8484</td></tr></table>\n\nMissing-domain segmentation: Dice scores are reported here.\n\nTable content: <table/>\n\nBraTS results: Missing-domain imputation for pretrained segmentation model (full tabel) ReMIC+Joint 0.915 / 0.859 / 0.710 0.910 / 0.740 / 0.430 0.909 / 0.858 / 0.716 0.860 / 0.823 / 0.691\n\nTable content: <table><tr><td colspan=\"2\">Methods</td><td>T1</td><td>T1Gd</td><td>T2</td><td>FLAIR</td></tr><tr><td/><td/><td>WT / TC / ET</td><td>WT / TC / ET</td><td>WT / TC / ET</td><td>WT / TC / ET</td></tr><tr><td/><td>Oracle</td><td/><td colspan=\"2\">0.910 / 0.849 / 0.708</td></tr><tr><td/><td>Zero</td><td colspan=\"4\">0.771 / 0.609 / 0.572 0.872 / 0.539 / 0.008 0.755 / 0.690 / 0.677 0.458 / 0.468 / 0.435</td></tr><tr><td/><td>Average</td><td colspan=\"4\">0.870 / 0.744 / 0.674 0.882 / 0.603 / 0.303 0.849 / 0.732 / 0.686 0.655 / 0.710 / 0.648</td></tr><tr><td>2D</td><td>NN MUNIT</td><td colspan=\"4\">0.883 / 0.765 / 0.660 0.871 / 0.564 / 0.186 0.811 / 0.720 / 0.642 0.534 / 0.669 / 0.614 0.886 / 0.785 / 0.679 0.872 / 0.552 / 0.187 0.882 / 0.781 / 0.682 0.408 / 0.541 / 0.527</td></tr><tr><td/><td>StarGAN</td><td colspan=\"4\">0.897 / 0.795 / 0.704 0.886 / 0.588 / 0.184 0.851 / 0.725 / 0.661 0.570 / 0.664 / 0.604</td></tr><tr><td/><td colspan=\"5\">CollaGAN 0.860 / 0.747 / 0.651 0.864 / 0.576 / 0.252 0.882 / 0.811 / 0.700 0.663 / 0.697 / 0.663</td></tr><tr><td/><td>ReMIC</td><td colspan=\"4\">0.909 / 0.834 / 0.714 0.899 / 0.669 / 0.354 0.905 / 0.855 / 0.709 0.853 / 0.807 / 0.691</td></tr><tr><td/><td>Oracle</td><td/><td colspan=\"2\">0.909 / 0.867 / 0.733</td></tr><tr><td/><td>Zero</td><td colspan=\"4\">0.876 / 0.826 / 0.694 0.884 / 0.574 / 0.020 0.901 / 0.865 / 0.728 0.661 / 0.730 / 0.643</td></tr><tr><td>3D</td><td>Average</td><td colspan=\"4\">0.880 / 0.814 / 0.640 0.854 / 0.618 / 0.282 0.838 / 0.801 / 0.695 0.713 / 0.732 / 0.675</td></tr><tr><td/><td>NN</td><td colspan=\"4\">0.890 / 0.829 / 0.703 0.859 / 0.538 / 0.081 0.790 / 0.799 / 0.704 0.472 / 0.686 / 0.607</td></tr><tr><td/><td>ReMIC</td><td colspan=\"4\">0.905 / 0.864 / 0.722 0.888 / 0.614 / 0.273 0.902 / 0.871 / 0.734 0.855 / 0.850 / 0.724</td></tr><tr><td colspan=\"2\">Methods</td><td>T1 WT / TC / ET</td><td>T1CE WT / TC / ET</td><td>T2 WT / TC / ET</td><td>FLAIR WT / TC / ET</td></tr><tr><td colspan=\"2\">Oracle</td><td/><td colspan=\"2\">0.910 / 0.849 / 0.708</td></tr><tr><td>Zero</td><td/><td colspan=\"4\">0.904 / 0.818 / 0.710 0.888 / 0.687 / 0.394 0.907 / 0.842 / 0.720 0.841 / 0.793 / 0.692</td></tr><tr><td colspan=\"2\">Average</td><td colspan=\"4\">0.905 / 0.798 / 0.685 0.898 / 0.603 / 0.312 0.897 / 0.803 / 0.663 0.846 / 0.768 / 0.663</td></tr><tr><td colspan=\"2\">ReMIC</td><td colspan=\"4\">0.908 / 0.783 / 0.676 0.897 / 0.685 / 0.382 0.901 / 0.815 / 0.698 0.851 / 0.779 / 0.665</td></tr><tr><td colspan=\"2\">ReMIC+Seg</td><td colspan=\"4\">0.911 / 0.819 / 0.687 0.902 / 0.708 / 0.411 0.910 / 0.839 / 0.716 0.850 / 0.792 / 0.671</td></tr></table>\n\nBraTS results: Missing-domain segmentation (full table)\n\nTable content: <table/>", "role": "user" }, { "content": "<think>Let me review this paper on \"Representational Disentanglement for Multi-Domain Image Completion\" to provide a comprehensive peer review.\n\nFirst, I'll summarize the paper:\n- The paper proposes a generative adversarial network (GAN) architecture called 
ReMIC that addresses the problem of multi-domain image completion (handling missing modalities/domains in datasets)\n- It focuses on disentangling representations into shared \"skeleton\" content code and domain-specific \"flesh\" style code\n- The approach can perform n-to-n image translation, rather than just 1-to-1 or n-to-1 like previous methods\n- The authors demonstrate how this framework can be extended to segmentation tasks\n- The method is tested on both medical images (BraTS and ProstateX datasets) and natural images (RaFD dataset)\n\nNow, I'll evaluate the scientific contributions and merit of this paper:\n\nStrengths:\n1. Addressing an important problem: The problem of missing modalities in medical imaging is significant and their solution has real practical value.\n2. Novelty: The representational disentanglement scheme with unified content encoder is a novel approach to n-to-n image translation.\n3. Comprehensive evaluation: They test on three datasets across both medical and facial image domains.\n4. Extension to segmentation: The authors show that their approach extends beyond just image completion to improve segmentation performance.\n5. Quantitative improvement: The method shows clear numerical improvements over baseline approaches.\n6. Flexibility: The method can handle any arbitrary number of missing modalities, making it highly practical.\n\nPotential weaknesses:\n1. Model complexity: The architecture involves multiple components and loss functions, which might affect reproducibility and implementation for others.\n2. Theoretical foundation: While the paper has strong empirical results, the theoretical justification for why representational disentanglement works better could be expanded.\n3. Computational requirements: The paper doesn't discuss the computational efficiency or training time relative to other approaches.\n4. Clinical validation: For the medical applications, there's no discussion of clinical validation by medical experts beyond quantitative metrics.\n\nThe paper seems well-written with clear organization, and the methods and experiments are thoroughly described. The results show significant improvements over state-of-the-art methods in both image completion and segmentation tasks.</think>\n\n[Summary] This paper proposes an image translation method that generates images of missing domains from images in multiple available domains. i.e., ReMIC. For achieving this, the authors employ using separated latent codes for content and style from shared representation. They compare the proposed model with StarGAN, MUNIT, CollaGAN on two medical datasets (BraTX, ProstateX) and RaFD datasets, using NRMSE, SSIM, and oracle GT. [Pros] - Task is novel and important, in particular, in medical domains. - Experiments are extensive and the results on medical data look promising. - Well written. [Cons] - Technical novelty of the method is incremental. Most components are from previous work. - Even if the authors argue that their model uses disentanglement, it is overclaiming that to separate style and content is disentanglement. There exsist several previous studies to separate style and content. To claim disentanglement, it is required that more specific information should be separated from styles such as hair-style and lip color. - The experimental setting is not consistent with task definition. Task definition assumes k-missing domains among n domains, i.e, (n-k)-to-k. However, experiments cover k=1 case only: (n-1)-to-1. This setting is similar to that of CollaGAN. 
I think it is not trivial to extend k to multiple missing domains. - Figure 1 can be improved. StarGAN performs 1-to-n mapping with target style one-hot encoding (The authors also refer to that in page 6), unlike Figure 1. - Experimental results on RaFD are not impressive. Could you evaluate the proposed model on high-resolution data such as CelebHQ? Also, for RaFD, other conventional metrics can be used such as FID. - A figure on the detailed architecture can help to understand the method.", "role": "assistant" } ]
2,020
19
0.970107
Representational Disentanglement for Multi-Domain Image Completion
"Multi-domain data are widely leveraged in vision applications to take advantage of complementary in(...TRUNCATED)
"Title: REPRESENTATIONAL DISENTANGLEMENT FOR MULTI-DOMAIN IMAGE COMPLETION\n\nAbstract: Multi-domain(...TRUNCATED)
"In this work, the authors proposed a deep learning-based method for synthesizing the missing modali(...TRUNCATED)
iclr2020
OpenReview
{"criticism":10,"example":5,"importance_and_relevance":0,"materials_and_methods":38,"praise":2,"pres(...TRUNCATED)
{"criticism":0.11494252873563218,"example":0.05747126436781609,"importance_and_relevance":0.0,"mater(...TRUNCATED)
0.954023
"Let me review this paper on \"Representational Disentanglement for Multi-Domain Image Completion\" (...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
12,166
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020
3,886
0.759604
Representational Disentanglement for Multi-Domain Image Completion
"Multi-domain data are widely leveraged in vision applications to take advantage of complementary in(...TRUNCATED)
"Title: REPRESENTATIONAL DISENTANGLEMENT FOR MULTI-DOMAIN IMAGE COMPLETION\n\nAbstract: Multi-domain(...TRUNCATED)
"Overall, I think the authors tackle a meaningful task of translating images when not all modes are (...TRUNCATED)
iclr2020
OpenReview
{"criticism":14,"example":0,"importance_and_relevance":2,"materials_and_methods":15,"praise":2,"pres(...TRUNCATED)
{"criticism":0.3783783783783784,"example":0.0,"importance_and_relevance":0.05405405405405406,"materi(...TRUNCATED)
1.756757
"Let me review this paper on \"Representational Disentanglement for Multi-Domain Image Completion\" (...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
12,166
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020
1
0.987938
"Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial popula(...TRUNCATED)
"Activity of populations of sensory neurons carries stimulus information in both the temporal and th(...TRUNCATED)
"Title: BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORIZATION FOR DECODING SINGLE TRIAL(...TRUNCATED)
" In this paper, the authors present an adaptation of space-by-time non-negative matrix factorizatio(...TRUNCATED)
iclr2018
OpenReview
{"criticism":2,"example":0,"importance_and_relevance":0,"materials_and_methods":6,"praise":0,"presen(...TRUNCATED)
{"criticism":0.2222222222222222,"example":0.0,"importance_and_relevance":0.0,"materials_and_methods"(...TRUNCATED)
1.222222
"I'll review this scientific paper on \"BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORI(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
6,908
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,018
11
0.976972
"Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial popula(...TRUNCATED)
"Activity of populations of sensory neurons carries stimulus information in both the temporal and th(...TRUNCATED)
"Title: BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORIZATION FOR DECODING SINGLE TRIAL(...TRUNCATED)
" In this contribution, the authors propose an improvement of a tensor decomposition method for deco(...TRUNCATED)
iclr2018
OpenReview
{"criticism":0,"example":1,"importance_and_relevance":1,"materials_and_methods":6,"praise":1,"presen(...TRUNCATED)
{"criticism":0.0,"example":0.14285714285714285,"importance_and_relevance":0.14285714285714285,"mater(...TRUNCATED)
2.142857
"I'll review this scientific paper on \"BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORI(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
6,908
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,018
64
0.956668
"Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial popula(...TRUNCATED)
"Activity of populations of sensory neurons carries stimulus information in both the temporal and th(...TRUNCATED)
"Title: BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORIZATION FOR DECODING SINGLE TRIAL(...TRUNCATED)
" This study proposes the use of non-negative matrix factorization accounting for baseline by subtra(...TRUNCATED)
iclr2018
OpenReview
{"criticism":2,"example":0,"importance_and_relevance":2,"materials_and_methods":19,"praise":4,"prese(...TRUNCATED)
{"criticism":0.07692307692307693,"example":0.0,"importance_and_relevance":0.07692307692307693,"mater(...TRUNCATED)
1.846154
"I'll review this scientific paper on \"BASELINE-CORRECTED SPACE-BY-TIME NON-NEGATIVE MATRIX FACTORI(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
6,908
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,018
2
0.984691
Learning from Partially-Observed Multimodal Data with Variational Autoencoders
"Learning from only partially-observed data for imputation has been an active research area. Despite(...TRUNCATED)
"Title: LEARNING FROM PARTIALLY-OBSERVED MULTI-MODAL DATA WITH VARIATIONAL AUTOENCODERS\n\nAbstract:(...TRUNCATED)
"Summary: This paper proposes to impute multimodal data when certain modalities are present. The aut(...TRUNCATED)
iclr2020
OpenReview
{"criticism":4,"example":0,"importance_and_relevance":1,"materials_and_methods":14,"praise":3,"prese(...TRUNCATED)
{"criticism":0.21052631578947367,"example":0.0,"importance_and_relevance":0.05263157894736842,"mater(...TRUNCATED)
1.473684
"I need to provide a thorough peer review of this paper on Variational Selective Autoencoders (VSAE)(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
13,603
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020
195
0.936464
Learning from Partially-Observed Multimodal Data with Variational Autoencoders
"Learning from only partially-observed data for imputation has been an active research area. Despite(...TRUNCATED)
"Title: LEARNING FROM PARTIALLY-OBSERVED MULTI-MODAL DATA WITH VARIATIONAL AUTOENCODERS\n\nAbstract:(...TRUNCATED)
"The paper proposed variational selective autoencoders (VSAE) to learn from partially-observed multi(...TRUNCATED)
iclr2020
OpenReview
{"criticism":14,"example":4,"importance_and_relevance":2,"materials_and_methods":25,"praise":6,"pres(...TRUNCATED)
{"criticism":0.2153846153846154,"example":0.06153846153846154,"importance_and_relevance":0.030769230(...TRUNCATED)
1.369231
"I need to provide a thorough peer review of this paper on Variational Selective Autoencoders (VSAE)(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
13,603
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020
632
0.899001
Learning from Partially-Observed Multimodal Data with Variational Autoencoders
"Learning from only partially-observed data for imputation has been an active research area. Despite(...TRUNCATED)
"Title: LEARNING FROM PARTIALLY-OBSERVED MULTI-MODAL DATA WITH VARIATIONAL AUTOENCODERS\n\nAbstract:(...TRUNCATED)
"The paper proposes a novel training method for variational autoencoders that allows using partially(...TRUNCATED)
iclr2020
OpenReview
{"criticism":12,"example":4,"importance_and_relevance":2,"materials_and_methods":29,"praise":1,"pres(...TRUNCATED)
{"criticism":0.26666666666666666,"example":0.08888888888888889,"importance_and_relevance":0.04444444(...TRUNCATED)
1.733333
"I need to provide a thorough peer review of this paper on Variational Selective Autoencoders (VSAE)(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
13,603
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020
1,713
0.841589
Learning from Partially-Observed Multimodal Data with Variational Autoencoders
"Learning from only partially-observed data for imputation has been an active research area. Despite(...TRUNCATED)
"Title: LEARNING FROM PARTIALLY-OBSERVED MULTI-MODAL DATA WITH VARIATIONAL AUTOENCODERS\n\nAbstract:(...TRUNCATED)
"This paper proposes variational selective autoencoders (VSAE) to learn the joint distribution model(...TRUNCATED)
iclr2020
OpenReview
{"criticism":2,"example":0,"importance_and_relevance":1,"materials_and_methods":11,"praise":3,"prese(...TRUNCATED)
{"criticism":0.13333333333333333,"example":0.0,"importance_and_relevance":0.06666666666666667,"mater(...TRUNCATED)
1.533333
"I need to provide a thorough peer review of this paper on Variational Selective Autoencoders (VSAE)(...TRUNCATED)
"You are a member of the scientific community tasked with peer review. Review the following paper co(...TRUNCATED)
13,603
[{"content":"You are a member of the scientific community tasked with peer review. Review the follow(...TRUNCATED)
2,020

ICLR Peer Reviews Dataset

This dataset contains peer reviews from ICLR (International Conference on Learning Representations), paired with the corresponding paper titles, abstracts, and full texts, and standardized into a common schema with per-dimension sentence counts and an overall quality score.

Dataset Structure

  • title: Title of the paper
  • abstract: Abstract of the paper
  • full_text: Full text of the paper (if available)
  • review: The peer review text
  • source: Source conference/year (e.g., iclr2020)
  • review_src: Original source identifier
  • year: Year of the conference (extracted from source)
  • overall_score: Computed overall quality score
  • dimension_scores: Scores for specific dimensions (criticism, praise, etc.)
  • criteria_count: Count of sentences matching specific criteria
  • conversations: Chat-style format of the review

Usage

from datasets import load_dataset
ds = load_dataset("JerMa88/ICLR_Peer_Reviews")
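
Building on the snippet above, the sketch below shows one way the documented fields could be read back; it is illustrative only. The split name "train", and the possibility that dimension_scores is serialized as a JSON string rather than a mapping, are assumptions based on the preview rather than guarantees of the published schema.

from datasets import load_dataset
import json

# Load the dataset. The split name "train" is an assumption; if it does not
# exist, call load_dataset without a split and inspect the returned DatasetDict.
ds = load_dataset("JerMa88/ICLR_Peer_Reviews", split="train")

# Inspect one record.
row = ds[0]
print(row["title"])
print(row["source"], row["year"], row["overall_score"])
print(row["review"][:300])  # first part of the review text

# dimension_scores may be stored as a mapping or as a JSON-encoded string;
# normalize it to a dict before use.
scores = row["dimension_scores"]
if isinstance(scores, str):
    scores = json.loads(scores)
print(scores)

# Keep only ICLR 2020 reviews and report how many there are.
iclr2020 = ds.filter(lambda r: r["source"] == "iclr2020")
print(len(iclr2020), "reviews from ICLR 2020")

Filtering on the source field rather than year keeps the example independent of how the year column was derived.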