PT - JOURNAL ARTICLE
AU - A. Hagiwara
AU - Y. Otsuka
AU - M. Hori
AU - Y. Tachibana
AU - K. Yokoyama
AU - S. Fujita
AU - C. Andica
AU - K. Kamagata
AU - R. Irie
AU - S. Koshino
AU - T. Maekawa
AU - L. Chougar
AU - A. Wada
AU - M.Y. Takemura
AU - N. Hattori
AU - S. Aoki
TI - Improving the Quality of Synthetic FLAIR Images with Deep Learning Using a Conditional Generative Adversarial Network for Pixel-by-Pixel Image Translation
AID - 10.3174/ajnr.A5927
DP - 2019 Jan 10
TA - American Journal of Neuroradiology
4099 - http://www.ajnr.org/content/early/2019/01/10/ajnr.A5927.short
4100 - http://www.ajnr.org/content/early/2019/01/10/ajnr.A5927.full
AB - BACKGROUND AND PURPOSE: Synthetic FLAIR images are of lower quality than conventional FLAIR images. Here, we aimed to improve synthetic FLAIR image quality using deep learning with pixel-by-pixel translation through conditional generative adversarial network training.
MATERIALS AND METHODS: Forty patients with MS were prospectively included and scanned (3T) to acquire synthetic MR imaging and conventional FLAIR images. Synthetic FLAIR images were created with the SyMRI software. Acquired data were divided into 30 training and 10 test datasets. A conditional generative adversarial network was trained to generate improved FLAIR images from raw synthetic MR imaging data, using conventional FLAIR images as targets. The peak signal-to-noise ratio, normalized root mean square error, and Dice index of MS lesion maps were calculated for both synthetic and deep learning FLAIR images against conventional FLAIR images. Lesion conspicuity and the presence of artifacts were visually assessed.
RESULTS: The peak signal-to-noise ratio and normalized root mean square error were significantly higher and lower, respectively, in generated-versus-synthetic FLAIR images in aggregate intracranial tissues and all tissue segments (all P < .001). The Dice index of lesion maps and visual lesion conspicuity were comparable between generated and synthetic FLAIR images (P = 1 and .59, respectively). Generated FLAIR images showed fewer granular artifacts (P = .003) and swelling artifacts (in all cases) than synthetic FLAIR images.
CONCLUSIONS: Using deep learning, we improved synthetic FLAIR image quality by generating FLAIR images that have contrast closer to that of conventional FLAIR images and fewer granular and swelling artifacts, while preserving lesion contrast.
Abbreviations: cGAN = conditional generative adversarial network; DL = deep learning; GAN = generative adversarial network; NRMSE = normalized root mean square error; PSNR = peak signal-to-noise ratio
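
The abstract evaluates image quality with PSNR, NRMSE, and the Dice index of lesion maps against conventional FLAIR references. The following is a minimal Python/NumPy sketch of those three metrics; the function names, the choice of the reference dynamic range as normalization, and the stand-in data are assumptions for illustration, not the authors' implementation.

# Sketch of the quantitative metrics named in the abstract: PSNR, NRMSE,
# and the Dice index of binary lesion masks, each computed against a
# conventional FLAIR reference volume. Names and details are illustrative.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the reference dynamic range as the peak."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    peak = float(reference.max() - reference.min())
    return float(10.0 * np.log10(peak ** 2 / mse))

def nrmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Root mean square error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))
    return float(rmse / (reference.max() - reference.min()))

def dice(lesion_ref: np.ndarray, lesion_test: np.ndarray) -> float:
    """Dice similarity index between two binary lesion masks."""
    lesion_ref = lesion_ref.astype(bool)
    lesion_test = lesion_test.astype(bool)
    overlap = np.logical_and(lesion_ref, lesion_test).sum()
    return float(2.0 * overlap / (lesion_ref.sum() + lesion_test.sum()))

# Example with random stand-in volumes; real use would load co-registered
# conventional, synthetic, and cGAN-generated FLAIR volumes plus lesion masks.
rng = np.random.default_rng(0)
conventional = rng.random((64, 64, 32))
generated = conventional + 0.05 * rng.standard_normal((64, 64, 32))
print(psnr(conventional, generated), nrmse(conventional, generated))

In this sketch the same two functions would be run once for the synthetic FLAIR volume and once for the cGAN-generated volume, so the two sets of scores can be compared against the shared conventional FLAIR reference, mirroring the generated-versus-synthetic comparison reported in the results.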