Magnetic Resonance Imaging

Volume 64, December 2019, Pages 77-89

Original contribution
Automatic brain tissue segmentation in fetal MRI using convolutional neural networks

https://doi.org/10.1016/j.mri.2019.05.020

Abstract

MR images of fetuses allow clinicians to detect brain abnormalities at an early stage of development. The cornerstone of volumetric and morphologic analysis in fetal MRI is segmentation of the fetal brain into different tissue classes. Manual segmentation is cumbersome and time consuming, hence automatic segmentation could substantially simplify the procedure. However, automatic brain tissue segmentation in these scans is challenging owing to artifacts including intensity inhomogeneity, caused in particular by spontaneous fetal movements during the scan. Unlike methods that estimate the bias field to remove intensity inhomogeneity as a preprocessing step to segmentation, we propose to perform segmentation using a convolutional neural network that exploits images with synthetically introduced intensity inhomogeneity as data augmentation. The method first uses a CNN to extract the intracranial volume. Thereafter, another CNN with the same architecture is employed to segment the extracted volume into seven brain tissue classes: cerebellum, basal ganglia and thalami, ventricular cerebrospinal fluid, white matter, brain stem, cortical gray matter and extracerebral cerebrospinal fluid. To make the method applicable to slices showing intensity inhomogeneity artifacts, the training data was augmented by applying a combination of linear gradients with random offsets and orientations to image slices without artifacts. To evaluate the performance of the method, the Dice coefficient (DC) and mean surface distance (MSD) per tissue class were computed between automatic and manual expert annotations. When the training data was enriched with simulated intensity inhomogeneity artifacts, the average DC over all tissue classes and images increased from 0.77 to 0.88, and the MSD decreased from 0.78 mm to 0.37 mm. These results demonstrate that the proposed approach can potentially replace or complement preprocessing steps, such as bias field correction, and thereby improve segmentation performance.

Introduction

Important neurodevelopmental changes occur in the last trimester of pregnancy, i.e., between 30 and 40 weeks of gestation, including volumetric growth, myelination and cortical gyrification [[1], [2], [3]]. Magnetic resonance imaging (MRI) is widely used to non-invasively assess and monitor the developmental status of the fetal brain in utero [4,5]. The cornerstone of volumetric and morphologic analysis in fetal MRI is the segmentation of the fetal brain into different tissue classes, such as white and gray matter. Performing this segmentation manually, however, is extremely time-consuming and requires a high level of expertise. The reasons are not only the complex convoluted shapes of the different tissues, but also the limited image quality due to imaging artifacts. Fetal MR imaging is particularly challenging in this regard because the receiver coils can only be positioned on the maternal body and not closer to the anatomy of interest. Furthermore, movements of the fetus relative to the mother can be controlled and predicted only to a limited extent. Fetal motion in particular therefore degrades image quality and causes artifacts such as intensity inhomogeneity (Fig. 1).

Because manual annotation is very time consuming and additionally hampered by these artifacts, a reliable automatic tissue segmentation tool would provide a valuable alternative, especially if it could give detailed fetal brain tissue segmentations in the presence of artifacts. To cope with imaging artifacts, previous approaches in the literature performed the segmentation in images reconstructed from multiple 2D acquisitions. Most fetal MRI scans are acquired in 2D using single-shot fast spin-echo (SSFSE) sequences [6]. Artifacts such as intensity inhomogeneity may therefore appear only in some slices, e.g., due to movements during the acquisition of those slices, while being absent from the immediately neighboring slices (Fig. 2). Volumetric reconstruction approaches are typically based on the acquisition of several stacks of 2D slices in axial, sagittal and coronal orientation. These stacks are registered to a common coordinate space so that they can be combined into a single reconstructed 3D volume, thus removing artifacts that affect only some slices as well as inter-slice inconsistencies [[7], [8], [9], [10], [11]].

For automatic segmentation of fetal brain tissue in reconstructed MR volumes, Habas et al. [12] proposed a method using an atlas-based expectation maximization (EM) model to segment white matter (WM), gray matter, germinal matrix, and extracerebral cerebrospinal fluid (eCSF). Prior to performing the segmentation, another EM model was used for bias field correction. Gholipour et al. [13] proposed a method for segmentation of the ventricles in fetal MRI. As a preprocessing step, in addition to using volumetric reconstructions, intensity inhomogeneity was corrected using the non-parametric entropy maximization method [14]. An initial segmentation was obtained using STAPLE [15]; the final segmentation was then derived with a probabilistic shape model that incorporates intensity and local spatial information. Serag et al. [16] proposed an atlas-based brain segmentation method for both neonatal and fetal MRI. Fetal scans were reconstructed into a single 3D brain volume using the slice-to-volume reconstruction method described in [7], and intensity inhomogeneity was removed using the N4 algorithm [17]. Thereafter, the fetal brain scans were segmented into cortex, ventricles and hemispheres.

Deep learning methods have recently been very successful and have often outperformed traditional machine learning and model-based methods in medical image analysis [18], including brain MRI [19,20]. A major strength of these networks is their ability to extract the features relevant for the task directly from the data. There is no longer a need to first derive a set of handcrafted features from the image as input to a classifier or model; instead, the networks learn to extract and interpret the features relevant to the segmentation task. Therefore, deep learning methods often achieve better performance than traditional machine learning methods with handcrafted features. However, CNNs usually require large sets of diverse training data. To enlarge the training set and to ensure robustness to expected variability in the data, some studies use data augmentation techniques such as random rotation, random translation and random noise injection [21,22]. We therefore hypothesize that, while artifacts such as intensity inhomogeneity are challenging for traditional approaches and therefore normally require preprocessing of the images, CNNs may be able to adapt and become invariant to such artifacts if they are presented with enough examples during training. However, manual segmentation of slices with intensity inhomogeneity is much more cumbersome than segmentation of artifact-free slices, so a sizable training database is difficult to obtain. We therefore propose to tackle one of the most common artifacts in fetal MRI, namely intensity inhomogeneity, by randomly adding synthetic intensity inhomogeneity to slices for which a corresponding reference segmentation is available. Because only the intensity values are altered, and not the orientation or shape of structures in the image, the same reference segmentation can still be used as ground truth. This tailored data augmentation strategy affects network training only. At inference time, in contrast to previous methods, no complex preprocessing of the image is required.
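As an illustration of this augmentation strategy, the following NumPy sketch applies a combination of linear intensity gradients with random offsets and orientations multiplicatively to a 2D slice. The number of gradients, the strength range and the clipping are illustrative assumptions, not the exact parameters used in this work.

```python
import numpy as np

def add_linear_inhomogeneity(slice_2d, n_gradients=2, max_strength=0.4, rng=None):
    """Augment a 2D slice with a synthetic linear intensity-inhomogeneity field."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = slice_2d.shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")

    field = np.ones((h, w))
    for _ in range(n_gradients):
        theta = rng.uniform(0.0, 2.0 * np.pi)      # random orientation of the gradient
        strength = rng.uniform(0.0, max_strength)  # slope of the linear gradient
        offset = rng.uniform(-0.5, 0.5)            # random offset along the gradient
        field += strength * (np.cos(theta) * xx + np.sin(theta) * yy + offset)

    field = np.clip(field, 0.1, None)              # keep the multiplicative field positive
    augmented = slice_2d.astype(np.float32) * field
    # Only intensities change; the slice's reference segmentation remains valid.
    return augmented, field
```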

Furthermore, previous methods focused on segmenting the brain into the three main tissue classes: WM, cortical gray matter and ventricles. However, characteristics of other tissue classes, such as the cerebellum (CB) and brain stem (BS), are important to understand and predict healthy or aberrant brain development in preterm infants of similar gestational age as fetuses [23]. The cerebellum is of particular clinical interest as it is one of the fastest growing brain regions during the last trimester of pregnancy [24].

Another challenge for segmentation of the fetal brain in MRI is the large field of view of these scans. Since the fetus is scanned in utero, the images also visualize parts of the maternal and the fetal body, and not only the head of the fetus as would be the case in regular brain MRI. Similar to previous publications [25,26], we therefore propose to first automatically segment the intracranial volume (ICV) of the fetus to identify the region of interest. A number of studies proposed segmentation of the ICV in fetal MRI [[27], [28], [29], [30]]. Following our previous work [29], we segment the ICV directly in the entire image to fully automatically detect a region of interest.

The method we propose performs segmentation of fetal brain tissues. The method first identifies the ICV in the fetal MRI slices using a convolutional neural network. Subsequently, the identified volume is segmented by another 2D convolutional neural network. Note that the proposed approach is applied to 2D slices of images reconstructed in a standard way, i.e. without reconstruction to high-resolution volumes. The contribution of this paper is twofold: First, we propose a data augmentation technique that synthesizes intensity inhomogeneity artifacts to improve the robustness against these artifacts. Second, the fetal brain is segmented into seven classes: CB, basal ganglia and thalami (BGT), ventricular cerebrospinal fluid (vCSF), WM, BS, cortical gray matter (cGM) and eCSF, in contrast to previous methods which focused on WM, cGM and cerebrospinal fluid only.

The remainder of this paper is organized as follows: in Section 2 the data set used for the method development and evaluation is described, in Section 3 the method for fetal brain segmentation and the simulation of intensity inhomogeneity are described, in Section 4 the evaluation method is given. The performed experiments and their results are presented in Section 5, followed by a discussion of the method and the results in Section 6. Our conclusions are given in the final section.


Fetal MRI dataset

This study includes T2-weighted MR scans of 12 fetuses (22.9–34.6 weeks post menstrual age). Images were acquired on a Philips Achieva 3T scanner at the University Medical Center (UMC) Utrecht, the Netherlands, using a turbo fast spin-echo sequence. Repetition time (TR) was set to 2793 ms, echo time (TE) was set to 180 ms and the flip angle to 110°. The acquired voxel size was 1.25 × 1.25 × 2.5 mm³, the reconstructed voxel size was 0.7 × 0.7 × 1.25 mm³, and the reconstruction matrix was

Method

To simplify the brain tissue segmentation and allow the segmentation method to focus on the fetal brain only, the fetal ICV is first automatically extracted. Subsequently, the identified ICV is automatically segmented into seven tissue classes. An overview of this pipeline is shown in Fig. 4. The same network architecture, described in Section 3.1, was used for ICV extraction and brain tissue segmentation.
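The sketch below outlines this two-stage inference for a single slice. Here `icv_net` and `tissue_net` stand in for the two CNNs with identical architectures; their interfaces (a `predict` method returning probability maps) are assumptions made for illustration only and do not reflect released code.

```python
import numpy as np

def segment_slice(slice_2d, icv_net, tissue_net):
    """Stage 1: extract the intracranial volume; stage 2: label seven tissue classes."""
    icv_prob = icv_net.predict(slice_2d)            # hypothetical ICV network output
    icv_mask = icv_prob > 0.5                       # binary intracranial volume mask
    brain_only = slice_2d * icv_mask                # suppress maternal and fetal body
    tissue_probs = tissue_net.predict(brain_only)   # per-class probability maps (H, W, classes)
    labels = np.argmax(tissue_probs, axis=-1)       # seven tissue classes
    return np.where(icv_mask, labels, 0)            # background outside the ICV
```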

Evaluation

The automatic brain tissue segmentation was evaluated by means of the Dice coefficient (DC) for volume overlap and the mean surface distance (MSD) between manual reference segmentation and automatically obtained segmentation. In the fetal MRI scans, these metrics were calculated in 2D, i.e., per slice, and were then averaged across all slices. In the neonatal MRI scans, following previous work [26], these metrics were calculated in 3D.
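For reference, the sketch below computes both metrics for one tissue class on a single 2D slice. The boundary extraction and the symmetric averaging follow common conventions and are not necessarily identical to the implementation used in this work; the default in-plane spacing is the reconstructed voxel size of 0.7 × 0.7 mm reported in Section 2.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(auto, ref):
    """Area overlap between automatic and reference binary masks of one tissue class."""
    intersection = np.logical_and(auto, ref).sum()
    return 2.0 * intersection / (auto.sum() + ref.sum())

def mean_surface_distance(auto, ref, spacing=(0.7, 0.7)):
    """Symmetric mean distance (mm) between the contours of two binary masks."""
    def boundary(mask):
        return np.logical_xor(mask, ndimage.binary_erosion(mask))

    b_auto, b_ref = boundary(auto), boundary(ref)
    # Distance from every pixel to the nearest contour pixel of the other mask.
    dist_to_ref = ndimage.distance_transform_edt(~b_ref, sampling=spacing)
    dist_to_auto = ndimage.distance_transform_edt(~b_auto, sampling=spacing)
    d_auto = dist_to_ref[b_auto]    # automatic contour -> reference contour
    d_ref = dist_to_auto[b_ref]     # reference contour -> automatic contour
    return (d_auto.sum() + d_ref.sum()) / (len(d_auto) + len(d_ref))
```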

Experiments and results

In our experiments, we first evaluated the overall segmentation performance of the proposed pipeline with respect to the different tissue classes. To evaluate the influence of the proposed intensity inhomogeneity augmentation in addition to standard augmentation techniques, the segmentation performance before and after applying intensity inhomogeneity augmentation was compared. Furthermore, we evaluated whether this augmentation technique is able to generalize to different data, i.e.,

Discussion

We presented a pipeline for automatic segmentation of the fetal brain into seven tissue classes in MRI. The method consists of two fully convolutional networks with identical U-net architectures. The first network extracts ICV and the second network performs segmentation of the brain into seven tissue classes. The results demonstrate that segmentation using the proposed data augmentation with simulated intensity inhomogeneity artifacts leads to accurate segmentations of the brain tissue

Conclusion

We presented an automatic method for brain tissue segmentation in fetal MRI into seven tissue classes using convolutional neural networks. We demonstrated that the proposed method learns to cope with intensity inhomogeneity artifacts by augmenting the training data with synthesized intensity inhomogeneity artifacts. This can potentially replace or complement preprocessing steps, such as bias field corrections, and help to substantially improve the segmentation performance.

Acknowledgments

This study was sponsored by the Research Program Specialized Nutrition of the Utrecht Center for Food and Health, through a subsidy from the Dutch Ministry of Economic Affairs, the Utrecht Province and the Municipality of Utrecht. Furthermore, we thank Nienke Heuvelink for her help with creating the manual fetal brain segmentations.
