
Research Article: Head & Neck

Deep Learning for Synthetic CT from Bone MRI in the Head and Neck

S. Bambach and M.-L. Ho
American Journal of Neuroradiology August 2022, 43 (8) 1172-1179; DOI: https://doi.org/10.3174/ajnr.A7588
S. Bambach: Abigail Wexner Research Institute at Nationwide Children’s Hospital, Columbus, Ohio
M.-L. Ho: Department of Radiology, Nationwide Children’s Hospital, Columbus, Ohio

Abstract

BACKGROUND AND PURPOSE: Bone MR imaging techniques enable visualization of cortical bone without the need for ionizing radiation. Automated conversion of bone MR imaging to synthetic CT is highly desirable for downstream image processing and eventual clinical adoption. Given the complex anatomy and pathology of the head and neck, deep learning models are ideally suited for learning such mapping.

MATERIALS AND METHODS: This was a retrospective study of 39 pediatric and adult patients with bone MR imaging and CT examinations of the head and neck. For each patient, MR imaging and CT data sets were spatially coregistered using multiple-point affine transformation. Paired MR imaging and CT slices were generated for model training, using 4-fold cross-validation. We trained 3 different encoder-decoder models: Light_U-Net (2 million parameters) and VGG-16 U-Net (29 million parameters) without and with transfer learning. Loss functions included mean absolute error, mean squared error, and a weighted average. Performance metrics included Pearson R, mean absolute error, mean squared error, bone precision, and bone recall. We investigated model generalizability by training and validating across different conditions.

RESULTS: The Light_U-Net architecture quantitatively outperformed VGG-16 models. Mean absolute error loss resulted in higher bone precision, while mean squared error yielded higher bone recall. Performance metrics decreased when using training data captured only in a different environment but increased when local training data were augmented with those from different hospitals, vendors, or MR imaging techniques.

CONCLUSIONS: We have optimized a robust deep learning model for conversion of bone MR imaging to synthetic CT, which shows good performance and generalizability when trained on different hospitals, vendors, and MR imaging techniques. This approach shows promise for facilitating downstream image processing and adoption into clinical practice.

ABBREVIATIONS:

DL = deep learning
GRE = gradient recalled-echo
MAE = mean absolute error
MSE = mean squared error
TE = echo time

MR imaging is the workhorse of clinical neuroradiology, providing high tissue contrast for the evaluation of CNS structures. However, CT remains the first-line technique for rapid neurologic screening and cortical bone assessment. A novel class of MR imaging techniques uses very short TE to capture weak and short-lived proton signals from dry tissues such as cortical bone. As MR imaging hardware and software have advanced, “black-bone” MR imaging techniques have progressively improved from gradient recalled-echo (GRE) to ultrashort-TE and zero-TE approaches.1-3 TE values are on the order of 1–2 ms for GRE, 50–200 μs for ultrashort-TE, and 0–25 μs for zero-TE (Online Supplemental Data). With shorter TEs, the detectable signal from cortical bone increases, scan times become faster, acoustic noise from gradient switching decreases, and resistance to motion and susceptibility artifacts increases.4,5

Bone MR imaging offers the potential for both rapid initial screening and comprehensive “one-stop-shop” imaging, without the need for ionizing radiation exposure. Thus, bone MR imaging is a promising alternative to CT for bone imaging. However, current barriers to implementation involve direct comparison of bone MR imaging and CT with regard to multiple factors, including accessibility, cost, convenience, patient awareness, clinician understanding, diagnostic accuracy, and interventional utility.6-8 A key step in facilitating clinical understanding and adoption is the automated conversion of bone MR imaging to synthetic CT-like contrast, which is highly desirable for image interpretation, 3D printing, and surgical planning applications. Conventional image-processing approaches, such as intensity thresholding, logarithmic inversion, histogram subtraction, and atlas- and voxel-based techniques, have all been investigated.9-12 In clinical practice, these point operation–based techniques are hampered by false-negatives in the setting of undermineralization (young children or osteopenia), very thin bone (pathologic remodeling), and multiple bone-air interfaces (facial bones, skull base), as well as false-positives from other short-T2 tissues (fascia, dura, ligaments, cartilage, hardware, hemorrhage, mucoid secretions, air) and complex tissue interfaces (tumor, trauma, inflammation).

Deep learning (DL) offers a promising approach for synthetic CT generation, as it is already routinely used for tissue classification and image-mapping purposes. DL algorithms use multiple layers of neighborhood-based operations to derive complex information from diverse input data sets, including MR imaging signal properties, normal anatomic structures, and pathologic changes. In neuroradiology, DL for synthetic CT has been explored in adult volunteers and a few patient case series, the most frequent applications being radiation therapy planning and PET attenuation correction. Despite these early studies suggesting feasibility, synthetic CT approaches have only been successful when applied to anatomically simpler regions such as the torso and extremities or normal adult skull anatomy at low spatial resolution.13-17 For most head and neck clinical applications, existing synthetic CT algorithms fail because of the wide variety of normal anatomic variants and pathologic conditions. Without sufficient clinical training data and human supervision, DL-powered bone MR imaging conversion approaches show limitations similar to those of conventional processing, yielding a variety of false-negatives and false-positives.

Robust synthetic CT algorithms have not yet been developed for head and neck applications, are not routinely used in clinical decision-making, and do not carry added value over source MR images interpreted by experienced radiologists. Therefore, at this time, bone MR imaging is a useful alternative to CT for diagnostic imaging but requires a radiologist’s understanding of imaging physics, head and neck anatomy, and pathologic disease processes to optimally analyze the source images. Improvement of automated synthetic CT algorithms could help address existing barriers to technology implementation by providing greater understanding for untrained radiologists and clinicians as well as facilitating downstream processing such as 3D printing and surgical navigation. Therefore, the objective of our study was to optimize a convolutional neural network algorithm for bone MR imaging conversion to synthetic CT based on our diverse data set of patients imaged at different institutions, on different platforms, and with different bone MR imaging techniques. In particular, we sought to develop a robust DL model that would show good performance and generalizability, thus facilitating downstream image processing and adoption into clinical practice.

MATERIALS AND METHODS

Data Acquisition

This was an institutional review board–approved retrospective study with de-identified data sequentially collected from 2 institutions. The patient flowchart for study selection is described in the Online Supplemental Data. Originally, 53 patients were included with bone MR imaging and CT of the head and neck performed within a 6-month time period for bone evaluation. Following image review by a neuroradiologist with expertise in bone MR imaging, 14 patients were excluded on the basis of nondiagnostic image quality (MR imaging and/or CT) due to motion, hardware, or other artifacts. This exclusion resulted in a final data set of 39 patients: 16 patients from institution 1 and 23 patients from institution 2. Subjects spanned a broad age range (neonate to 35 years; median age, 4.5 years) with 23 (59%) male and 16 (41%) female patients. Clinical indications for imaging were suspected craniosynostosis (n = 10), genetic syndrome (n = 5), tumor (n = 4), trauma (n = 4), preoperative planning (n = 10), and postoperative follow-up (n = 6). Anatomic imaging coverage included the head, face, neck, and/or cervical spine, based on the indication. For bone MR imaging, an additional bone sequence was added to the examination on the basis of a clinical request and/or the indication for bone imaging. A variety of platforms, techniques, and field strengths were used, depending on the institution and scanner availability.

For MR imaging, 13 patients were scanned on Siemens Healthineers (Erlangen, Germany) platforms (3T: Magnetom Prisma; 1.5T: Magnetom Aera) and 26 on GE Healthcare (Chicago, Illinois) platforms (3T: Discovery MR750, MR750w). Bone MR imaging sequences were adapted from commercially available options and included 3D zero-TE, ultrashort-TE, and GRE sequences with a 20- to 30-cm FOV and 0.7- to 1-mm isotropic resolution. Sample parameters are provided in the Online Supplemental Data. Most scans were performed at 3T, with 2 scans performed at 1.5T due to device-compatibility considerations. For CT, 23 examinations were performed on Siemens Healthineers platforms (Somatom Definition Flash, Somatom Definition Edge, Somatom Definition AS, Somatom Force, Somatom Sensation 64); 9, on GE Healthcare platforms (Discovery CT750 HD, Optima CT660, LightSpeed VCT); and 7, on Canon Medical Systems (Tustin, California) platforms (Aquilion ONE) using a standard multidetector technique (age-adjusted radiation dose, 0.5- to 1-mm section thickness, bone reconstruction kernel).

Image Coregistration and Preprocessing

The goal of the image-processing pipeline (Online Supplemental Data) was to generate a diverse set of spatially aligned bone MR imaging and CT pairs for neural network training. A neuroradiologist with experience in bone MR imaging coregistered all MR imaging and CT images on the basis of key anatomic landmarks and inspected the final matched image sets for quality assurance. First, multiple-point affine transformation of MR imaging to CT data was performed in OsiriX MD (http://www.osirix-viewer.com) to yield coregistered 3D volumes. All remaining image-preprocessing steps were implemented in Matlab (MathWorks). The image volumes were resampled to achieve isotropic resolution in all dimensions and then were divided into paired 2D MR imaging and CT slices in axial, coronal, and sagittal planes. While synthesizing only axial CT views may be sufficient for many applications, we were interested in deriving the largest and most diverse training set possible. Each image pair was masked and cropped to disregard irrelevant background artifacts during training. Masks were created by binarizing the CT image (using Otsu’s method to find the ideal threshold18) and finding the largest convex area in the binary image. The same convex mask was also applied to the paired MR images. Images were cropped to the smallest possible square containing and centering the masked content.
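For illustration, the masking step can be sketched in code. The authors implemented preprocessing in Matlab; the following is a minimal Python re-interpretation of the described logic (Otsu binarization, convex hull of the largest region, shared mask, centered square crop), with illustrative names and simplified boundary handling rather than the authors' actual implementation.

```python
# Hypothetical Python sketch of the CT-driven masking/cropping step
# (the authors used Matlab); names and boundary handling are illustrative.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import convex_hull_image

def mask_and_crop(ct_slice, mr_slice):
    """Binarize CT with Otsu's threshold, keep the convex hull of the
    largest connected region, apply the same mask to the paired MR slice,
    and crop both to the smallest square centered on the masked content."""
    binary = ct_slice > threshold_otsu(ct_slice)
    labels = label(binary)
    largest = max(regionprops(labels), key=lambda r: r.area)
    mask = convex_hull_image(labels == largest.label)
    ct_m, mr_m = ct_slice * mask, mr_slice * mask
    rows, cols = np.nonzero(mask)
    side = max(rows.ptp(), cols.ptp()) + 1          # square side length
    r0 = max((rows.min() + rows.max()) // 2 - side // 2, 0)
    c0 = max((cols.min() + cols.max()) // 2 - side // 2, 0)
    return ct_m[r0:r0 + side, c0:c0 + side], mr_m[r0:r0 + side, c0:c0 + side]
```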

Finally, each section was resized to the resolution required for neural network input. The resulting images were saved with an 8-bit gray-scale depth based on the entire dynamic range for MR imaging slices and bone window/level for CT slices. On average, this pipeline generated 550 MR imaging/CT pairs per patient (approximately 22,000 image slices total). Additionally, we artificially augmented our training data by randomly flipping (horizontally or vertically), rotating (by <10°), or cropping (by <10%) image pairs during training.
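As a concrete sketch of the paired augmentation, the snippet below applies identical random flips, a small rotation (<10°), and a small crop (<10%) to an MR/CT pair; the sampling details are assumptions, not the authors' exact code.

```python
# Paired augmentation sketch: the same random transform is applied to
# both images so MR/CT alignment is preserved. Details are illustrative.
import numpy as np
from scipy.ndimage import rotate

def augment_pair(mr, ct, rng):
    if rng.random() < 0.5:                        # random horizontal flip
        mr, ct = np.fliplr(mr), np.fliplr(ct)
    if rng.random() < 0.5:                        # random vertical flip
        mr, ct = np.flipud(mr), np.flipud(ct)
    angle = rng.uniform(-10, 10)                  # rotate by <10 degrees
    mr = rotate(mr, angle, reshape=False, order=1)
    ct = rotate(ct, angle, reshape=False, order=1)
    h, w = mr.shape                               # crop by <10% overall
    dh = int(h * rng.uniform(0, 0.1)) // 2
    dw = int(w * rng.uniform(0, 0.1)) // 2
    return mr[dh:h - dh, dw:w - dw], ct[dh:h - dh, dw:w - dw]

mr_aug, ct_aug = augment_pair(np.ones((224, 224)), np.ones((224, 224)),
                              np.random.default_rng(0))
```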

We note here that masking the MR image based on a registered CT image would not be possible in a real-world scenario (because CT would not be available). However, we found this approach to work much more robustly, which was necessary to automate the masking pipeline given the large number of training slices. Our goal was to have clean training data; inference on a nonmasked MR image is still possible.

Neural Network Architectures

We tested 3 encoder-decoder networks based on U-Net models.19 For the first model, we built a lightweight baseline model (Light_U-Net) based on the original U-Net architecture but decreased the number of filters (channels) for each block, for a total of ∼2 million trainable parameters. We further changed the filter size of the transposed convolutions from 2 × 2 to 3 × 3 so that the decoder path exactly mirrored the encoder path, avoiding the need to crop the filter responses in the skip connections. The output layer was reduced to a single channel with a sigmoid activation function, allowing the model to produce a gray-scale image rather than a binary segmentation mask. For the second model, we used the well-established VGG-16 convolutional neural network architecture20 for the encoder path and mirrored it for the decoder path. The resulting model had a larger number of filters and filter blocks, resulting in ∼29 million total trainable parameters. This enabled us to use transfer learning as a third model variation, VGG-16 U-Net transfer learning, in which filter weights in the encoder path were initialized with values learned from the public ImageNet (https://image-net.org/index) data set, in which a large variety of annotated objects were classified from >14 million conventional color photographs21 (Online Supplemental Data).
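A schematic Keras sketch of such a lightweight U-Net is shown below. A base filter count of 16 (one-quarter of the original U-Net's 64) lands near the stated ∼2 million parameters, but the exact filter counts and depth here are assumptions for illustration.

```python
# Illustrative Light_U-Net-style architecture; filter counts are assumed.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    for _ in range(2):                # 3x3 conv -> batch norm -> ReLU, twice
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def light_unet(input_shape=(224, 224, 1), base=16):
    inputs = layers.Input(input_shape)
    x, skips = inputs, []
    for i in range(4):                            # encoder
        x = conv_block(x, base * 2 ** i)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 16)                  # bottleneck
    for i in reversed(range(4)):                  # decoder mirrors encoder
        x = layers.Conv2DTranspose(base * 2 ** i, 3, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])   # skip connection, no cropping
        x = conv_block(x, base * 2 ** i)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # gray-scale output
    return Model(inputs, out)
```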

Model Implementation

All DL models were implemented in Python (Python Software Foundation) using the TensorFlow library (www.tensorflow.org) with the Keras interface (Massachusetts Institute of Technology). All experiments were run on a high-performance computing cluster using either an NVIDIA Tesla V100 or NVIDIA Tesla P100 GPU (Nvidia, Santa Clara, California). The input to the model was a single-channel gray-scale bone MR image with a resolution of 224 × 224 pixels to match the fixed resolution of the VGG-16 architecture. Each 3 × 3 convolutional layer was followed by a batch-normalization layer and a ReLU activation layer. The VGG-16 U-Net architecture, which was originally designed for color images, required a 3-channel image input, so the gray-scale image was repeated across all 3 channels. Because the encoder path was an exact copy of the original VGG-16 architecture, its 3 × 3 convolutional layers were not followed by a batch-normalization layer but only had a ReLU activation. For the decoder path, batch normalization was still added after every 3 × 3 convolutional layer. For both networks, the synthetic CT image was produced via a 1 × 1 convolutional layer with a sigmoid activation, creating a continuous gray-scale image on the interval of 0–1 and a resolution of 224 × 224 pixels. All models were optimized with stochastic gradient descent using the Adam method22 with default parameters and a batch size of 128 images. Network weights were initialized randomly, except for the VGG-16 U-Net transfer learning variant, in which weights in the encoder path were pretrained on ImageNet. No weights were frozen during optimization.
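For the transfer-learning variant, the input handling and training configuration described above might look as follows in Keras; the decoder and data pipeline are omitted, and this is a sketch rather than the authors' code.

```python
# Sketch of the VGG-16 U-Net transfer-learning setup (decoder omitted).
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

gray_in = layers.Input((224, 224, 1))
# Repeat the gray-scale channel x3 to satisfy VGG-16's color input
rgb = layers.Concatenate()([gray_in, gray_in, gray_in])
encoder = VGG16(include_top=False, weights="imagenet", input_tensor=rgb)
# ...mirror the encoder to build the decoder, ending in a 1x1 sigmoid conv,
# then compile and train (no weights frozen):
# model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mae")
# model.fit(train_pairs, batch_size=128, ...)
```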

Loss Functions

We experimented with optimizing 3 different loss functions: mean absolute error (MAE, also called L1 loss), mean squared error (MSE, also called L2 loss), and a weighted sum of both:

\[ \mathcal{L}_{\text{mixed}} = \frac{1}{N} \sum_{i=1}^{N} \left( \mathrm{MAE}_i + \alpha \cdot \mathrm{MSE}_i \right), \]

where N is the total number of image pairs in the training set, MAE_i and MSE_i are the errors for the i-th pair, and α is a coefficient that was selected empirically as 4.4, resulting in approximately equal contribution of L1 and L2 to the total loss.
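Expressed in TensorFlow, the mixed loss is a one-liner; the sketch below assumes the weighting form shown above (MAE plus α times MSE) with α = 4.4.

```python
# Mixed L1/L2 loss sketch, assuming the form MAE + alpha * MSE.
import tensorflow as tf

ALPHA = 4.4  # chosen so L1 and L2 contribute roughly equally

def mixed_loss(y_true, y_pred):
    l1 = tf.reduce_mean(tf.abs(y_true - y_pred))     # MAE (L1) term
    l2 = tf.reduce_mean(tf.square(y_true - y_pred))  # MSE (L2) term
    return l1 + ALPHA * l2

# model.compile(optimizer="adam", loss=mixed_loss)
```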

Model Training and Validation

Because image slices from the same patient are strongly correlated compared with slices from different patients, we trained and evaluated our models on data from separate patients. For every experiment, we performed a patient-level 4-fold cross-validation, with each model trained on three-quarters of the patients and then tested on the remaining quarter. Reported results were aggregated across all 4 models.
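Patient-level splitting maps directly onto scikit-learn's GroupKFold; the toy sketch below, with stand-in arrays, shows how all slices from a given patient land on one side of each split.

```python
# Patient-level 4-fold cross-validation sketch with stand-in data.
import numpy as np
from sklearn.model_selection import GroupKFold

slices = np.zeros((400, 224, 224, 2))        # toy paired MR/CT slices
patient_ids = np.repeat(np.arange(8), 50)    # 8 patients, 50 slices each

for fold, (tr, te) in enumerate(
        GroupKFold(n_splits=4).split(slices, groups=patient_ids)):
    # No patient's slices appear in both the training and test folds
    assert set(patient_ids[tr]).isdisjoint(patient_ids[te])
    print(f"fold {fold}: test patients {sorted(set(patient_ids[te]))}")
```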

Because neural network optimization is stochastic in nature (random initialization and random batching), training on the same data set multiple times may converge to a different model. We, therefore, repeated each 4-fold cross-validation experiment 10 times and reported average performance and 95% confidence intervals across the 10 independent runs.

Neural network models additionally require an internal validation set to prevent overfitting. For this purpose, a random 15% of slices from the training data were held out during training. After each training epoch, we computed the internal validation loss and stopped training the model once that validation loss had not decreased for at least 5 epochs. We selected the model weights with the smallest internal validation loss up to that point.
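This stopping rule corresponds to Keras's built-in EarlyStopping callback with a patience of 5 epochs and best-weight restoration, sketched below for illustration.

```python
# Early stopping on the internal validation loss, as described above.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                  # stop after 5 epochs without improvement
    restore_best_weights=True,   # keep weights with the smallest val loss
)
# model.fit(x_train, y_train,
#           validation_split=0.15,   # 15% internal hold-out
#           batch_size=128, callbacks=[early_stop])
```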

Performance Metrics

Global performance metrics were calculated pixel-wise across the image data sets and included MAE, MSE, and the Pearson correlation coefficient R. To express MAE and MSE in terms of Hounsfield units, we rescaled the neural network output on the basis of a window width of 2000 HU. In addition, we quantified the degree of bone overlap between ground truth CT and synthetic CT by thresholding both into binary bone maps. Given a threshold t, let B_CT and B_sCT denote the sets of pixels above threshold in the ground truth and synthetic CT, respectively. We defined bone precision, bone recall (sensitivity), and bone Dice score as

\[ \mathrm{precision}(t) = \frac{|B_{\mathrm{CT}} \cap B_{\mathrm{sCT}}|}{|B_{\mathrm{sCT}}|}, \qquad \mathrm{recall}(t) = \frac{|B_{\mathrm{CT}} \cap B_{\mathrm{sCT}}|}{|B_{\mathrm{CT}}|}, \qquad \mathrm{Dice}(t) = \frac{2\,|B_{\mathrm{CT}} \cap B_{\mathrm{sCT}}|}{|B_{\mathrm{CT}}| + |B_{\mathrm{sCT}}|}. \]

Thresholding was done on a grid of thresholds ranging from 40% gray level to 70% gray level (Online Supplemental Data). We report the average precision, recall (sensitivity), and Dice score across all thresholds.
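A minimal NumPy sketch of these overlap metrics is shown below, averaging precision, recall, and Dice over a grid of gray-level thresholds; the grid spacing is an assumption.

```python
# Bone-overlap metrics averaged over gray-level thresholds (sketch).
import numpy as np

def bone_overlap(ct, sct, thresholds=np.linspace(0.40, 0.70, 7)):
    """ct, sct: gray-scale images on [0, 1]. To express MAE/MSE in HU,
    errors on this scale are rescaled by the 2000-HU window width."""
    prec, rec, dice = [], [], []
    for t in thresholds:
        b_ct, b_sct = ct > t, sct > t                  # binary bone maps
        overlap = np.sum(b_ct & b_sct)
        prec.append(overlap / max(np.sum(b_sct), 1))   # |CT∩sCT| / |sCT|
        rec.append(overlap / max(np.sum(b_ct), 1))     # |CT∩sCT| / |CT|
        dice.append(2 * overlap / max(np.sum(b_ct) + np.sum(b_sct), 1))
    return np.mean(prec), np.mean(rec), np.mean(dice)
```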

Model Generalizability

Because our full data set contained images acquired at different hospitals, as well as with different imaging vendors and bone MR imaging techniques, we conducted a series of experiments to evaluate how well model performance generalizes across these different dimensions. All models were based on Light_U-Net with MAE loss. For each test set, baseline model performance was computed using patient-based 4-fold cross-validation with a training set from the same data subset (vendor, hospital, or MR imaging technique). These baseline results were compared with a model trained only on data from a separate subset, as well as a model trained on augmented data including both the current and separate subsets (again with 4-fold cross-validation).

RESULTS

Model Architectures and Loss Functions

Performance comparison of the various neural network models and 3 loss functions is summarized in the Table, with visual comparison of model results in Fig 1 and loss functions in Fig 2. Results are based on 10 repetitions of a patient-based 4-fold cross-validation among the 16 patients from institution 1, which contained the highest quality and most carefully curated data. Among all model architectures, Light_U-Net achieved the lowest test MAE and MSE when trained with MAE and MSE loss, respectively. Light_U-Net models also achieved the highest correlation coefficients across the board. When trained on the mixture loss, Light_U-Net also achieved a lower test MAE and MSE than both VGG U-Net and VGG U-Net transfer learning. Adding transfer learning to VGG U-Net tended to increase the test performance, though differences between VGG U-Net and VGG U-Net transfer learning were not always significant.

Table: Four-fold cross-validation results for different model and loss combinations.

FIG 1. Comparison of different encoder-decoder models. The first column shows real MR imaging and real CT. Subsequent columns show synthetic CTs generated by Light_U-Net, VGG U-Net, and VGG U-Net transfer learning, as well as pixel-wise difference maps between synthetic CT and real CT. Red indicates that synthetic CT is darker than real CT; blue, that synthetic CT is brighter than real CT (refer to the online version for colors).

FIG 2. Comparison of different loss functions using a Light_U-Net model. The first column shows real MR imaging and real CT. Subsequent columns show synthetic CTs generated when using a loss function based on MAE, MSE, and a mixed combination, as well as pixel-wise difference maps between synthetic CT and real CT. Red indicates that synthetic CT is darker than real CT; blue, that synthetic CT is brighter than real CT (refer to the online version for colors).

When we compared loss functions, models trained on MAE loss naturally achieved a lower validation MAE than those trained on MSE loss and vice versa, with the mixture loss falling in between. MAE loss achieved a significantly higher mean bone precision across all network architectures. Visually, the synthetic CT images showed sharper edge contrast with crisper bone detail. In addition, relatively fewer pixels were assigned bone density (white signal) on synthetic CT, indicating higher specificity, a lower false-positive rate, and a higher false-negative rate for bone. MSE tended to achieve a higher mean bone recall (sensitivity) with various network architectures, though the differences were not statistically significant. Visually, the synthetic CT images showed blurrier margins and more homogenized bone detail. In addition, relatively more pixels were assigned bone density (white signal) on synthetic CT, indicating a higher sensitivity, a higher false-positive rate, and a lower false-negative rate for bone. In general, MAE loss tended to undercall bone, and MSE loss tended to overcall bone, with the mixture loss producing intermediate image effects.

Overall, the Light_U-Net architecture models outperformed or tied the other models in all metrics, with different loss functions allowing adjustment among higher bone precision, recall, or overlap (Dice score). Additional examples of synthetic CT images in axial, coronal, and sagittal views are provided in the Online Supplemental Data. When reviewed by expert neuroradiologists, the computationally optimized model (Light_U-Net, MAE) yielded visibly superior results compared with previously reported synthetic CT algorithms (eg, conventional logarithmic inversion and vendor-provided processing tools). For example, our algorithm enabled delineation of bone microstructure in typically false-negative areas of thin bone (facial bones, skull base, remodeled bone). In addition, our algorithm better excluded false-positive areas such as the fascia and mucoid secretions. Finally, the synthetic CT images showed distinction of nonbone tissues, including soft tissue, fat, and air, that was comparable with the true CT.

Model Generalizability

Results of generalizability experiments across different hospitals, vendors, and bone MR imaging techniques are summarized in the Online Supplemental Data. Training on additional data from a different hospital, vendor, or technique significantly improves performance in terms of MAE, MSE, and the correlation coefficient for all conditions, even when the added patients are few in number compared with the reference data set. Conversely, when models are trained only on data from a different hospital, vendor, or MR imaging technique, performance decreases significantly across the board.

DISCUSSION

Model Architectures and Loss Functions

With regard to DL architecture, there are 2 classes of models that are suitable for image-to-image translation: encoder-decoder networks and conditional generative adversarial networks. Generative adversarial networks have the distinct advantage of learning to synthesize realistic-looking images when paired images from the source and target domain (eg, coregistered MR imaging and CT slices) are unavailable during training.23-26 In the presence of paired CT/MR imaging training data, recent experiments suggest that encoder-decoder networks tend to outperform generative adversarial networks in the CT/MR imaging domain in terms of MAE, MSE, and other metrics.27 We selected the U-Net architecture in particular because its skip connections between each encoder and decoder layer allow precise spatial information from the MR imaging to be propagated to the synthetic CT.

While transfer learning has been traditionally considered helpful when training large models for tasks with relatively small data sets (as is often the case for medical imaging), our study suggests that for MR imaging-to-CT image synthesis, smaller models with fewer training parameters may be more suitable. This result is supported by recent systematic studies that found that the transfer accuracy (specifically with models pretrained on ImageNet) is very sensitive to how exactly the pretraining was done.28-30 For example, many common forms of regularization may increase ImageNet accuracy but are less suited for transfer learning. An alternative transfer learning approach for future experiments could include finding a related image-translation task for which paired training data are available on a large scale. In general, if more training data are available, larger models may still be able to perform better for this task.

In terms of error minimization, low loss based on pixel-level statistics does not ensure a visually convincing and spatially accurate image rendering. We attempted to numerically quantify synthetic CT image quality by measuring bone precision, recall, and Dice scores on the basis of multiple gray-level thresholds. In addition, clinical assessment of synthetic CT images was performed by a neuroradiologist with expertise in bone MR imaging. Both numerically and visually, there were competing trade-offs in MAE-versus-MSE loss, and these trends persisted across all network architectures. This persistence may be because MAE is computationally more tolerant of abrupt intensity changes between neighboring pixels, allowing small local errors and less bulk density assignment of bone. Therefore, MAE loss achieves higher precision, higher specificity, a lower false-positive rate, and a higher false-negative rate for bone. Visually, this results in a high-contrast image with sharply defined edges and a tendency to undercall bone. On the other hand, MSE loss penalizes individual outliers more heavily and so enforces a more universally balanced error. Therefore, MSE loss achieves higher recall (sensitivity), a higher false-positive rate, and a lower false-negative rate for bone. Visually, these findings result in a smoother and more regularized image with bulk density assignment to larger areas and a tendency to overcall bone. Using a mixture loss allows a balance among these competing factors, suggesting that the weighting coefficient α could be titrated depending on the clinical use case.

As previously mentioned, prior synthetic CT articles have used conventional or DL-based approaches in anatomically simpler regions, including the normal adult head, torso, and extremities, for nondiagnostic applications such as radiation therapy planning and PET attenuation correction.13-17 More recent work has also described conventional or DL-powered approaches to synthetic CT using other MR imaging sequences such as GRE, T1, and T2.31,32 The physics of these sequences is inherently less sensitive to cortical bone, so postprocessing approaches are destined to be less accurate. Indeed, the sample synthetic CT output from these articles is low-resolution and insufficient for diagnostic radiology use.

Review of the computationally optimized model (Light_U-Net, MAE) by expert neuroradiologists showed clear potential clinical value over existing conventional and DL algorithms. Our synthetic CT algorithm visibly recaptured bone microstructure in areas pushing the limits of the MR imaging technique, generated fewer false-negative and false-positive bone assignments, and enabled distinction among nonbone tissues. Given that the network architectures and loss functions we used are similar to those described in prior DL studies, our improved results are best attributed to the use of real-world clinical data.

Advancement of clinical implementation will need to include large-scale systematic human reviews of DL algorithms to quantify the usefulness for diagnostic evaluation and interventional planning. At our institution, we are conducting a noninferiority trial of bone MR imaging versus CT, with CT representing the criterion standard technique or ground truth. Expert radiologists are independently evaluating CT, bone MR imaging, and synthetic CT images (Light_U-Net, MAE) to provide numeric scores (0–10) for visibility of key anatomic landmarks (calvaria, sutures, fontanelles, orbits, nose, jaw, teeth, paranasal sinuses, skull base, temporal bone, and cervical spine). For the patients analyzed in this study, mean CT landmark ratings ranged from 9.4 (SD, 0.52) for the calvaria to 9.1 (SD, 0.91) for the temporal bone. For MR imaging, the highest rated landmark was also the calvaria (mean, 9.0 [SD, 0.86]) and the lowest was the temporal bone (mean, 7.2 [SD, 1.39]). For synthetic CT, the highest rated landmark was the calvaria (mean, 8.1 [SD, 0.92]) and the lowest was the paranasal sinus (mean, 6.8 [SD, 2.31]). These preliminary data suggest that landmark visibility on bone MR imaging and synthetic CT is slightly lower than on real CT but sufficient to make most clinical diagnoses.

Furthermore, we are comparing the suitability of CT, bone MR imaging, and synthetic CT data sets for 3D anatomic modeling and virtual surgical planning. Biomedical engineers are processing imaging volumes via bone segmentation, mesh triangulation, and surface generation. Conventional anatomic modeling pipelines use CT with density thresholding to identify bone. Therefore, source bone MR imaging with multiple dark structures is difficult and time-consuming to manually segment. In our experience, synthetic CT algorithms greatly facilitate the 3D processing workflow, though as noted by radiologists, anatomic accuracy is lower in challenging areas such as the facial bones and skull base. For each patient, we coregister image data and, for each point on the reference CT surface, calculate the spatial deviation Δ to the nearest point on the test surface (synthetic CT), displayed as a color heat map. We can then calculate statistical metrics over the entire point cloud (mean, range, SD, interquartile range). Based on surgical accuracy criteria, we can also compute the percentage of data falling within the clinically acceptable tolerance interval Δ = (–2 mm, +2 mm). For the patients analyzed in this study, 86% (SD, 0.18) of all MR imaging surface data falls within ±2 mm of coregistered CT surface data. The largest areas of deviation are attributed to missing MR imaging data around regions of hardware and difficult-to-segment anatomic areas, which will guide further investigation.33-36
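The surface-deviation analysis can be sketched with a nearest-neighbor query over the two surface point clouds; the snippet below uses unsigned distances and random stand-in points, whereas the actual pipeline operates on segmented surface meshes.

```python
# Sketch of the surface-deviation metrics with stand-in point clouds.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
ref_pts = rng.uniform(0, 100, (5000, 3))              # CT surface points (mm)
test_pts = ref_pts + rng.normal(0, 1, ref_pts.shape)  # synthetic CT surface

delta, _ = cKDTree(test_pts).query(ref_pts)  # nearest-point distance per vertex
print(f"mean {delta.mean():.2f} mm, SD {delta.std():.2f} mm, "
      f"IQR {np.subtract(*np.percentile(delta, [75, 25])):.2f} mm, "
      f"{100 * np.mean(delta <= 2.0):.1f}% within the 2-mm tolerance")
```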

Future comparative effectiveness studies will need to account for the relative risks and benefits of clinical workflow, ionizing radiation exposure, examination duration, anesthesia requirements, diagnostic quality, and treatment outcomes. For example, bone MR imaging represents a key alternative for at-risk patients in whom radiation exposure must be minimized or eliminated, ie, children, pregnant women, and patients with cancer-predisposition syndromes. In such patients, CT dose reduction can yield poor image quality below a certain dose threshold. Therefore, ultra-low-dose CT versus no-dose bone MR imaging may yield a more realistic and equitable image comparison.7,33-36

Model Generalizability

In general, DL approaches benefit from larger and more broadly representative training data. This study is limited by the relatively small sample size of 39 patients, which nevertheless represents the largest documented database of paired bone MR imaging and CT examinations in clinical patients. Because referral patterns can vary across clinicians and institutions, we chose to include all available head and neck imaging cases to maximize the volume and diversity of the data set. Our study cohort includes varied patient ages, backgrounds, and disease processes with generalizable real-world imaging data, including motion and artifacts. We standardized the preprocessing and conversion of these volumetric data sets into a unique image repository of approximately 22,000 2D paired MR imaging and CT image slices. It would be advisable for multiple institutions interested in bone MR imaging and CT to create a multicenter consortium that can establish best practices with regard to clinical referrals, bone MR imaging techniques, image preprocessing, data sharing, and model development to further increase the available volume and scope of training data. As enrollment numbers increase, it may be possible to develop algorithms tailored to specific clinical indications. Such a collaborative effort would help democratize access among radiologists, clinicians, and patients worldwide.

Our cross-validation experiments evaluated the impact of different hospitals, vendors, and bone MR imaging techniques on model generalizability. These generalizability experiments showed that training on an augmented data set that includes a different hospital, vendor, or technique significantly improves model performance. Conversely, when models are trained only on disparate data sets, performance decreases significantly across the board. Taken together, these results suggest that blindly applying a model trained only on an outside data set can be dangerous due to inherent data variations, but augmenting a local model with additional data sets can boost overall performance. These are key considerations for any institution looking to practically implement bone MR imaging and synthetic CT. Future computational work will involve further model optimization and customization of problem-specific loss functions. We are also considering processing input data in patches, which would permit assembly of higher-resolution output images than our current model produces.37-41

Having established a robust DL pipeline with good performance and generalizability, we hope to facilitate adoption into clinical practice. At our institution, we are already seeing early promise for diagnostic and interventional applications. With larger clinical training sets, continued enhancement of synthetic CT algorithms will improve understanding among untrained radiologists and clinicians and streamline downstream processing for 3D printing and surgical navigation. Further technical advancements could even augment diagnostic value over source MR images, as suggested by the ability to reconstruct bone microstructure approaching MR imaging super-resolution. As synthetic CT algorithms become more robust and accessible, they may be increasingly accepted for clinical decision-making in head and neck imaging. True clinical validation will require comparative effectiveness research across different clinical use cases and multiple iterations of human expert input to guide selection and implementation of optimal algorithms.

CONCLUSIONS

We have optimized a DL model for conversion of bone MR imaging to synthetic CT in the head and neck on the basis of a patient data set inclusive of diverse demographics and clinical use cases. Our unique database consists of 39 paired bone MR imaging and CT examinations, scanned at 2 different institutions with varying MR imaging vendors and techniques. The Light_U-Net model outperformed more complex VGG U-Net models, even after the use of transfer learning. Selection of loss function on the basis of MAE resulted in better bone precision, while MSE tended to provide better bone recall. Performance metrics for a given model decreased when using training data captured only in a different environment and increased when local training data were augmented with those from different hospitals, vendors, and techniques. By establishing a robust DL-powered synthetic CT algorithm with good performance and generalizability, we hope to elevate the applicability of bone MR imaging with downstream image-processing and adoption into clinical practice.

Acknowledgments

We would like to thank Houchun Harry Hu, PhD, Mark Smith, MS, Aiming Lu, PhD, and Bhavani Selvaraj, MS, for their scientific expertise and collaboration. We would also like to thank Lisa Martin, MD, Diana Rodriguez, MD, Jeremy Jones, MD, Charles Elmaraghy, MD, Eric Sribnick, MD, Ibrahim Khansa, MD, and Gregory Pearson, MD, for their clinical expertise.

Footnotes

  • This work was supported by the American Society of Head & Neck Radiology Core Curriculum Fund William N. Hanafee Research Grant, Siemens/Radiological Society of North America Research Scholar Grant, RSCH1804, and the Society for Pediatric Radiology Pilot Award.

  • Disclosures: Mai-Lan Ho—RELATED: Grant: RSNA, SPR, ASHNR Comments: RSNA Research Scholar Grant, SPR Pilot Award, ASHNR William N. Hanafee Grant.* Support for Travel to Meetings for the Study or Other Purposes: RSNA, SPR, ASHNR Comments: RSNA Research Scholar Grant, SPR Pilot Award, ASHNR William N. Hanafee Grant.* UNRELATED—Royalties: McGraw-Hill Comments: Author, Neuroradiology Signs. *Money paid to the institution.

References

1. Du J, Hermida JC, Diaz E, et al. Assessment of cortical bone with clinical and ultrashort echo time sequences. Magn Reson Med 2013;70:697–704 doi:10.1002/mrm.24497 pmid:23001864
2. Schieban K, Weiger M, Hennel F, et al. ZTE imaging with enhanced flip angle using modulated excitation. Magn Reson Med 2015;74:684–93 doi:10.1002/mrm.25464 pmid:25242318
3. Eley KA, McIntyre AG, Watt-Smith SR, et al. “Black bone” MRI: a partial flip angle technique for radiation reduction in craniofacial imaging. Br J Radiol 2012;85:272–78 doi:10.1259/bjr/95110289 pmid:22391497
4. Tiberi G, Costagli M, Biagi L, et al. SAR prediction in adults and children by combining measured B1+ maps and simulations at 7.0 Tesla. J Magn Reson Imaging 2016;44:1048–55 doi:10.1002/jmri.25241 pmid:27042956
5. Alibek S, Vogel M, Sun W, et al. Acoustic noise reduction in MRI using Silent Scan: an initial experience. Diagn Interv Radiol 2014;20:360–63 doi:10.5152/dir.2014.13458 pmid:24808439
6. Eley KA, Watt-Smith SR, Golding SJ. “Black bone” MRI: a potential alternative to CT when imaging the head and neck: report of eight clinical cases and review of the Oxford experience. Br J Radiol 2012;85:1457–64 doi:10.1259/bjr/16830245 pmid:23091288
7. Lu A, Gorny KC, Ho ML. Zero TE MRI for craniofacial bone imaging. AJNR Am J Neuroradiol 2019;40:1562–66 doi:10.3174/ajnr.A6175 pmid:31467238
8. Cho SB, Baek HJ, Ryu KH, et al. Clinical feasibility of zero TE skull MRI in patients with head trauma in comparison with CT: a single-center study. AJNR Am J Neuroradiol 2019;40:109–15 doi:10.3174/ajnr.A5916 pmid:30545839
9. Hsu SH, Cao Y, Lawrence TS, et al. Quantitative characterizations of ultrashort echo (UTE) images for supporting air-bone separation in the head. Phys Med Biol 2015;60:2869–80 doi:10.1088/0031-9155/60/7/2869 pmid:25776205
10. Ghose S, Dowling JA, Rai R, et al. Substitute CT generation from a single ultra short time echo MRI sequence: preliminary study. Phys Med Biol 2017;62:2950–60 doi:10.1088/1361-6560/aa508a pmid:28306546
11. Kraus KM, Jäkel O, Niebuhr NI, et al. Generation of synthetic CT data using patient specific daily MR image data and image registration. Phys Med Biol 2017;62:1358–77 doi:10.1088/1361-6560/aa5200 pmid:28114107
12. Wiesinger F, Bylund M, Yang J, et al. Zero TE-based pseudo-CT image conversion in the head and its application in PET/MR attenuation correction and MR-guided radiation therapy planning. Magn Reson Med 2018;80:1440–51 doi:10.1002/mrm.27134 pmid:29457287
13. Leynes AP, Yang J, Wiesinger F, et al. Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI. J Nucl Med 2018;59:852–58 doi:10.2967/jnumed.117.198051 pmid:29084824
14. Gong K, Yang J, Kim K, et al. Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images. Phys Med Biol 2018;63:125011 doi:10.1088/1361-6560/aac763 pmid:29790857
15. Nie D, Cao X, Gao Y, et al. Estimating CT image from MRI data using 3D fully convolutional networks. Deep Learn Data Label Med Appl (2016) 2016;2016:170–78 doi:10.1007/978-3-319-46976-8_18 pmid:29075680
16. Andreasen D, Van Leemput K, Hansen RH, et al. Patch-based generation of a pseudo CT from conventional MRI sequences for MRI-only radiotherapy of the brain. Med Phys 2015;42:1596–605 doi:10.1118/1.4914158 pmid:25832050
17. Boukellouz W, Moussaoui A. Magnetic resonance-driven pseudo CT image using patch-based multi-modal feature extraction and ensemble learning with stacked generalization. Journal of King Saud University: Computer and Information Sciences 2021;33:999–1007
18. Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 1979;9:62–66 doi:10.1109/TSMC.1979.4310076
19. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI). arXiv 1505.04597 [cs.CV] 2015. https://arxiv.org/abs/1505.04597. Accessed March 30, 2021
20. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv 1409.1556 2015. https://arxiv.org/abs/1409.1556v4. Accessed March 30, 2021
21. Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database. In: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, Florida. June 20–25, 2009
22. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv 1412.6980 2017. https://arxiv.org/abs/1412.6980. Accessed March 30, 2021
23. Goodfellow I, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Quebec, Canada. December 8–13, 2014; 2672–80
24. Wolterink JM, Dinkla AM, Savenije MH, et al. Deep MR to CT synthesis using unpaired data. Simulation and Synthesis in Medical Imaging. Lecture Notes in Computer Science. arXiv 1708.01155 [cs.CV] 2017. https://arxiv.org/abs/1708.01155. Accessed March 30, 2021
25. Zhu JY, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy. October 22–29, 2017 doi:10.1109/ICCV.2017.244
26. Isola P, Zhu JY, Zhou T, et al. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii. July 21–26, 2017 doi:10.1109/CVPR.2017.632
27. Li W, Li Y, Qin W, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant Imaging Med Surg 2020;10:1223–36 doi:10.21037/qims-19-885 pmid:32550132
28. Kornblith S, Shlens J, Le QV. Do better ImageNet models transfer better? In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, California. June 15–20, 2019 doi:10.1109/CVPR.2019.00277
29. Raghu M, Zhang C, Kleinberg J, et al. Transfusion: understanding transfer learning for medical imaging. arXiv 1902.07208 2019. https://arxiv.org/abs/1902.07208. Accessed March 30, 2021
30. Anwar SM, Majid M, Qayyum A, et al. Medical image analysis using convolutional neural networks: a review. J Med Syst 2018;42:226 doi:10.1007/s10916-018-1088-1 pmid:30298337
31. Boulanger M, Nunes JC, Chourak H, et al. Deep learning methods to generate synthetic CT from MRI in radiotherapy: a literature review. Phys Med 2021;89:265–81 doi:10.1016/j.ejmp.2021.07.027 pmid:34474325
32. Spadea MF, Maspero M, Zaffino P, et al. Deep learning based synthetic-CT generation in radiotherapy and PET: a review. Med Phys 2021;48:6537–66 doi:10.1002/mp.15150 pmid:34407209
33. Bambach S, Ho ML. Bone MRI: can it replace CT? 2nd AI Award. In: Proceedings of the American Society of Functional Neuroradiology Artificial Intelligence Workshop, February 5, 2021
34. Smith M, Bambach S, Selvaraj B, et al. Zero-TE MRI: potential applications in the oral cavity and oropharynx. Top Magn Reson Imaging 2021;30:105–15 doi:10.1097/RMR.0000000000000279 pmid:33828062
35. Kobayashi N, Bambach S, Ho ML. Ultrashort echo-time MR imaging of the pediatric head and neck. Magn Reson Imaging Clin N Am 2021;29:583–93 doi:10.1016/j.mric.2021.06.008 pmid:34717846
36. Wiesinger F, Ho ML. Zero-TE MRI: principles and applications in the head and neck. Br J Radiol 2022 June 10. [Epub ahead of print]
37. Aouadi S, Vasic A, Paloor S, et al. Generation of synthetic CT using multi-scale and dual-contrast patches for brain MRI-only external beam radiotherapy. Phys Med 2017;42:174–84 doi:10.1016/j.ejmp.2017.09.132 pmid:29173912
38. Dinkla AM, Florkow MC, Maspero M, et al. Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network. Med Phys 2019;46:4095–104 doi:10.1002/mp.13663 pmid:31206701
39. Roy S, Carass A, Jog A, et al. MR to CT registration of brains using image synthesis. Proc SPIE Int Soc Opt Eng 2014;9034 doi:10.1117/12.2043954 pmid:25057341
40. Lee J, Carass A, Jog A, et al. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning. Proc SPIE Int Soc Opt Eng 2017;10133:1013311 doi:10.1117/12.2254571 pmid:29142336
41. Klages P, Benslimane I, Riyahi S, et al. Patch-based generative adversarial neural network models for head and neck MR-only planning. Med Phys 2020;47:626–42 doi:10.1002/mp.13927 pmid:31733164
  • Received March 30, 2021.
  • Accepted after revision June 13, 2022.
  • © 2022 by American Journal of Neuroradiology