Computed tomography (CT) was publicly announced in 1972 [1, 2]. Since then, CT has evolved into a powerful and widely used diagnostic imaging tool, with approximately 70 million examinations performed in 2007 in the US alone [3]. While the great majority of CT examinations are requested for ‘routine’ head and body applications, the less commonly performed and more technically challenging examinations draw the most attention to CT technology in scientific publications and at trade shows. In addition, applications such as cardiac CT and CT perfusion have elicited greater public awareness of radiation dose, and have sparked new regulatory efforts with respect to radiation exposure [3].

European Radiology has been a principal outlet of scientific publications in the field of CT technology, cardiac CT, dual-source CT and dual-energy CT, to name just a few, over the last two decades. For example, at the time of writing, 44 of the 288 total articles on dual-source CT coronary angiography available via PubMed (US National Library of Medicine, Bethesda, MD, USA) have been published in European Radiology. Further important work on CT and radiation dose has also been published in this journal, and it is fair to say that European Radiology is one of the key resources for innovation, clinical application, and critical appraisal of computed tomography in the imaging literature.

The rapid evolution of CT with the constant introduction of apparently new technology obscures the fact that several of the ‘new’ techniques actually have their intellectual roots in the early days of CT. The purpose of this review is to illustrate and explain some of the latest innovations in CT together with their historic roots. Exploring earlier and simpler solutions to a given problem, or ideas that could not be implemented until now, may shed some light on current technology.

Iterative reconstruction—an old friend of CT

Iterative reconstruction is currently a hot topic in CT, and all the major CT manufacturers have recently introduced new iterative reconstruction algorithms. While this is certainly a new trend, it obscures the fact that the very first CT machines actually used an iterative technique, the algebraic reconstruction technique (ART), to solve the fundamental task of computed tomography, which is to reconstruct a cross-sectional image based on the attenuation measurements of X-rays transmitted through a patient’s body. In its simplest form, iterative solutions such as ART are mathematical trial and error procedures that gradually converge to the correct answer. A simplified example of ART is given in Fig. 1. Note that only one iteration is needed in this simple case. Typically, several iterations are required to reconstruct the image from the projection data. The early iterative reconstructions were quickly superseded by the so-called analytic reconstruction techniques, notably filtered back projection (FBP), which is currently the gold standard on all modern CT systems [4].

Fig. 1

Iterative reconstruction: The algebraic reconstruction technique (ART). To calculate the four pixel values in the image from the measured ray sums, we start with all pixel values at zero. Starting arbitrarily with the vertical rays, we find the ray sums in the estimated image to be zero, but the measured values are 11 and 9. These ‘errors’ are divided equally between the two pixels along each ray (5.5 in the two left pixels, 4.5 in the two right pixels), in a procedure called ‘back projection.’ Next, the horizontal ray sums in the estimated image (both 10) are compared with the measured values (12 and 8), and the resulting errors (+2 and −2) are backprojected along the horizontal rays. Finally, the errors in the diagonal ray sums are backprojected. In this simple case, the correct image is obtained in a single iteration, but typically, several iterations are required. (Figure modified from [34])
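For the interested reader, the back projection procedure of Fig. 1 can be written out in a few lines of Python. This is a minimal sketch of ART on the 2 × 2 example; since the caption does not state the measured diagonal ray sums, the values 11 and 9 are assumed here, chosen to be consistent with the other rays.

```python
import numpy as np

# Each row of A selects the two pixels crossed by one ray; b holds the measured
# ray sums from Fig. 1. The diagonal sums (11 and 9) are assumed for illustration.
A = np.array([[1, 0, 1, 0],   # left vertical ray:     a + c = 11
              [0, 1, 0, 1],   # right vertical ray:    b + d = 9
              [1, 1, 0, 0],   # top horizontal ray:    a + b = 12
              [0, 0, 1, 1],   # bottom horizontal ray: c + d = 8
              [1, 0, 0, 1],   # diagonal ray:          a + d = 11 (assumed)
              [0, 1, 1, 0]],  # diagonal ray:          b + c = 9  (assumed)
             dtype=float)
b = np.array([11, 9, 12, 8, 11, 9], dtype=float)

x = np.zeros(4)                       # start with all pixel values at zero
for ray, measured in zip(A, b):
    error = measured - ray @ x        # mismatch between measured and estimated ray sum
    x += error * ray / (ray @ ray)    # divide the error equally among the ray's pixels

print(x.reshape(2, 2))                # [[7. 5.]
                                      #  [4. 4.]] -- exact after a single pass
```

As in the figure, a single pass over all rays recovers the image exactly; with noisy or inconsistent data, the loop would be repeated for several iterations.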

Analytic reconstruction algorithms such as FBP rely on the exact mathematical relationship between the measured X-ray attenuation in the projection data and the pixel values in the corresponding image. Given exact projection data with infinite resolution, FBP produces an exact image. The mathematics for analytic reconstructions actually predate the era of CT. Allan Cormack, who was co-awarded the Nobel Prize with Sir Godfrey Hounsfield in 1979, pointed out in 1973 that “it has recently come to the author’s attention that the problem of determining a function in a plane from its line integrals was first solved by Johann Radon in 1917” [5].
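As an aside, the Radon transform and FBP are readily available in open-source software. The following sketch uses scikit-image (assuming a recent version; older releases name the `filter_name` argument simply `filter`) to simulate projection data from a test phantom and reconstruct it analytically:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                    # standard CT test image
theta = np.linspace(0., 180., 180, endpoint=False) # projection angles in degrees
sinogram = radon(phantom, theta=theta)             # line integrals = the Radon transform
fbp = iradon(sinogram, theta=theta, filter_name='ramp')  # filtered back projection
rms_error = np.sqrt(np.mean((fbp - phantom) ** 2)) # residual reconstruction error
```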

FBP assumes exact data, but in reality, the projection data from the scanner are noisy, and this noise is amplified by the filter in filtered back projection. In contrast, iterative techniques use a statistical model of the noise to improve the image with each iteration. These techniques can also incorporate prior assumptions about the image, for example that smooth images are more likely, and accordingly penalize large differences between adjacent pixels.
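A minimal sketch of this idea, assuming a toy one-dimensional ‘image’ and a purely illustrative system matrix, is an iterative gradient descent on a least-squares data term plus a smoothness penalty (real statistical methods additionally weight each ray according to its photon statistics):

```python
import numpy as np

# Minimize ||Ax - b||^2 + lam * ||Dx||^2 by gradient descent, where D penalizes
# differences between adjacent pixels (a smoothness prior). All values are toy data.
rng = np.random.default_rng(0)
n = 16
A = rng.random((32, n))                         # illustrative system matrix (ray weights)
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth "ground truth" profile
b = A @ x_true + rng.normal(0, 0.05, size=32)   # noisy measured ray sums

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # finite differences of neighboring pixels
lam, step = 0.1, 1e-3
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + lam * (D.T @ (D @ x))
    x -= step * grad                            # one iterative refinement step
```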

There are some downsides to using iterative reconstruction. Iterative reconstructions by definition repeat the reconstruction process several times, and are therefore much slower than analytic methods. All projection data must be available before the iterative improvement can begin. Image quality can deteriorate if the reconstruction process is allowed to proceed beyond a certain optimum number of cycles, due to overfitting. Finally, the quality of the result depends on the noise model and the assumptions made about the image.

The major advantage of iterative reconstruction is that it produces much better image quality than FBP in the setting of a very low signal-to-noise ratio. Iterative reconstructions were thus successfully introduced for emission tomography in nuclear medicine [6], which typically involves noisier and sparser projection data. Thanks to increasing computational power, this advantage can now be exploited for (transmission) CT as well, with potentially significant dose savings. Many iterative techniques only account for Poisson noise (which becomes important at low doses), but additional sources of error, such as beam hardening, scatter, and motion, contribute to metal streak artifacts; these artifacts can also be reduced using iterative reconstruction [7].

The specific implementations of newly introduced iterative reconstructions on commercial systems vary widely, and are rapidly evolving. General Electric (Milwaukee, WI, USA) has introduced Adaptive Statistical Iterative Reconstruction (ASIR) [8], which blends filtered back projection images with iteratively reconstructed images that model the system noise. The main advantage of this technique is the ability to use a lower radiation dose while maintaining image quality. General Electric has also announced Model-Based Iterative Reconstruction (MBIR), a full iterative reconstruction that models not only the noise statistics, but also the geometry of the machine itself [9]. MBIR is not yet commercially available in the US at the time of this writing, but preliminary experience shows that it has a dramatic effect on image quality in poor contrast-to-noise situations (Fig. 2). The main benefit of iterative reconstructions in modern CT therefore appears to be a potential for significant radiation dose reduction. This has been shown for other CT systems and manufacturers as well: Siemens (Forchheim, Germany) developed Iterative Reconstruction in Image Space (IRIS) [10], Toshiba (Tochigi, Japan) developed Adaptive Iterative Dose Reduction (AIDR), and Philips (Eindhoven, the Netherlands) developed iDose. The visual appearance of images reconstructed with iterative algorithms can differ, often described as blurry or blotchy, which is related to differences in the noise power spectrum compared with FBP. Philips iDose attempts to match the noise power spectrum of standard FBP images, in order to preserve the noise texture that radiologists expect.
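The noise power spectrum itself is straightforward to estimate from a uniform (noise-only) region of interest. The function below is a sketch of the standard single-ROI estimate (in practice many ROIs are averaged); the function name and arguments are illustrative:

```python
import numpy as np

def noise_power_spectrum(roi, pixel_mm):
    """2-D noise power spectrum estimate from one uniform (noise-only) ROI.
    Standard definition: NPS = |FFT(roi - mean)|^2 * pixel area / number of pixels.
    Real measurements average this over many ROIs from a uniform phantom."""
    centered = roi - roi.mean()                        # remove the DC component
    nps = np.abs(np.fft.fftshift(np.fft.fft2(centered))) ** 2
    ny, nx = roi.shape
    return nps * (pixel_mm ** 2) / (nx * ny)           # units: HU^2 * mm^2
```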

Fig. 2

Iterative reconstruction using Adaptive Statistical Iterative Reconstruction (ASIR) and Model-Based Iterative Reconstruction (MBIR), both from General Electric (Milwaukee, WI, USA). Coronal reformation of a non-contrast CT scan inadvertently obtained with 50 mA and reconstructed with filtered back projection (FBP) shows excessive image noise (left upper panel), requiring a repeat scan acquired at 750 mA (right upper panel). Reconstruction of the 50 mA dataset using ASIR shows decreased image noise (left lower panel). Dramatic reduction of image noise is achieved by reconstructing the 50 mA dataset with the full iterative reconstruction (MBIR) (right lower panel), which compares favorably with the 750 mA FBP image (right upper panel) acquired at 15 times the radiation dose

Cardiac CT has old roots as well

Imaging of the heart with computed tomography is challenging due to rapid cardiac motion. Clinical cardiac CT became practical with 16-channel CT systems in 2001, and gained widespread use with 64-channel CT in 2004. The reader may thus be surprised to learn that the two fundamental techniques used for synchronizing CT data acquisition with the patient’s ECG signal were actually invented in the 1970s. Cardiac CT pushes the limits of temporal resolution, tube power, acquisition modes, and reconstruction techniques, and has thus been a driving force for the technical development of CT.

Capturing a beating heart on CT would ideally require a rotation time of 50 ms or less, which is not possible with today’s mechanical (third-generation) CT machines. However, the data can be acquired over multiple cardiac cycles, and correlated with the patient’s ECG signal. This correlation can be done prospectively (only collect data during a single phase of the cardiac cycle), or retrospectively (collect data continuously, and afterwards bin the data based on the cardiac phase during which it was acquired).
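The retrospective approach amounts to a simple binning problem. The following sketch, with purely illustrative numbers, assigns each projection to one of ten cardiac phase bins based on its timestamp and the surrounding R-peaks:

```python
import numpy as np

# Toy retrospective gating: bin each projection by cardiac phase, derived from
# its timestamp relative to the enclosing ECG R-R interval. All numbers are toy data.
r_peaks = np.array([0.00, 0.95, 1.92, 2.88])     # R-peak times (s) from the ECG
proj_times = np.linspace(0.0, 2.8, 281)          # continuous acquisition timestamps (s)

beat = np.searchsorted(r_peaks, proj_times, side='right') - 1
rr = r_peaks[beat + 1] - r_peaks[beat]           # R-R interval containing each projection
phase = (proj_times - r_peaks[beat]) / rr        # cardiac phase in [0, 1)
phase_bin = (phase * 10).astype(int)             # ten 10%-wide phase bins

# Projections falling in one bin (e.g. 70-80% of the R-R interval, mid-diastole)
# would then be pooled across heartbeats to reconstruct one near-motion-free image.
diastolic_projections = proj_times[phase_bin == 7]
```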

Prospective triggering was first presented in 1977 by Sagel et al. [11], who used an early translation-rotation EMI 5000 scanner. Hounsfield himself was one of the coauthors of this paper. In order to acquire a single slice through the heart, the X-ray tube was turned on only at certain times during the cardiac cycle. The authors were able to reconstruct a remarkably motion-artifact-free image through the heart (Fig. 3). Of note, only a single image could be acquired with this technique, which required an injection of 300 ml of ionic contrast medium over 7 min.

Fig. 3

First example of cardiac CT using prospectively ECG-triggered acquisition, on a translation-rotation EMI CT system, in 1977. Cardiac motion is suppressed in this image, which took 7 min to acquire. RA—right atrium, arrow indicates left atrioventricular groove. (Reprinted with permission from Sagel et al. [11])

Also in 1977, retrospective gating was developed by Harell et al. [12]. They oversampled the heart at a single table position, and then reconstructed a movie containing seven images from different phases of the cardiac cycle. Each image was reconstructed using raw data retrospectively selected from a specific phase of the cardiac cycle (based on the ECG signal). Retrospective gating was also used in the first cardiac CT examinations with spiral acquisition, in which oversampling was achieved by using a very low pitch (0.2–0.3). The main disadvantage of this approach is that oversampling is associated with a very high radiation dose [13], although the dose can be reduced with efficient tube current modulation. Prospective ECG triggering does not require the acquisition of redundant data and thus allows cardiac CT with substantially less radiation dose, compared to retrospective ECG-gated CT without tube current modulation [14].

Wider detector banks can cover the cranio-caudal extent of the heart with fewer step-and-shoot (non-helical) acquisitions, which reduces step-off artifacts between adjacent step-and-shoot acquisitions and shortens the acquisition time. For example, to cover 12–16 cm of the heart with a 4 cm (64 × 0.625 mm) detector bank, 3–4 step-and-shoot acquisitions are needed. The corresponding data acquisition time would be approximately 5 s for 12 cm, or 7 s for 16 cm, assuming a heart rate of 60 bpm and accounting for the fact that every other heartbeat is used to move the table to the next position (see the sketch below). Larger detector banks, such as 8 cm (Philips Brilliance iCT), allow even faster acquisitions, and a 16 cm detector bank (Toshiba Aquilion ONE) allows sub-second examination of the heart in a single gantry rotation.
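These acquisition times follow from a simple count of heartbeats. The sketch below, a deliberate simplification that ignores detector overlap between steps, reproduces the numbers quoted above:

```python
import math

def step_and_shoot_seconds(coverage_cm, detector_cm=4.0, heart_rate_bpm=60):
    """Rough step-and-shoot acquisition time: one heartbeat per acquisition,
    one heartbeat to move the table between steps. Illustrative simplification."""
    steps = math.ceil(coverage_cm / detector_cm)
    beats = 2 * steps - 1                   # acquire / move / acquire / ... / acquire
    return beats * 60.0 / heart_rate_bpm

print(step_and_shoot_seconds(12))   # 5.0 s with three steps
print(step_and_shoot_seconds(16))   # 7.0 s with four steps
```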

While wider detector banks thus reduce the acquisition time, the width of the detector bank does not affect temporal resolution. Temporal resolution depends on rotation speed and the number of X-ray sources (assuming that information is collected during a single cardiac cycle). For the typical third generation architecture with a single X-ray tube and a single detector array positioned on a gantry and rotated jointly around the patient, the temporal resolution is approximately half of the gantry rotation time [15]. Modern single-tube systems have gantry rotation times of approximately 300 ms (270–350 ms), and a temporal resolution of approximately 150 ms. Temporal resolution can be improved using reconstruction techniques that work with limited projection data [16, 17].
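As a rule of thumb, this can be expressed in one line; the same formula, with the number of sources as a parameter, anticipates the dual-source systems discussed below (real scanners quote slightly different values once fan-angle effects are included):

```python
def temporal_resolution_ms(rotation_s, n_sources=1):
    """Rule-of-thumb single-cycle temporal resolution: half a rotation of data
    is needed, shared across the X-ray sources. Simplified; ignores fan angle."""
    return 1000 * rotation_s / (2 * n_sources)

print(temporal_resolution_ms(0.30))                # ~150 ms, single source
print(temporal_resolution_ms(0.28, n_sources=2))   # ~70 ms; quoted as 75 ms in practice
```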

Temporal resolution can also be improved using multiple X-ray tubes. Dual-source CT systems [18] use two separate X-ray tubes and two separate detector arrays. Again, the idea of using more than one X-ray source to improve temporal resolution was conceived in the 1970s. An interesting example is a 1979 Siemens patent by Franke [19], shown in Fig. 4. This system had three X-ray sources arranged opposite three detector banks; instead of a full 360° rotation, the system only needed to rotate 120° to collect a full 360° raw data set. This triple-source CT system was never built commercially, although a triple-source CT does have theoretically favorable properties for direct cone-beam reconstruction [20]. A more radical implementation of a multiple X-ray source system was the Dynamic Spatial Reconstructor (DSR), which was in fact built at the Mayo Clinic in Rochester in 1980. The DSR had 14 X-ray tubes and 14 television cameras, and weighed 13 tons (Fig. 5). With a gantry rotation time of 4 s, it could acquire 240 contiguous 0.9 mm slices in only 17 ms, and could sustain scanning for over 20 s [21].

Fig. 4

An early CT system with multiple X-ray sources, 1979. Three X-ray sources were arranged opposing three banks of detectors. The system rotated only 120° to collect full (360°) projection data. (Reprinted from [19] with permission from Siemens AG)

Fig. 5

An extreme example of multiple X-ray sources, the Dynamic Spatial Reconstructor (DSR), Mayo Clinic, 1980. This CT system had 14 X-ray tubes. (From Ritman et al. [21], (1980) Science, 210(4467):273–280. Reprinted with permission from AAAS)

Dual-source systems are now a clinical reality, and first-generation [18] as well as the latest second-generation dual-source CT systems [22] have been a clinical and commercial success (Fig. 6). The temporal resolution of the latest dual-source CT (Siemens SOMATOM Definition Flash) is 75 ms (approximately one quarter of the gantry rotation time of 0.28 s). The geometry of the two separate X-ray tube and detector bank systems arranged 94° apart allows helical data acquisition at a pitch of up to 3.2 (depending on the field of view). This high pitch allows for very rapid helical acquisition at a low radiation dose (Fig. 7).

Fig. 6

Dual-source coronary CT angiogram on a second generation dual-source scanner. Volume rendered and curved planar reformations, showing mild non-calcified coronary plaque in the proximal LAD, and a significant stenosis in the circumflex coronary artery. (Images courtesy of Dr. Stephan Achenbach, University of Erlangen, Germany)

Fig. 7

The first generation dual-source CT system (Siemens SOMATOM Definition) has a 50 cm field of view for detector A, and a smaller (26 cm) field of view for detector B, due to space limitations (black and white drawing). The second generation dual-source CT system (Siemens SOMATOM Definition Flash) increases the field of view for detector B to 33 cm (shown in yellow), which requires increasing the angle between the X-ray sources and detectors from 90° to 94° (shown in red). (Figure modified from [18])

As an aside, it is worth noting that mounting two powerful X-ray tubes on a single gantry is only possible because the X-ray tubes used in dual-source CT machines are relatively small, and much lighter than traditional X-ray tubes with large anodes rotating in a vacuum. Siemens built a modern version of a so-called ‘rotating-vacuum-vessel X-ray tube,’ in which the anode is in direct contact with cooling oil, resulting in a far greater heat capacity despite a smaller anode [23]. The concept of this innovative tube goes back to a General Electric patent by Coolidge [24] in 1917, thus predating the era of CT. It took several decades and the development of modern electronics to actually manufacture such a tube, with the ability to precisely steer the electron beam electromagnetically to its focal spot(s) on the anode.

Dual-energy CT is as old as CT itself

While dual-energy CT and spectral imaging to discriminate different materials are very recent developments on commercial CT systems, the basic principles are not new. The following quotation by G. Hounsfield is from one of the first scientific publications on computed tomography, in 1973: “Two pictures are taken of the same slice, one at 100 kV and the other at 140 kV so that areas of high atomic numbers can be enhanced. Tests carried out to date have shown that iodine (z = 53) can be readily differentiated from calcium (z = 20)” [2]. Soon thereafter, Alvarez and Macovski proposed [25] performing separate reconstructions for photoelectric absorption (which dominates at low energies and depends strongly on atomic number) and Compton scatter (which dominates at higher energies and depends on electron density, which is correlated with mass density).

The basic physical principle is that the linear attenuation coefficients of CT are a function of the X-ray energy, and this function is different for different materials and tissues. This is analogous to color vision. The human retina has photoreceptors that are sensitive to red, green, and blue photons, which allows us to distinguish up to three pigments that are mixed together. If we had only two types of color photoreceptors, we could only distinguish two pigments mixed together. This is the situation with dual-energy CT, where the object being scanned is examined using two different “colors” (energy spectra) of X-ray photons, which allows us to describe each pixel as a blend of two materials (Fig. 8). The basis materials can be chosen arbitrarily, as long as they have sufficiently different absorption spectra, such as water and iodine. Typically, the high-energy spectrum is acquired at a tube potential of 140 kVp and the low-energy spectrum at 80 or 100 kVp.

Fig. 8

Dual-energy CT identifies materials based on their attenuation of X-rays at two different energies. Unknown materials can be expressed as a linear combination of two materials with known attenuations (A and B)
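For a single voxel, the decomposition of Fig. 8 reduces to solving a 2 × 2 linear system. The sketch below uses made-up attenuation values purely for illustration:

```python
import numpy as np

# Toy two-material decomposition of one voxel: its attenuation at the low and
# high tube potentials is modeled as a linear mix of two basis materials.
# All attenuation values below are made-up illustrative numbers, not measured data.
#                water  iodine
M = np.array([[0.25,   6.0],       # effective attenuation at 80 kVp
              [0.18,   3.0]])      # effective attenuation at 140 kVp
measured = np.array([0.31, 0.21])  # this voxel's attenuation at 80 / 140 kVp

fractions = np.linalg.solve(M, measured)
print(fractions)                   # [1.0, 0.01]: mostly water plus a trace of iodine
```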

If a voxel contains a material other than the two basis materials, it will be described as a combination of the basis materials. For example, in human color vision, monochromatic yellow light appears similar to a mixture of red and green light, because yellow light stimulates both red and green photoreceptors. Similarly, if the basis materials for dual-energy CT are water and iodine, then each voxel is interpreted as containing a mixture of water and iodine, and other materials are mapped onto these two basis materials. For example, in a so-called iodine image, bone also appears bright.

In contrast, energy-sensitive photon counting CT can discriminate more than two different materials. An energy-sensitive photon counting X-ray detector measures the full X-ray energy spectrum, which is compared with the X-ray spectrum of the source to determine the absorption spectrum. The first clinical images using this technique were generated in 2009 [26], with an energy resolution of 9.8 keV; commercial systems are not yet available. However, the mAs of the scan had to be reduced by 90% in order to allow the photon counting detector array sufficient time to determine the energy of each photon.
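Conceptually, the photon counting readout replaces charge integration with an energy histogram. A toy sketch (with a uniform stand-in spectrum, not a physical one):

```python
import numpy as np

# Toy photon-counting readout: each detected photon's deposited energy is
# histogrammed into energy bins, instead of integrating total charge as in a
# conventional detector. The spectrum below is a uniform stand-in, not physical.
rng = np.random.default_rng(1)
photon_energies_keV = rng.uniform(20, 140, size=10_000)
bin_edges = np.arange(20, 150, 10)              # ~10 keV bins, cf. the 9.8 keV resolution
counts, _ = np.histogram(photon_energies_keV, bins=bin_edges)
```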

Commercially available clinical dual-energy CT systems have only recently been introduced [18, 22, 27]. The two X-ray energy spectra can be produced by two different X-ray tubes in a dual-source system (Siemens Definition and Definition Flash) [18, 22], or by fast kVp switching on a single X-ray tube (General Electric 750HD) [27]. Dual-energy systems can be compared based on the energy separation and the alignment between the two sets of measurements.

The dual-source solution allows for the best energy separation, since different filters can be used for the respective X-ray tubes. Another advantage of the dual-source solution is that it can be used together with automated tube current modulation, and no restrictions on gantry rotation time or pitch are necessary. The dual-source system produces two datasets, and the basis material decomposition is performed in image space rather than in the raw data domain, because the raw data are not perfectly registered (they are acquired at different times, due to the angular offset of the tube/detector systems).

The fast kVp switching method has the advantage of allowing the basis material decomposition to be performed in the raw data domain, because the high- and low-energy projections are more accurately registered. The main benefit is a more accurate beam hardening correction, which becomes clinically apparent mainly as decreased artifacts in the posterior fossa on head CT. The disadvantages of fast kVp switching are that the energy separation is currently not as good as with two separate tubes and different filters, automated tube current modulation cannot currently be used, and there are some restrictions on the selection of pitch and table speed.

Several clinical applications for dual-energy imaging (using both dual-source and fast kVp switching) are emerging, including virtual unenhanced imaging, differentiating urinary stones, imaging of gout, and others [28–32].

Of course, dual-energy is not a panacea for all CT problems. While it is relatively easy to separate iodine in large, well opacified vessels from dense calcium in cortical bone (a task that can be accomplished even without dual-energy by modern image postprocessing), this is no longer the case when the contrast opacification is weak, or the calcium content of a voxel is small (causing a partial volume effect). This is a problem when imaging small vessels with calcified atherosclerotic plaque, such as coronary atherosclerosis and below-knee arterial occlusive disease. In these cases, both humans and traditional image segmentation algorithms have difficulty separating iodine and calcium, but that is also the situation where dual-energy based segmentation is least reliable. Additional image post processing methods may need to be developed in order to clearly depict the vascular flow lumen [33].

Conclusion

CT is a rapidly developing technology, and many advances are on the horizon. As illustrated in this brief review, several innovative ideas conceived in the early days of this technology could not be realized until several decades later. Further progress in computational power, engineering capabilities, and new tube designs and detector materials may again overcome current limitations. This may lead to practical multi-source designs, energy-sensitive photon counting CT, phase contrast CT [20], or other yet unpredictable developments.