Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks

Abstract

Background

Cardiovascular magnetic resonance (CMR) myocardial native T1 mapping allows assessment of interstitial diffuse fibrosis. In this technique, the global and regional T1 are measured manually by drawing regions of interest in motion-corrected T1 maps. The manual analysis contributes to an already lengthy CMR analysis workflow and impacts measurement reproducibility. In this study, we propose an automated method for combined myocardium segmentation, alignment, and T1 calculation for myocardial T1 mapping.

Methods

A deep fully convolutional neural network (FCN) was used for myocardium segmentation in T1 weighted images. The segmented myocardium was then resampled on a polar grid whose origin is located at the center-of-mass of the segmented myocardium. Myocardium T1 maps were reconstructed from the resampled T1 weighted images using curve fitting. The FCN was trained and tested using manually segmented images from 210 patients (5 slices, 11 inversion times per patient). An additional image dataset of 455 patients (5 slices and 11 inversion times per patient), analyzed by an expert reader using a semi-automatic tool, was used to validate the automatically calculated global and regional T1 values. Bland-Altman analysis, the Pearson correlation coefficient, r, and the Dice similarity coefficient (DSC) were used to evaluate the performance of the FCN-based analysis on a per-patient and per-slice basis. Inter-observer variability was assessed using the intraclass correlation coefficient (ICC) of the T1 values calculated by the FCN-based automatic method and two readers.

Results

The FCN achieved fast segmentation (< 0.3 s/image) with high DSC (0.85 ± 0.07). The automatically and manually calculated T1 values (1091 ± 59 ms and 1089 ± 59 ms, respectively) were highly correlated in per-patient (r = 0.82; slope = 1.01; p < 0.0001) and per-slice (r = 0.72; slope = 1.01; p < 0.0001) analyses. Bland-Altman analysis showed good agreement between the automated and manual measurements, with 95% of measurements within the limits of agreement in both per-patient and per-slice analyses. The intraclass correlation of the T1 calculations by the automatic method vs. reader 1 and reader 2 was 0.86/0.56 and 0.74/0.49, respectively, in the per-patient/per-slice analyses, which was comparable to that between the two expert readers (0.72/0.58 in per-patient/per-slice analyses).

Conclusion

The proposed FCN-based image processing platform allows fast and automatic analysis of myocardial native T1 mapping images, mitigating the burden and observer-related variability of manual analysis.

Introduction

Cardiovascular magnetic resonance (CMR) myocardial native T1 mapping [1,2,3,4,5] enables quantification of interstitial diffuse fibrosis [6] and has been increasingly used in the diagnosis and prognosis of different cardiomyopathies [7, 8]. In myocardial T1 mapping, a set of T1 weighted images is acquired by changing the time between the preparation pulse and image acquisition [1,2,3,4,5] to generate different T1 weightings. The T1 value at each voxel is then estimated by fitting an exponential relaxation curve to the voxel intensities of the different T1 weighted images [4, 9]. This necessitates that voxels align perfectly across the different images to avoid estimation errors and to increase reproducibility [4]. Both respiratory and cardiac motion can cause artifacts in T1 maps and should be addressed during acquisition or post-processing. To minimize the impact of cardiac motion, T1 mapping is acquired during the systolic or diastolic quiescent period within a short acquisition window [2, 10]. For respiratory motion, both breath-holding [1, 2] and free-breathing with slice tracking have been used [5, 11]. However, both techniques still require post-processing motion correction [12,13,14]. Numerous semi-automatic techniques are available to compensate for this respiratory motion [12,13,14]; however, these methods are not effective in all patients [15].

T1 mapping analysis requires manual segmentation of T1 maps from different slices [14]. Endocardial and epicardial contours are drawn manually on the maps to delineate the myocardium. Regional T1 values (e.g. septal T1) can also be measured by drawing a region of interest (ROI) in the desired area. However, an experienced reader is often needed for reproducible measurements [6, 16]. Despite the availability of semi-automatic and automatic techniques for cardiac cine [17,18,19] and flow [20, 21] imaging, there is no software for automatic analysis of myocardial tissue characterization images. Therefore, there is a need to automate the analysis of myocardial tissue characterization sequences such as T1 mapping.

Recent advances in deep learning, namely convolutional neural networks, have shown potential for fully automated segmentation of the left [18, 19, 22,23,24] and right [24, 25] ventricles in cine imaging and of myocardial scarring in late gadolinium enhancement imaging [26]. Deep convolutional neural networks comprise several layers of linear and nonlinear operations with millions of functional parameters [27, 28]. The large number of network parameters allows representation of objects with diverse appearance and shape patterns. Deep learning based myocardium segmentation pipelines usually employ a single convolutional neural network architecture [24, 26]. However, a cascade of different neural network architectures has also been used to achieve different tasks such as locating the heart within the imaging field of view and extracting the myocardium boundaries [22, 25]. Also, combining classical image processing methods (e.g. level sets and deformable models) with deep learning has been proposed to refine the segmentation results [19, 23].

In this study, we propose to develop and evaluate a fully automated analysis platform for myocardial T1 mapping using fully convolutional neural networks (FCN) [27, 28]. The proposed method automates the analysis of short-axis T1 weighted images to estimate the myocardium T1 values. The performance of the proposed approach was evaluated against manual T1 calculation.

Methods

The proposed workflow for automated T1 map analysis is summarized in Fig. 1. Briefly, the first step includes FCN-based myocardium segmentation, with additional automatic evaluation and refinement of the segmented myocardium shapes. The second step includes transformation of the segmented myocardium within the different T1 weighted images onto a polar coordinate system, which implicitly aligns the segmented myocardium regions. The myocardium T1 maps are then estimated (in the polar coordinate system) and transformed to Cartesian coordinates for conventional map visualization. Validation of the proposed method was accomplished by comparing the automatically calculated myocardium T1 values to a current state-of-the-art semi-automatic T1 mapping technique [13]. Manual analysis by two independent readers was used to assess the inter-observer variability. Both per-slice and per-patient analyses were performed for all validation experiments. The following subsections describe these steps in detail.

Fig. 1

Pipeline for myocardium T1 map reconstruction. The myocardium in an input T1 weighted (T1w) image is first segmented using a fully convolutional neural network (FCN). The segmented myocardium is refined if needed (see text for details) and transformed into polar coordinates. All T1w images at a given slice are used to estimate the myocardium T1 map, which is displayed after applying inverse polar transformation

Myocardium segmentation

Fully convolutional neural networks

A deep FCN based on the U-Net architecture [29] was used for myocardium segmentation (Fig. 2). An FCN is a special class of neural networks in which all layers are based on convolutional sub-layers [29]. The FCN input is a two-dimensional 256 × 256 T1 weighted image, Ik,s(x,y), acquired at slice s (= 1 to 5) and inversion time TIk (with k = 1 to 11), and the output is a binary image, Bk,s(x,y), of the same size with pixels labeled as myocardium or background. Our network comprised 149 processing layers with a total of approximately 9 million kernels. The basic structural unit in U-Net, referred to as a bottleneck (Fig. 2b), contains three functional layers: (1) batch normalization, which accelerates network training [30]; (2) a rectified linear unit (ReLU), which introduces the nonlinearities required to model the complex operations involved in image segmentation; and (3) spatial convolution with a set of n kernels of size s × s × w, where the values of s and w are as indicated in Fig. 2. The weights of the convolutional kernels are the FCN parameters that are estimated during the training process. Spatial down-sampling (or up-sampling) operations are also applied during convolution and are combined with doubling (or halving) the number of kernels, n. To prevent overfitting, a dropout layer is used in each bottleneck to randomly (with 50% probability) pass or block the processed data [31]. A cross-entropy loss function was used to represent the network error, and an Adam optimizer was used to estimate the network parameters [32]. A weight decay of 0.001 was used for regularization [33]. The final stage of the FCN is a prediction block that generates two probability maps representing the likelihood of each pixel belonging to the background or the myocardium. A softmax layer is then used to produce a binary image with pixels assigned 1 or 0 for myocardium or background, respectively.
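
The sketch below illustrates the kind of encoder-decoder FCN described above, written in TensorFlow/Keras (the framework named later in Implementation). It is a minimal illustrative model, not the authors' 149-layer network: the depth, filter counts, and kernel sizes are assumptions chosen only to show the U-Net pattern of doubling/halving kernels, dropout, batch normalization, ReLU, and a two-class softmax prediction block trained with cross-entropy and Adam.

# Minimal U-Net-style FCN for 2-class (myocardium vs background) segmentation
# of 256 x 256 T1-weighted images. Illustrative sketch only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, n_filters):
    # "Bottleneck"-style unit: Conv -> BatchNorm -> ReLU, applied twice.
    for _ in range(2):
        x = layers.Conv2D(n_filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_unet(input_shape=(256, 256, 1), base_filters=16, depth=4):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Contracting path: double the number of kernels at each down-sampling step.
    for d in range(depth):
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.5)(conv_block(x, base_filters * 2 ** depth))
    # Expansion path: halve the number of kernels at each up-sampling step.
    for d in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** d, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    # Prediction block: per-pixel probabilities for background / myocardium.
    outputs = layers.Conv2D(2, 1, activation="softmax")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy")
    return model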

Fig. 2

The fully convolutional neural network architecture (a) comprises a number of building blocks, referred to as bottlenecks (b). An input 256 × 256 image undergoes a series of convolutions (Conv), nonlinear rectifications (ReLU), and batch normalizations (Norm). Down-sampling (↓) and up-sampling (↑) of the processed images are applied in the contracting and expansion paths, respectively. The l, k, m, and n values in (b) are determined by the image size and number of channels at the input and output of each bottleneck as shown in (a)

Post processing and automated segmentation assessment

The binary image resulting from the FCN segmentation was enhanced through a set of post-processing operators. First, an area filter was applied to remove all segmented objects with an area less than 5 cm2, keeping only the largest segmented object. The segmented myocardium was then automatically assessed for potential shape errors, e.g. absence of an annular shape. A proper myocardium shape was quantified by two geometric parameters: Euler number and eccentricity. The Euler number (the number of connected objects minus the number of holes) equals zero for the typical annular shape of the myocardium in short-axis slices. The eccentricity represents the deviation from a perfect circle (= zero for a circle and = 1 for a line segment). The training dataset was analyzed to determine the typical range of myocardium eccentricity, and the maximum was 0.65. If the segmented myocardium had a Euler number ≠ 0 or an eccentricity > 0.65, it was marked as an improper segmentation and became eligible for automatic shape refinement.
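
A minimal sketch of this post-processing step is given below using scikit-image region properties. The 5 cm2 area threshold and 0.65 eccentricity limit are the values stated above; the pixel_area_cm2 conversion factor, the function name, and the use of regionprops are illustrative assumptions, not the authors' implementation.

# Area filtering plus Euler-number / eccentricity shape check for an FCN mask.
import numpy as np
from skimage import measure

def check_myocardium_shape(mask, pixel_area_cm2, min_area_cm2=5.0,
                           max_eccentricity=0.65):
    """Return (clean_mask, is_valid) for a binary FCN segmentation."""
    # Area filter: drop small objects, keep only the largest connected component.
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels)
               if r.area * pixel_area_cm2 >= min_area_cm2]
    if not regions:
        return np.zeros_like(mask), False
    largest = max(regions, key=lambda r: r.area)
    clean = (labels == largest.label)

    # Shape check: an annulus has Euler number 0 (one object minus one hole)
    # and should not be overly elongated.
    props = measure.regionprops(clean.astype(int))[0]
    is_valid = (props.euler_number == 0) and (props.eccentricity <= max_eccentricity)
    return clean, is_valid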

Automatic segmentation refinement

Given an image, Ik,s(x,y), and its segmentation, Bk,s(x,y), that was identified as having a segmentation shape error, the binary image, B∞,s(x,y), resulting from segmenting the image with the longest inversion time was used to refine Bk,s(x,y). The image with the longest inversion time was chosen for its high myocardium-to-blood contrast, which leads to increased segmentation reliability. The refinement was done by applying an affine transformation (translation, rotation, and scaling) to the binary image B∞,s(x,y) to obtain a refined binary image B̃k,s(x,y) with maximum overlap with Bk,s(x,y). It is worth noting that if B∞,s(x,y) was itself found to have a segmentation error, no refinement was done and the image Bk,s(x,y) was excluded from analysis. An image segmentation was considered successful if a myocardium with a valid shape was produced (whether or not automatic refinement was applied).
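
The following is one possible sketch of this refinement, assuming a coarse brute-force search over translation, rotation, and scale that maximizes overlap between the transformed reference mask and the improper mask. The search ranges, step sizes, and the overlap objective are illustrative choices; the authors' optimizer is not specified in the text.

# Affine (translation, rotation, scaling) alignment of the longest-TI mask
# (b_inf) to an improperly segmented mask (b_k), maximizing their overlap.
import numpy as np
from scipy import ndimage, optimize

def _transform_mask(mask, tx, ty, theta, scale):
    # Rotation/scaling about the image center plus a (row, col) translation.
    c, s = np.cos(theta), np.sin(theta)
    matrix = np.array([[c, -s], [s, c]]) / scale          # output -> input mapping
    center = (np.array(mask.shape, dtype=float) - 1.0) / 2.0
    offset = center - matrix @ (center + np.array([ty, tx]))
    moved = ndimage.affine_transform(mask.astype(float), matrix, offset=offset, order=0)
    return moved > 0.5

def refine_mask(b_inf, b_k):
    def neg_overlap(p):
        return -np.logical_and(_transform_mask(b_inf, *p), b_k).sum()
    # Coarse grid search: translation in pixels, rotation in radians, scale factor.
    best = optimize.brute(neg_overlap,
                          ranges=(slice(-10, 11, 2), slice(-10, 11, 2),
                                  slice(-0.3, 0.31, 0.1), slice(0.8, 1.21, 0.1)),
                          finish=None)
    return _transform_mask(b_inf, *best)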

T1 map reconstruction and analysis

To align the myocardium regions in the different T1 weighted images, the segmented myocardium in a given image, Ik,s(x,y), was transformed to polar coordinates on a uniform grid (Additional file 1: Figure S1). The origin of the polar coordinates was located at the center-of-mass of the segmented myocardium. The transformation was achieved by sampling the myocardium intensities along 360 radial rays, with an angular spacing of 1°, from the origin to the epicardium. A number of C intensity values were sampled between the endocardium and the epicardium along each ray. The result was a rectangular image, Pk,s(m,n), of size C × 360, which represents a temporary image in polar coordinates, in which the T1 map is generated and then inverse transformed to Cartesian coordinates. To avoid loss of data during transformation, i.e. to avoid a many-to-one transformation, C was arbitrarily fixed to a value (= 20) larger than the maximum myocardium thickness found in the training set (= 15 pixels). The set of all transformed images at a given slice location, Pk,s(m,n) for all inversion times k (= 1 to 11), was then used to estimate the myocardium T1 map, MAPs(m,n), at the given slice. This was achieved by performing pixel-wise curve fitting of a 2-parameter model to the myocardium intensities, Pk,s(m,n), for all k values [5]. T1 map reconstruction was done only for slices with at least 8 successfully segmented T1 weighted images. Finally, the resulting 20 × 360 T1 map was inverse transformed to Cartesian coordinates. While any of the T1 weighted images could be used as a reference for the inverse polar transformation, the image with the shortest inversion time was used to match the reference of the semi-automatic method. The inverse polar transformation is accomplished by determining the polar coordinates of each myocardium point on the Cartesian grid of the reference image and estimating its T1 value using bilinear interpolation of the reconstructed polar T1 map.
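
A simplified sketch of the polar resampling and pixel-wise fitting is given below. Here the endocardial and epicardial radii along each ray are taken as the first and last mask pixels hit by the ray, and the 2-parameter signal model is assumed to be a magnitude inversion-recovery curve |A(1 - 2 exp(-TI/T1))|; the text cites reference [5] for the model without stating its form, so both simplifications are assumptions of this sketch.

# Polar resampling of the segmented myocardium and pixel-wise 2-parameter T1 fit.
import numpy as np
from scipy import ndimage, optimize

def polar_resample(image, mask, n_angles=360, n_radial=20):
    """Resample the masked myocardium onto an (n_radial x n_angles) polar grid."""
    cy, cx = ndimage.center_of_mass(mask)
    polar = np.zeros((n_radial, n_angles))
    r_fine = np.linspace(0.0, float(max(image.shape)), 1024)
    for a in range(n_angles):
        theta = np.deg2rad(a)
        ys, xs = cy + r_fine * np.sin(theta), cx + r_fine * np.cos(theta)
        inside = (ys >= 0) & (ys < image.shape[0] - 1) & (xs >= 0) & (xs < image.shape[1] - 1)
        # Nearest-neighbour sample of the mask along the ray to locate the borders.
        hits = np.nonzero(ndimage.map_coordinates(mask.astype(float),
                                                  [ys[inside], xs[inside]], order=0) > 0)[0]
        if hits.size == 0:
            continue
        r_endo, r_epi = r_fine[inside][hits[0]], r_fine[inside][hits[-1]]
        rr = np.linspace(r_endo, r_epi, n_radial)   # C samples, endo -> epi
        polar[:, a] = ndimage.map_coordinates(
            image.astype(float),
            [cy + rr * np.sin(theta), cx + rr * np.cos(theta)], order=1)
    return polar

def fit_t1(polar_stack, inversion_times):
    """Pixel-wise fit of |A(1 - 2 exp(-TI/T1))| over a (n_TI, n_radial, n_angles) stack."""
    n_ti, n_r, n_a = polar_stack.shape
    t1_map = np.zeros((n_r, n_a))
    model = lambda ti, a, t1: np.abs(a * (1.0 - 2.0 * np.exp(-ti / t1)))
    ti = np.asarray(inversion_times, dtype=float)
    for i in range(n_r):
        for j in range(n_a):
            y = polar_stack[:, i, j]
            if not np.any(y):
                continue                      # ray missed the myocardium here
            try:
                popt, _ = optimize.curve_fit(model, ti, y, p0=[y.max(), 1000.0], maxfev=2000)
                t1_map[i, j] = popt[1]
            except RuntimeError:
                pass                          # fit did not converge; left as 0
    return t1_map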

The pixels at the sub-endocardial and sub-epicardial borders were excluded from the automatic measurements to mimic the manual analysis. This was accomplished by automatically pruning the segmented myocardium: the myocardium skeleton (i.e., the central contour of 1-pixel width) [34] was extracted and dilated (using an image morphological operator) to one-third of the mean wall thickness of the segmented myocardium. This thickness was chosen arbitrarily and can be changed manually.
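
A minimal sketch of this pruning step, assuming scikit-image morphology, is shown below. Estimating the mean wall thickness from the distance transform along the skeleton is an assumption of this sketch; the text only specifies that the skeleton is dilated to one-third of the mean wall thickness.

# Skeleton-based pruning: keep a band around the myocardial centerline whose
# total width is one-third of the mean wall thickness.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize, binary_dilation, disk

def prune_myocardium(mask):
    skeleton = skeletonize(mask)
    # Mean wall thickness estimated as twice the mean centerline-to-border distance.
    dist = ndimage.distance_transform_edt(mask)
    mean_thickness = 2.0 * dist[skeleton].mean()
    radius = max(1, int(round(mean_thickness / 3.0 / 2.0)))  # band width = thickness/3
    band = binary_dilation(skeleton, disk(radius))
    return np.logical_and(band, mask)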

The global and regional myocardium T1 values were calculated by averaging the T1 values in the reconstructed maps over all 5 slices and over each slice, respectively. Any pixel with a T1 value outside the acceptance range for native T1 at 1.5 T (i.e. 850 ms to 1500 ms) was excluded from the average T1 calculations.

Image acquisition

We prospectively recruited 665 consecutive patients (526 male; age 56 ± 15 years) with known or suspected cardiovascular disease referred for a clinical CMR exam between 2014 and 2017. All patients provided consent at the time of examination for use of their imaging data in research; the imaging protocol was approved by the Institutional Review Board. Patient data were handled in compliance with the Health Insurance Portability and Accountability Act. Imaging was performed on a 1.5 T Philips Achieva system (Philips Healthcare, Best, The Netherlands) with a 32-channel cardiac coil. The imaging protocol included a free-breathing, respiratory-navigated, slice-interleaved T1 (STONE) sequence [5] with the following parameters: TR/TE = 2.7/1.37 ms, FOV = 360 × 351 mm2, acquisition matrix = 172 × 166, voxel size = 2.1 × 2.1 mm2, linear ordering, SENSE factor = 1.5, slice thickness = 8 mm, bandwidth = 1845 Hz/pixel, diastolic imaging, and flip angle = 70°. Each patient imaging set comprised 55 images representing a stack of five short-axis slices covering the left ventricle (LV) from base to apex. At each slice location, eleven T1 weighted images were acquired at eleven different inversion times, TI (= ∞, 115 ms, 115 ms + RR, 115 ms + 2 RR, …, 115 ms + 4 RR, 350 ms, 350 ms + RR, …, 350 ms + 4 RR, where RR is the duration of the cardiac cycle) [5]. The matrix size of all images was unified to 256 × 256.

The image dataset was split into two subsets: 1) FCN training and testing and 2) validation of the T1 calculations. The first subset contained 210 patients (134 male; 57 ± 14 years; total of 11,550 T1 weighted images) and was used to train and test the proposed FCN. The LV myocardium in each image was manually segmented (HE, 4 years of experience in medical image analysis); the resulting binary image was used as the segmentation reference standard. This dataset was then split at random into training and testing subsets containing 63 patients (total of 3465 images) and 147 patients (total of 8085 images), respectively.

The second image subset contained 455 patients (392 male; 56 ± 15 years) and was used to assess the agreement between T1 values computed by the automated versus the manual analysis. An experienced reader (SN, with 8 years of CMR experience) used an in-house T1 map reconstruction tool to estimate T1 values for each myocardium slice [13]. First, for each slice, the reader manually delineated the endocardium on a reference T1 weighted image. Then, intensity-based similarity metrics were used to estimate the global LV motion of the T1 weighted images relative to the reference T1 weighted image. A regularized optical flow based algorithm was then used to refine the registration of the T1 weighted images to the reference [13]. The resulting T1 maps were then manually processed to select an ROI within the myocardium that excluded all areas suspected of imaging or mapping artifacts. To assess the inter-observer variability, a subset of 40 patients (24 male; age 56 ± 11.7 years) was selected at random and manually processed by a second reader (MN, with 5 years of CMR experience) to reconstruct and analyze the T1 maps as described above.

Implementation and evaluation

Network training was performed for 48 h (number of iterations = 6700) using the manually annotated dataset described above. The intensity dynamic range of each image was normalized by subtracting the mean and dividing by the standard deviation (SD) of the image pixel intensities. A transfer learning approach was employed to speed up training and to mitigate the requirement of large training datasets [35, 36]. That is, instead of random initialization of the network parameters, we re-used the optimal parameter values of a previously trained FCN. The re-used FCN had the same architecture as the current network and was trained (using 6305 images from 831 patients) to segment the myocardium in late gadolinium enhancement CMR images [26]. Image augmentation was also used to reduce overfitting [37], where each training image pair (a T1 weighted image and its corresponding manually segmented image) was used to synthesize a number of training image pairs. Several methods of image augmentation have been presented in the literature, where geometric and/or intensity transformations are used to synthesize the training images [38]. In our network, no intensity transformation was used for image augmentation because of the naturally high dynamic range of the image intensities and contrast in the T1 weighted images. Geometric image transformation was applied through random translation, mirroring, and elastic deformation of the training images with probabilities of 0.95, 0.95, and 0.5, respectively. The FCN segmentation error was measured by a cross-entropy loss function (between the FCN output and the manual segmentation). For network parameter estimation, the loss function was optimized using the Adam method with a learning rate of 0.001 and an exponential decay rate [32].
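
A sketch of the geometric augmentation described above is shown below, applying random translation, mirroring, and elastic deformation jointly to an image and its label mask. The probabilities follow the text; the maximum shift and the elastic-deformation parameters (alpha, sigma) are illustrative assumptions.

# Joint geometric augmentation of a T1-weighted image and its segmentation mask.
import numpy as np
from scipy import ndimage

def augment(image, mask, rng=None, p_translate=0.95, p_mirror=0.95, p_elastic=0.5,
            max_shift=20, alpha=300.0, sigma=12.0):
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < p_translate:
        shift = rng.integers(-max_shift, max_shift + 1, size=2)
        image = ndimage.shift(image, shift, order=1)
        mask = ndimage.shift(mask, shift, order=0)      # nearest-neighbour for labels
    if rng.random() < p_mirror:
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < p_elastic:
        # Smooth random displacement field (Simard-style elastic deformation).
        dx = ndimage.gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
        dy = ndimage.gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
        yy, xx = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                             indexing="ij")
        coords = [yy + dy, xx + dx]
        image = ndimage.map_coordinates(image, coords, order=1)
        mask = ndimage.map_coordinates(mask, coords, order=0)
    return image, mask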

The performance of the FCN network for myocardial segmentation was evaluated using the independent testing images. Dice similarity coefficient (DSC) was used to measure the overlap between the automatically and manually segmented myocardium in each testing image. The DSC ranges from 0 to 1 with higher values indicating higher similarity in shape between the automatically and manually segmented regions [39].
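
For reference, the DSC between two binary masks is computed as twice the intersection divided by the sum of the two areas; a short sketch follows (the function name is illustrative).

# Dice similarity coefficient between an automatic and a manual binary mask.
import numpy as np

def dice(auto_mask, manual_mask):
    auto_mask, manual_mask = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    total = auto_mask.sum() + manual_mask.sum()
    return 2.0 * intersection / total if total > 0 else 1.0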

Network training and testing were performed on an Intel Core i7-6700K CPU workstation with an NVIDIA GeForce GTX Titan 12 GB GPU. The network was implemented in Python (Python Software Foundation, Wilmington, Delaware, USA) using the TensorFlow machine learning framework (Google Inc., California, USA).

Data analysis

Calculated T1 values were expressed as mean ± SD per patient and per slice. The performance of the automatic T1 analysis was evaluated by analyzing the agreement between the automated and manual T1 calculations. The Pearson correlation coefficient, r, was used to examine the linear relationship (with zero intercept) between the automated and manual T1 calculations. The Bland-Altman analysis was also used to assess the biases and limits of agreement between automated and manual T1 calculations.
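
A minimal sketch of these agreement measures is given below: the Pearson correlation, a zero-intercept regression slope, and the Bland-Altman bias with 95% limits of agreement. The function name and the use of 1.96 SD for the limits are assumptions of this sketch.

# Agreement between automatic and manual T1 values (per patient or per slice).
import numpy as np
from scipy import stats

def agreement(auto_t1, manual_t1):
    auto_t1 = np.asarray(auto_t1, dtype=float)
    manual_t1 = np.asarray(manual_t1, dtype=float)
    r, p = stats.pearsonr(auto_t1, manual_t1)
    # Least-squares slope of a line through the origin (zero-intercept fit).
    slope = np.dot(auto_t1, manual_t1) / np.dot(manual_t1, manual_t1)
    diff = auto_t1 - manual_t1
    bias, sd = diff.mean(), diff.std(ddof=1)
    return {"r": r, "p": p, "slope": slope,
            "bias": bias, "loa": (bias - 1.96 * sd, bias + 1.96 * sd)}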

Intraclass correlation coefficient (ICC) was used to assess inter-observer agreement. Inter-observer agreement was assessed between each pair of observers: automatic vs reader 1, automatic vs reader 2, and reader 2 vs reader 1. Intra-observer variability of the presented mapping and analysis method is deterministically zero (due to full automation) and thus was not studied using a dedicated experiment. All analyses were done on a per-patient and per-slice basis. All statistical analyses were performed using the statistical toolbox of Matlab (Mathworks Inc., Natick, Massachusetts, USA).

Results

The FCN successfully segmented the myocardium in 7382 testing images (91.3% of 8085 images) with an overall DSC of 0.85 ± 0.07 (Fig. 3) after applying refinements. The computation time for segmenting a single T1 weighted image was less than 0.3 s. Automatic refinement of the myocardium segmentation was applied to 241 images (3% of 8085 images) (Additional file 2: Figure S2 and Additional file 3: Figure S3). Table 1 summarizes the number of slices with correct, failed, or refined segmentation. The FCN segmentation of the myocardium showed good overlap with the manually segmented myocardium, with a mean DSC greater than 0.82 in all slices at all inversion times (Figs. 3 and 4). In the mapping validation images (2275 T1 maps for 455 patients), automatic reconstruction of T1 maps was successful in 1982 slices (87.1% of 2275 slices) in 449 patients (98.7% of 455 patients). The success rate of map reconstruction in the non-apical slices (1682 slices; 92.4% of 1820 slices) was higher than that in the apical slices (300 slices; 65.9% of 455 slices). The automatically and manually calculated T1 values within the myocardium ROI (Fig. 5), averaged over all patients, were 1091 ± 59 ms and 1089 ± 59 ms, respectively. The automatically reconstructed T1 maps showed a strong correlation with the manually reconstructed T1 values in per-patient (r = 0.82; slope = 1.01; p < 0.0001; 449 patients) (Fig. 6a) and per-slice (r = 0.74; slope = 1.01; p < 0.0001; 2275 slices) (Fig. 6b) analyses. The correlation between the automatic and manual mean T1 values was comparable across the five slice locations (r/slope = 0.74/1.03, 0.76/1.02, 0.73/1.0, 0.76/1.01, and 0.75/1.01 for the 5 slices from apex to base, respectively; p < 0.0001 for all slice locations). The automated and manual T1 calculations were in good agreement, with 95% of the measurements located within the limits-of-agreement in the per-patient (9.6 ± 86.6 ms) and per-slice (12.9 ± 110.1 ms) analyses (Fig. 7). The automated T1 calculations showed good agreement with the manual calculations in the per-patient (ICC = 0.86 and 0.74 for automatic vs. reader 1 and reader 2, respectively) and per-slice (ICC = 0.56 and 0.49 for automatic vs. reader 1 and reader 2, respectively) analyses (Table 2). The ICC between the two expert readers was 0.72 and 0.58 in the per-patient and per-slice analyses, respectively. The average computation time for generating a T1 map of one slice was less than 15 s (segmentation and refinement = 5 s, polar transformation = 7.5 s, curve fitting = 1.5 s).

Fig. 3

The Dice similarity coefficient of the automatic segmentation averaged over 147 patients (7382 images) categorized by inversion time (a) and slice location (b). Error bars represent standard deviation

Table 1 The number of images with correct, failed or refined segmentation reported for 147 patients (total of 7382 images) and categorized by the slice location and inversion time (TI)
Fig. 4

Automatic (a) and the corresponding manual (b) segmentation of T1 weighted images for five slices (columns) and four different inversion times (rows) for one patient

Fig. 5

Myocardial T1 mapping at five short-axis slices (apex to base, from left to right) of the left ventricle of one patient. Automatically reconstructed map before (a) and after (b) pruning, overlaid on the T1 weighted image with the shortest inversion time; (c) manually reconstructed T1 map. The contours in (c) represent the myocardium region of interest manually selected by the reader

Fig. 6

Scatter plots of the automatic versus manual myocardium T1 values averaged over the patient volume (a) and each imaging slice (b). Solid lines represent the unity slope line

Fig. 7

Bland-Altman plots of the automatic versus manual myocardium T1 values averaged over the patient volume (a) and each imaging slice (b). Solid and dashed lines represent the bias and ± 2SD limits, respectively

Table 2 Inter-observer analysis of the automated and manually calculated myocardium T1 maps in per-patient and per-slice analyses

Discussion

In this work, we introduced an automated method for combined segmentation, alignment, and T1 estimation of the myocardium. Myocardium segmentation was achieved using a deep learning approach where a FCN was used to segment the myocardium in all slice locations and all inversion times. Typically, multiple shape and appearance models are needed to capture the wide variation of myocardium shapes and intensity patterns in different T1 weighted images [14]. However, despite highly variable shapes and intensity patterns in the images, the employed FCN showed good performance as assessed by DSC. The developed FCN-based analysis platform showed the potential to mitigate the requirement of tedious manual T1 mapping analysis, with good agreement between automatic and manual calculations. Furthermore, inter-observer variability of the automatic vs manual calculations was comparable to that between the two expert readers.

In this work, we employed the same deep neural network architecture (U-Net) that we previously trained and used to segment the myocardium and scar in short-axis late gadolinium enhancement (LGE) images [26]. We also used the estimated weights of the previously trained network to initialize the weights of our network. Deep neural network architectures require a large training set to allow generalization of the trained model by exposing it to all potential variations of the images [36]. In our study, the limited size of the training image set was mitigated by employing two standard techniques for improving network training, namely data augmentation and transfer learning [35, 36]. Data augmentation has previously been shown to be effective in reducing over-fitting and thus boosting segmentation performance on images from outside the training set [40, 41]. Also, the transfer learning approach was shown to outperform the training-from-scratch approach (i.e. initialization of network parameters using random values) [42]. Networks trained using transfer learning were also shown to be more robust to the size of the training set than networks trained from scratch [42]. Several techniques based on deep learning have been proposed and evaluated for the segmentation of cine images [18, 19, 22,23,24, 43,44,45]. Network design varied among the different methods and included using only one fully convolutional neural network [24, 44], multi-stage convolutional neural networks [45], or cascaded convolutional and auto-encoder networks [25]. Convolutional neural networks have also been combined with classical image processing techniques such as deformable models [19] and level sets [23] to refine the segmentation results. These methods achieved high segmentation accuracy of the LV cavity (DSC = 0.9–0.94), but determining the parameters of the refinement algorithm can be a limitation. Oktay et al. showed that applying anatomical shape constraints to convolutional neural networks can improve segmentation accuracy without the need for a refinement step [43]. A non-convolutional neural network model was also proposed for cine image segmentation, where the segmentation problem was formulated as parameter regression rather than conventional pixel classification. In this formulation, the network was trained to estimate the radial distance between the myocardium boundary points and the myocardium centroid [18]. Any of these methods could be readily incorporated into our analysis framework. However, further investigation is warranted to adapt these methods to segment T1 weighted images and to evaluate the resulting T1 map analysis.

Unlike current T1 mapping analysis methods that require explicit image registration of the T1 weighted images prior to T1 map reconstruction [12,13,14], our method inherently aligns the myocardium regions through polar transformation. For example, maintaining the origin of the polar coordinates at the center-of-mass of the segmented myocardium results in inherent correction of the global translational heart motion [46]. Also, resampling of the segmented myocardium via a uniform polar grid results in non-rigid alignment of the myocardium across all T1 weighted images. Utilization of geometric transformations leads to image alignment that is independent of the image intensity and contrast, and thus overcomes a limitation of conventional intensity-based image registration methods [12,13,14]. The proposed workflow did not include an explicit motion correction and instead relied on polar transformation and alignment to compensate for motion. An alternative approach is to apply motion correction to the T1 weighted images, reconstruct the T1 maps, and then use deep learning based segmentation of the myocardium from the T1 maps. While this approach can be simpler, it might be limited by cascading the errors of the motion correction and segmentation steps. Also, training the network to segment the myocardium in the presence of residual motion artifacts can be challenging. A dedicated study is needed to investigate the performance of this workflow.

Polar grids have previously been used to register myocardial strain and displacement maps in ultrasound imaging [47, 48]. The myocardium contours were first extracted at each cardiac time frame by means of semi-automatic tracking, and a polar grid was then used to accumulate the displacement values of the deforming myocardium. One limitation of our method is that inaccurate myocardium segmentation can lead to erroneous T1 maps, especially at the boundaries. However, T1 mapping errors at the myocardium boundaries are common to T1 mapping techniques due to partial volume effects and/or residual uncompensated motion, which necessitate manual exclusion of erroneous regions. In our method, these errors can be reduced by automatic pruning of the segmented myocardium.

The automatic refinement used in this work is a simple form of affine binary image registration, where the best segmentation mask is aligned with any given mask at the same slice location that has improper myocardium shape. An additional advantage of this approach is the efficient computations that result from confining image alignment and curve fitting to the myocardium regions-of-interest, rather than the entire field-of-view [13, 14].

The automated T1 calculations showed strong agreement with the manual calculations in both per-patient and per-slice comparisons. Residual biases in the automated T1 calculations might not necessarily correspond to T1 estimation errors and may be due to inherent differences between the two methods of reconstructing and analyzing the T1 maps. In 12.9% of the slices, the myocardium was detected in fewer than eight T1 weighted images, which we set as the minimum number of T1 weighted images per slice required for T1 map reconstruction. These slices can be processed using manual or semi-automatic analysis. Alternatively, reconstruction could be allowed using fewer T1 weighted images to increase the success rate, but this might impact the accuracy of the T1 calculations. The failed reconstruction cases were mostly apical slices, where the success rate (66%) was lower compared with non-apical slices (92%). This is directly related to the higher segmentation failure rate at the apical slices, which is commonly encountered in myocardial segmentation techniques due to blurred myocardium boundaries caused by motion artifacts or partial voluming [49, 50].

In our study, we used the STONE sequence for T1 mapping, which results in 11 T1 weighted images spanning a relatively high dynamic range (due to the use of inversion recovery pulses). Training the FCN with images of diverse contrast, combined with data augmentation, allowed a higher level of abstraction in learning the important image features. The roughly similar image contrast and dynamic range between STONE and other inversion recovery based techniques warrants validation of extending our trained FCN-based method to automate T1 map analysis for other mapping sequences such as modified Look-Locker inversion recovery (MOLLI) [1] and shortened modified Look-Locker inversion recovery (ShMOLLI) [2]. Extension of our trained FCN to segment saturation-recovery based sequences such as saturation recovery single-shot acquisition (SASHA) [3], or combined inversion-recovery and saturation-recovery sequences such as SAPPHIRE [4], is yet to be studied to investigate its reliability for analyzing T1 weighted images with inherently elevated noise levels. Additional post-processing of the FCN output was needed to correct improper automatic segmentation of the LV structure. Alternative training strategies, including training a separate network for each T1 weighted image, may be useful to improve the FCN performance and avoid heuristic post-processing. One limitation of this study is the lack of a ground truth for the myocardial T1 maps. Also, we did not investigate the capacity of the proposed analysis method to automate post-contrast T1 mapping and extracellular volume (ECV) mapping.

Conclusion

The proposed FCN-based image processing platform allows fast and automatic analysis of myocardial native T1 mapping images, mitigating the burden and observer-related variability of manual analysis.

Abbreviations

CI:

Confidence interval

CMR:

Cardiovascular magnetic resonance

DSC:

Dice similarity coefficient

ECV:

Extracellular volume

FCN:

Fully convolutional neural network

GRE:

Gradient recalled echo

ICC:

Intraclass correlation coefficient

LV:

Left ventricle

MOLLI:

Modified Look-Locker inversion recovery

ROI:

Region of Interest

SAPPHIRE:

Saturation pulse prepared heart rate independent inversion-recovery

SASHA:

Saturation recovery single-shot acquisition

SD:

Standard deviation

ShMOLLI:

Shortened modified Look-Locker inversion recovery

SNR:

Signal to noise ratio

STONE:

Slice-interleaved T1

TE:

Echo time

TI:

Inversion time

TR:

Repetition time

References

  1. Messroghli DR, Radjenovic A, Kozerke S, Higgins DM, Sivananthan MU, Ridgway JP. Modified look-locker inversion recovery (MOLLI) for high-resolution T1 mapping of the heart. Magn Reson Med. 2004;52(1):141–6.

  2. Piechnik SK, Ferreira VM, Dall’Armellina E, Cochlin LE, Greiser A, Neubauer S, et al. Shortened modified look-locker inversion recovery (ShMOLLI) for clinical myocardial T1-mapping at 1.5 and 3 T within a 9 heartbeat breathhold. J Cardiovasc Magn Reson. 2010;12(1):69.

  3. Chow K, Flewitt JA, Green JD, Pagano JJ, Friedrich MG, Thompson RB. Saturation recovery single-shot acquisition (SASHA) for myocardial T1 mapping. Magn Reson Med. 2014;71(6):2082–95.

  4. Roujol S, Weingärtner S, Foppa M, Chow K, Kawaji K, Ngo LH, et al. Accuracy, precision, and reproducibility of four T1 mapping sequences: a head-to-head comparison of MOLLI, ShMOLLI, SASHA, and SAPPHIRE. Radiology. 2014;272(3):683–9.

  5. Weingärtner S, Roujol S, Akçakaya M, Basha TA, Nezafat R. Free-breathing multislice native myocardial T1 mapping using the slice-interleaved T1 (STONE) sequence. Magn Reson Med. 2015;74(1):115–24.

  6. Messroghli DR, Moon JC, Ferreira VM, Grosse-Wortmann L, He T, Kellman P, et al. Clinical recommendations for cardiovascular magnetic resonance mapping of T1, T2, T2* and extracellular volume: a consensus statement by the Society for Cardiovascular Magnetic Resonance (SCMR) endorsed by the European Association for Cardiovascular Imaging (EACVI). J Cardiovasc Magn Reson. 2017;19(1):75.

  7. Sibley CT, Noureldin RA, Gai N, Nacif MS, Liu S, Turkbey EB, et al. T1 mapping in cardiomyopathy at cardiac MR: comparison with endomyocardial biopsy. Radiology. 2012;265(3):724–32.

  8. Puntmann VO, Carr-White G, Jabbour A, Yu C-Y, Gebker R, Kelle S, et al. T1-mapping and outcome in nonischemic cardiomyopathy. JACC Cardiovasc Imaging. 2016;9(1):40–50.

  9. Akçakaya M, Weingärtner S, Roujol S, Nezafat R. On the selection of sampling points for myocardial T1 mapping. Magn Reson Med. 2015;73(5):1741–53.

  10. Ferreira VM, Wijesurendra RS, Liu A, Greiser A, Casadei B, Robson MD, et al. Systolic ShMOLLI myocardial T1-mapping for improved robustness to partial-volume effects and applications in tachyarrhythmias. J Cardiovasc Magn Reson. 2015;17(1):77.

  11. Jyun-Ming T, Teng-Yi H, Yu-Shen T, Yi-Ru L. Free-breathing MOLLI: application to myocardial T1 mapping. Med Phys. 2012;39(12):7291–302.

  12. Xue H, Shah S, Greiser A, Guetter C, Littmann A, Jolly M-P, et al. Motion correction for myocardial T1 mapping using image registration with synthetic image estimation. Magn Reson Med. 2012;67(6):1644–55.

  13. Roujol S, Foppa M, Weingärtner S, Manning WJ, Nezafat R. Adaptive registration of varying contrast-weighted images for improved tissue characterization (ARCTIC): application to T1 mapping. Magn Reson Med. 2015;73(4):1469–82.

  14. El-Rewaidy H, Nezafat M, Jang J, Nakamori S, Fahmy AS, Nezafat R. Nonrigid active shape model-based registration framework for motion correction of cardiac T1 mapping. Magn Reson Med. 2018;80(2):780–91.

  15. Bellm S, Basha TA, Shah RV, Murthy VL, Liew C, Tang M, et al. Reproducibility of myocardial T1 and T2 relaxation time measurement using slice-interleaved T1 and T2 mapping sequences. J Magn Reson Imaging. 2016;44(5):1159–67.

  16. Moon JC, Messroghli DR, Kellman P, Piechnik SK, Robson MD, Ugander M, et al. Myocardial T1 mapping and extracellular volume quantification: a Society for Cardiovascular Magnetic Resonance (SCMR) and CMR working Group of the European Society of cardiology consensus statement. J Cardiovasc Magn Reson. 2013;15(1):92.

  17. Liu F, Zhou Z, Jang H, Samsonov A, Zhao G, Kijowski R. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging. Magn Reson Med. 2017;79(4):2379–91.

  18. Tan LK, Liew YM, Lim E, McLaughlin RA. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences. Med Image Anal. 2017;39:78–86.

  19. Avendi MR, Kheradvar A, Jafarkhani H. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI. Med Image Anal. 2016;30:108–19.

  20. Schnell S, Entezari P, Mahadewia RJ, Malaisrie SC, McCarthy PM, Collins JD, et al. Improved semi-automated 4D-flow MRI analysis in the aorta in patients with congenital aortic anomalies vs tricuspid aortic valves. J Comput Assist Tomogr. 2016;40(1):102–8.

  21. Goel A, McColl R, King KS, Whittemore A, Peshock RMA. Fully automated tool to identify the aorta and compute flow using phase-contrast MRI: validation and application in a large population based study. J Magn Reson Imaging. 2014;40(1):221–8.

  22. Yang X, Zeng Z, Yi S. Deep convolutional neural networks for automatic segmentation of left ventricle cavity from cardiac magnetic resonance images. IET Comput Vis. 2017;11(8):643–9.

  23. Ngo TA, Lu Z, Carneiro G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. Med Image Anal. 2017;35:159–71.

  24. Tran PV. A fully convolutional neural network for cardiac segmentation in short-Axis MRI. ArXiv: 1604.00494. 2016;

  25. Avendi MR, Kheradvar A, Jafarkhani H. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach. Magn Reson Med. 2017;78(6):2439–48.

  26. Fahmy AS, Rausch J, Neisius U, Chan RH, Maron M, Appelbaum E, et al. Automated cardiac MR scar quantification in hypertrophic cardiomyopathy using deep convolutional neural networks. JACC Cardiovasc Imaging. 2018;2677. https://doi.org/10.1016/j.jcmg.2018.04.030.

  27. Kayalibay B, Jensen G, van der Smagt P. CNN-based segmentation of medical imaging data. ArXiv: 1701.03056. 2017

  28. Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–48.

  29. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical image computing and computer-assisted intervention -- MICCAI 2015: 18th international conference, Munich, Germany, October 5–9, 2015, vol. 3. Cham: Springer International Publishing; 2015. p. 234–41.

  30. Ioffe S, Szegedy C. Batch Normalization: Accelerating deep network training by reducing internal covariate shift. ArXiv:1502.03167. 2015;

  31. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–58.

  32. Kingma DP, Ba J. Adam: a method for stochastic optimization. In: Proceedings of international conference on learning representations. 2015.

  33. Krogh A, Hertz JA. Simple weight decay can improve generalization. In: Advances in neural information processing systems (NIPS)-Volume 4. USA: Morgan-Kaufmann; 1992. p. 950–7.

  34. Maragos P, Schafer R. Morphological skeleton representation and coding of binary images. IEEE Trans Acoust. 1986;34(5):1228–44.

  35. Bengio Y. Deep Learning of Representations for Unsupervised and Transfer Learning. In: Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning Workshop, vol. 27; 2011. p. 17–37.

  36. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35:1285–98.

  37. Hussain Z, Gimenez F, Yi D, Rubin D. Differential data augmentation techniques for medical imaging classification tasks. AMIA Annu Symp Proc. 2017;2017:979–84.

  38. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: proceedings of the 25th international conference on neural information processing systems, vol. 1. USA: Curran Associates Inc; 2012. p. 1097–105.

  39. Zou KH, Warfield SK, Bharatha A, Tempany CMC, Kaus MR, Haker SJ, et al. Statistical validation of image segmentation quality based on a spatial overlap index1. Acad Radiol. 2004;11(2):178–89.

  40. Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging. 2016;35(5):1240–51.

  41. Roth HR, Lu L, Liu J, Yao J, Seff A, Cherry K, et al. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans Med Imaging. 2016;35(5):1170–81.

  42. Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35:1299–312.

  43. Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, et al. Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation. IEEE Trans Med Imaging. 2018;37(2):384–95.

  44. Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J Cardiovasc Magn Reson. 2018;20:65.

  45. Vigneault DM, Xie W, Ho CY, Bluemke DA, Noble JA. Ω-net (omega-net): fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks. Med Image Anal. 2018;48:95–106.

  46. Gupta SN, Solaiyappan M, Beache GM, Arai AE, Foo TKF. Fast method for correcting image misregistration due to organ motion in time-series MRI data. Magn Reson Med. 2003;49(3):506–14.

  47. Ma C, Varghese T. Lagrangian displacement tracking using a polar grid between endocardial and epicardial contours for cardiac strain imaging. Med Phys. 2012;39(4):1779–92.

  48. Ma C, Wang X, Varghese T. Segmental analysis of cardiac short-Axis views using Lagrangian radial and circumferential strain. Ultrason Imaging. 2016;38(6):363–83.

  49. Lee H-Y, Codella N, Cham M, Prince M, Weinsaft J, Wang Y. Left ventricle segmentation using Graph searching on Intensity and Gradient and A priori knowledge (lvGIGA) for short axis cardiac MRI. J Magn Reson Imaging. 2008;28(6):1393–401.

  50. Childs H, Ma L, Ma M, Clarke J, Cocker M, Green J, et al. Comparison of long and short axis quantification of left ventricular volume parameters by cardiovascular magnetic resonance, with ex-vivo validation. J Cardiovasc Magn Reson. 2011;13(1):40.

Acknowledgments

We thank Jennifer Rodriguez for editing the manuscript.

Funding

Research reported in this publication was supported in part by the National Institutes of Health under award numbers 5R01HL129185 and 1R01HL129157-01A1, and by American Heart Association award 15EIA22710040.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

Author contributions are as follows: conception and study design (ASF, RN), development of algorithms and analysis software (ASF, HAE), data collection and protocol design (SN, RN), image reading (SN, MN), data analysis (ASF, SN, MN), interpretation of data and results (ASF, RN), drafting (ASF, RN), revising (RN). All authors read and approved the final manuscript.

Corresponding author

Correspondence to Reza Nezafat.

Ethics declarations

Authors’ information

Ahmed S. Fahmy, PhD; Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA. Hossam El-Rewaidy, MS; Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA. Maryam Nezafat, PhD; Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA. Shiro Nakamori, MD; Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA. Reza Nezafat, PhD (corresponding author); Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA.

Ethics approval and consent to participate

This study was approved by the Institutional Review Board at Beth Israel Deaconess Medical Center, Harvard University. All subjects provided informed consent for research participation.

Consent for publication

Not applicable.

Competing interests

RN holds a patent on a system for tissue characterization using multi-slice magnetic resonance imaging (US Patent 2015/0323630). The authors declare that they have no other competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Dr. Francesca Delling served as a JCMR Guest Editor for this manuscript.

Additional files

Additional file 1:

Figure S1. Transformation of the segmented myocardium into a uniform grid of size 20 × 360 in the polar coordinates. The origin of the polar coordinates is located at the center of mass of the segmented myocardium. (DOCX 115 kb)

Additional file 2:

Figure S2. Example results of the automatic segmentation before and after refinement. (DOCX 470 kb)

Additional file 3:

Figure S3. Effect of area filter on the output of the neural network. (a) input T1 weighted image; (b,c) network output before and after area filtering, respectively; (d) manual segmentation. (DOCX 225 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Fahmy, A.S., El-Rewaidy, H., Nezafat, M. et al. Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks. J Cardiovasc Magn Reson 21, 7 (2019). https://doi.org/10.1186/s12968-018-0516-1

Keywords