Fig. 1

From: An inline deep learning based free-breathing ECG-free cine for exercise cardiovascular magnetic resonance

Deep learning model and generation of training data. A The deep learning model is a three-dimensional U-Net trained to filter out streaking artifacts from complex-valued real-time radial cine \(n\times n\) images with \(n_{t}\) time frames. The inputs and outputs of the U-Net are the real and imaginary components concatenated along the spatial dimension, and the training loss was the mean square error (MSE) between the network outputs and the artifact-free reference images. During inference, the generated real and imaginary parts with suppressed artifacts are recombined into complex images. B Synthetic real-time training pairs were generated from electrocardiogram (ECG)-gated segmented Cartesian k-space data acquired at rest in 503 patients. First, the k-space data were reconstructed using generalized autocalibrating partially parallel acquisitions (GRAPPA). Spatiotemporal interpolation followed by coil combination was then applied to the reconstructed images to simulate reference (i.e., ground-truth) real-time images. In addition, prior to coil combination, the interpolated images were used to simulate undersampled real-time radial cine images by applying a forward and inverse non-uniform fast Fourier transform (NUFFT) with 12 radial lines. The resulting coil-combined images with streaking artifacts served as the inputs during training.
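
The training setup described in this legend can be sketched in a few lines of PyTorch. The sketch below is illustrative, not the authors' implementation: TinyUNet3D is a hypothetical single-level stand-in for the 3D U-Net, the tensor shapes are assumed, and a Cartesian radial k-space mask is used as a simplified stand-in for the true forward/inverse NUFFT. It shows the real/imaginary concatenation along a spatial axis (panel A), the MSE loss against artifact-free references, and a 12-spoke streak simulation (panel B).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def complex_to_realimag(x: torch.Tensor) -> torch.Tensor:
    # Concatenate real and imaginary parts along a spatial axis (panel A):
    # (batch, n_t, n, n) complex -> (batch, n_t, 2n, n) real.
    return torch.cat([x.real, x.imag], dim=-2)


def realimag_to_complex(y: torch.Tensor) -> torch.Tensor:
    # Undo the concatenation after inference and recombine into complex images.
    re, im = torch.chunk(y, 2, dim=-2)
    return torch.complex(re, im)


class TinyUNet3D(nn.Module):
    # Single-level encoder/decoder stand-in for the paper's 3D U-Net
    # (hypothetical; the caption does not specify the architecture).
    def __init__(self, ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.down = nn.Conv3d(ch, ch, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(ch, ch, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv3d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.dec(torch.cat([e, d], dim=1))


def radial_mask(n: int, n_spokes: int = 12) -> torch.Tensor:
    # Cartesian approximation of a 12-spoke radial trajectory: keep k-space
    # samples within half a sample of any spoke line through the center.
    ky, kx = torch.meshgrid(
        torch.arange(n) - n / 2, torch.arange(n) - n / 2, indexing="ij"
    )
    thetas = torch.arange(n_spokes) * torch.pi / n_spokes
    # Perpendicular distance from each sample to each spoke line.
    d = (kx[None] * torch.sin(thetas)[:, None, None]
         - ky[None] * torch.cos(thetas)[:, None, None]).abs()
    return d.min(dim=0).values < 0.5


def simulate_streaked(img: torch.Tensor, n_spokes: int = 12) -> torch.Tensor:
    # Stand-in for the forward/inverse NUFFT step of panel B: zero out
    # k-space away from the spokes, producing streak-like aliasing.
    n = img.shape[-1]
    k = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    k = k * radial_mask(n, n_spokes).to(k.dtype)
    return torch.fft.ifft2(torch.fft.ifftshift(k, dim=(-2, -1)))


# Toy training step: random complex cine as a placeholder ground truth.
ref = torch.randn(1, 8, 64, 64, dtype=torch.complex64)  # (batch, n_t, n, n)
inp = simulate_streaked(ref)                            # streaked input images

model = TinyUNet3D()
x = complex_to_realimag(inp).unsqueeze(1)  # (1, 1, 8, 128, 64), real-valued
y = complex_to_realimag(ref).unsqueeze(1)
out = model(x)
loss = F.mse_loss(out, y)  # MSE against the artifact-free reference
loss.backward()

denoised = realimag_to_complex(out.squeeze(1).detach())  # back to complex
```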
