Fig. 2 | Journal of Cardiovascular Magnetic Resonance

From: An inline deep learning based free-breathing ECG-free cine for exercise cardiovascular magnetic resonance

Inline implementation of real-time cine with deep learning-based radial acceleration. Multi-coil raw radial k-space data acquired from the scanner is processed in the Image Reconstruction Environment (ICE) on the vendor reconstruction computer. Using the International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format, the collected data is transferred to the Framework for Image Reconstruction (FIRE) server using a FireEmitter functor. The FIRE server also runs on the vendor reconstruction computer. The data is then transferred from the FIRE server to an external server via a Secure Shell (SSH) tunnel. On the external server, a Docker container with all Python dependencies, such as PyTorch, processes the raw k-space data on a single Graphics Processing Unit (GPU) with 32 GB of memory. The deep-learning radial acceleration with parallel reconstruction (DRAPR) technique is implemented in this container. First, a non-uniform fast Fourier transform (NUFFT) is used to grid and reconstruct the undersampled multi-coil radial k-space data. GPU parallelization is achieved in PyTorch by treating frames and coils as batch and channel dimensions, respectively. This approach enables application of the NUFFT at 10 ms per frame. Coil sensitivity estimation and combination are subsequently performed in PyTorch at negligible computational cost. The coil-combined images are sent to the U-Net for de-aliasing, which requires 6.6 ms per frame. The total processing time of 16.6 ms per frame is about half of the 37.7 ms temporal resolution of the collected frames. Images are then returned to the FIRE server via the same SSH tunnel, and to ICE using a FireInjector functor. Finally, the reconstructed de-aliased images are converted to DICOM format and returned to the scanner console for immediate display.
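The batched gridding step lends itself to a short illustration. The sketch below is not the authors' code: it assumes the open-source torchkbnufft package (not named in the figure) and illustrative matrix, spoke, and coil counts, and it shows how undersampled radial data for all frames can be gridded in a single batched adjoint NUFFT call, with frames as the batch dimension and coils as the channel dimension.

```python
import math
import torch
import torchkbnufft as tkbn

# Illustrative sizes only (not taken from the paper).
n_frames, n_coils = 16, 30        # frames -> batch dim, coils -> channel dim
n_spokes, n_readout = 8, 384      # spokes per frame x readout points per spoke
im_size = (192, 192)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Golden-angle radial trajectory in radians, shape (2, n_spokes * n_readout).
ga = math.pi * (3.0 - math.sqrt(5.0))            # golden-angle increment (illustrative)
angles = torch.arange(n_spokes) * ga
kr = torch.linspace(-math.pi, math.pi, n_readout)
kx = torch.outer(torch.cos(angles), kr).reshape(-1)
ky = torch.outer(torch.sin(angles), kr).reshape(-1)
ktraj = torch.stack([kx, ky]).to(device)

# Undersampled multi-coil k-space, stacked so frames form the batch dimension.
kdata = torch.randn(n_frames, n_coils, n_spokes * n_readout,
                    dtype=torch.complex64, device=device)

# Crude ramp density compensation (|k| weighting) for radial sampling.
dcf = ktraj.norm(dim=0)

# One batched adjoint NUFFT call grids all frames and coils at once.
adjnufft = tkbn.KbNufftAdjoint(im_size=im_size).to(device)
coil_images = adjnufft(kdata * dcf, ktraj)       # (n_frames, n_coils, *im_size)
```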

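The remaining in-container steps can be sketched in the same hedged spirit: a rough sensitivity-weighted coil combination (standing in for the paper's calibration) followed by a single batched pass through a de-aliasing network, where a plain convolution acts as a placeholder for the trained U-Net.

```python
import torch

# In practice these would be the gridded coil images from the NUFFT sketch
# above; random data with the same layout keeps this snippet self-contained.
n_frames, n_coils, H, W = 16, 30, 192, 192
coil_images = torch.randn(n_frames, n_coils, H, W, dtype=torch.complex64)

# Rough sensitivity-weighted coil combination: maps estimated from the coil
# images themselves (a stand-in for a proper calibration step).
rss = coil_images.abs().square().sum(dim=1, keepdim=True).sqrt()   # (F, 1, H, W)
sens = coil_images / rss.clamp_min(1e-6)                           # crude sensitivity maps
combined = (sens.conj() * coil_images).sum(dim=1)                  # (F, H, W) complex

# Placeholder for the trained de-aliasing U-Net: 2-channel (real/imaginary)
# input, 2-channel output, applied to all frames in one batched call.
dealiaser = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)

with torch.no_grad():
    x = torch.stack([combined.real, combined.imag], dim=1)         # (F, 2, H, W)
    y = dealiaser(x)                                               # batched over frames
    cine = torch.complex(y[:, 0], y[:, 1])                         # (F, H, W) placeholder output
```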