  • Poster presentation
  • Open Access

Real-time low-latency self-calibrating GROG for interventional MRI

Journal of Cardiovascular Magnetic Resonance 2010, 12 (Suppl 1): P61

https://doi.org/10.1186/1532-429X-12-S1-P61


Keywords

  • Weight Calculation
  • Reconstruction Performance
  • Coil Sensitivity
  • Density Compensation
  • Radial Acquisition

Introduction

Self-Calibrating GROG (SC-GROG) [1] is a GRAPPA-operator-based [2] gridding algorithm. Compared with existing non-Cartesian imaging methods such as convolution gridding [3], SC-GROG has several advantages. First, the gridding kernel is derived from the non-Cartesian data itself, so there are no parameters to tune (e.g., gridding kernel size or kernel choice). Second, density compensation is straightforward (simple averaging). Third, it can grid both undersampled and fully sampled datasets. Auto-calibrated, parameter-free imaging with SC-GROG is therefore well suited to MRI-guided interventions.
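
As a brief sketch of the underlying relation (our notation; see [1] for the full derivation), each non-Cartesian sample is shifted onto its nearest Cartesian grid point by fractional powers of unit-shift GRAPPA operators calibrated from the acquired data itself:

    S(k + δ) ≈ Gx^δx · Gy^δy · S(k),   with δx, δy ∈ [-0.5, 0.5],

where S(k) is the vector of coil samples at k-space location k, and Gx, Gy are the coil-by-coil unit-shift weight matrices. The Cartesian value at each grid point is then the average of all samples shifted onto it, which is the "simple averaging" density compensation referred to above.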

Purpose

We present the first real-time low-latency implementation of the SC-GROG algorithm, RT-GROG, for multi-slice radial acquisitions to guide cardiovascular interventions.

Methods

SC-GROG's weights-calculation and image-reconstruction steps are decoupled to run asynchronously and are parallelized in C++ using the OpenMP and Pthreads libraries. The per-sample 2D weights that are normally computed during gridding are instead pre-calculated and stored in a lookup table (LUT) to increase reconstruction performance.
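
As an illustration of how such a LUT-driven gridding loop can be organized, below is a minimal C++/OpenMP sketch. It is not the RT-GROG source; the names (SampleEntry, gridFrameLUT), data layout, and coil-wise parallelization are our own illustrative assumptions.

    // Minimal sketch (not the RT-GROG source): applying pre-computed per-sample
    // GROG weight matrices from a lookup table inside an OpenMP-parallel
    // gridding loop, followed by density compensation via simple averaging.
    // SampleEntry, gridFrameLUT, and the data layout are illustrative assumptions.
    #include <algorithm>
    #include <complex>
    #include <vector>

    using cplx = std::complex<float>;

    struct SampleEntry {
        int gridIdx;              // flattened index of the nearest Cartesian point
        std::vector<cplx> weight; // Ncoil x Ncoil shift matrix, row-major (the cached "2D weight")
    };

    // Grid one frame: kspace is [Nsample][Ncoil]; grid is [Ncoil][Ngrid].
    void gridFrameLUT(const std::vector<std::vector<cplx>>& kspace,
                      const std::vector<SampleEntry>& lut,
                      int nCoil,
                      std::vector<std::vector<cplx>>& grid,
                      std::vector<int>& hits)
    {
        const int nSample = static_cast<int>(kspace.size());

        // Parallelize over coils so each thread writes only to its own grid
        // plane and no synchronization is needed during accumulation.
        #pragma omp parallel for
        for (int c = 0; c < nCoil; ++c) {
            for (int s = 0; s < nSample; ++s) {
                const SampleEntry& e = lut[s];
                // Row c of the cached weight matrix times the multi-coil sample:
                // shifts this sample onto its nearest Cartesian grid point.
                cplx shifted(0.0f, 0.0f);
                for (int c2 = 0; c2 < nCoil; ++c2)
                    shifted += e.weight[c * nCoil + c2] * kspace[s][c2];
                grid[c][e.gridIdx] += shifted;
            }
        }

        // Density compensation by simple averaging: count how many samples
        // landed on each grid point, then divide the accumulated values.
        std::fill(hits.begin(), hits.end(), 0);
        for (int s = 0; s < nSample; ++s)
            ++hits[lut[s].gridIdx];
        for (int c = 0; c < nCoil; ++c)
            for (int g = 0; g < static_cast<int>(hits.size()); ++g)
                if (hits[g] > 0) grid[c][g] /= static_cast<float>(hits[g]);
    }

Parallelizing over coils lets each thread accumulate into its own grid plane without synchronization; the actual implementation may partition the work differently (e.g., across samples, slices, or frames).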

Results

Real-time cardiac images of a healthy subject were acquired (under our IRB-approved protocol, with prior written consent) on a 1.5 T Avanto scanner (Siemens Medical Solutions, Erlangen, Germany). Data acquisition used a 2D radial TrueFISP sequence (flip angle = 45 degrees, TR = 2.36 ms, 128 samples per readout). Long- and short-axis cardiac data were acquired with a 32-element array (InVivo Corporation) using 64 or 96 projections. Images were reconstructed on a 32-core custom Linux workstation (HiPerStation 8000, HPC Systems). Figures 1 and 2 show 128 × 64 short-axis and 128 × 96 long-axis cine image series reconstructed in real time. Reconstruction and weights-calculation times for RT-GROG and SC-GROG are given in Tables 1 and 2. Image reconstruction is faster than the data acquisition, even with 32-channel data, because the LUT is used during gridding. LUT calculation adds 86 ms to the weights calculation for a 32-channel 128 × 64 dataset and 91 ms for 128 × 96, but yields a 25× faster reconstruction than conventional SC-GROG for a 128 × 96 matrix.
Table 1

Single-frame reconstruction time (seconds) for RT-GROG and SC-GROG (32-coil acquisition)

            128 × 64   128 × 96
  SC-GROG   0.840      1.305
  RT-GROG   0.041      0.052

Table 2

Weights-calculation time (seconds) for RT-GROG and SC-GROG (32-coil acquisition)

            128 × 64   128 × 96
  SC-GROG   0.294      0.369
  RT-GROG   0.380      0.460

Figure 1: 128 × 64 short-axis cine image series reconstructed in real time with RT-GROG.

Figure 2: 128 × 96 long-axis cine image series reconstructed in real time with RT-GROG.

Conclusion

We present the first real-time SC-GROG implementation. The weights-calculation and reconstruction processes are decoupled to run asynchronously and are parallelized. A LUT improves reconstruction performance by removing the time-consuming per-sample 2D weight calculation from the gridding step. RT-GROG reconstruction was always faster than the data acquisition, even for 32-channel data sets. Additionally, the weights update is fast enough to track changes in coil sensitivity profiles, improving adaptability. Our implementation can easily be adapted to spiral imaging [1].

Authors’ Affiliations

(1)
Translational Medicine Branch, NIH/NHLBI, DHHS, Bethesda, MD, USA
(2)
Department of Radiology, University Hospitals, Cleveland, OH, USA
(3)
Institute of Biomedical Engineering, Bogazici University, Istanbul, Turkey

References

  1. Seiberlich N, et al.: Self-calibrating GRAPPA operator gridding for radial and spiral trajectories. Magn Reson Med. 2008, 59 (4): 930-935. doi:10.1002/mrm.21565.
  2. Griswold MA, et al.: Parallel magnetic resonance imaging using the GRAPPA operator formalism. Magn Reson Med. 2005, 54: 1553-1556. doi:10.1002/mrm.20722.
  3. Jackson JI, et al.: Selection of a convolution function for Fourier inversion using gridding. IEEE Trans Med Imaging. 1991, 10: 473-478. doi:10.1109/42.97598.
