[Unrelated to challenge] Possibility of fast multi-echo reconstruction?

Hi, first of all, thanks for all the effort put into the code, and especially into the reproducibility aspects. The idea of providing scripts that closely follow some of the leaderboard submissions is especially great.

This is unrelated to the 2020 challenge, since the dataset would be different, but I was hoping to get someone’s comment on it. I’m trying to adapt the fastMRI code (as of the August 2020 refactor) to reconstruct multi-echo GRE (http://mriquestions.com/gradient-echo.html). Since GRE is one of the most common clinical MR sequences, I think developing a highly accelerated deep-learning-based reconstruction for it would be very interesting.

In short, multi-contrast multi-echo GRE image reconstruction is currently done by acquiring the same samples (lines of k-space) for each echo and treating each echo time as essentially a separate reconstruction problem. In that sense, I think the current methods probably oversample: they don’t exploit the shared structure between the echoes.

However, if we could integrate this whole process into a single joint reconstruction, a network could learn to exploit the shared information across the echoes. In that scenario, higher acceleration could be achieved with complementary k-space undersampling, e.g., sampling points across the echoes in a staggered (k_y, k_z) pattern, similar to 2D-CAIPIRINHA (http://mriquestions.com/caipirinha.html). This creates controlled aliasing in the phase (y) and partition (z) encoding plane, which increases the distance between aliased voxels.
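To make the idea concrete, here is a minimal sketch of what such complementary staggered masks could look like. The function name and the particular shift rule (shifting the sampled k_y lines by both the echo index and the k_z row) are my own assumptions for illustration, not anything from the fastMRI code:

```python
import numpy as np

def staggered_masks(n_ky, n_kz, n_echoes, accel=4):
    """Build complementary (k_y, k_z) undersampling masks, one per echo.

    Each echo keeps every `accel`-th k_y line, but the set of sampled
    lines is shifted with both the echo index and the k_z row, similar
    in spirit to 2D-CAIPIRINHA: aliasing is spread across the phase (y)
    and partition (z) directions, and the union over echoes covers more
    of k-space than any single echo does.
    """
    masks = np.zeros((n_echoes, n_ky, n_kz), dtype=bool)
    for echo in range(n_echoes):
        for z in range(n_kz):
            offset = (echo + z) % accel  # stagger in both echo and k_z
            masks[echo, offset::accel, z] = True
    return masks

masks = staggered_masks(n_ky=8, n_kz=6, n_echoes=4, accel=4)
# Each echo samples 1/accel of k-space, no two echoes sample the same
# point, and (with n_echoes == accel) the union covers all of k-space.
```

With `n_echoes == accel` the echoes tile k-space exactly; a joint reconstruction network could then, in principle, borrow structure from the other echoes to fill in what each individual echo is missing.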

Any comments on how best to go about this, especially in light of the planned refactor, would be appreciated.

It sounds like an interesting project. For joint reconstructions I’ve generally referred to some of the work from the MGH group (e.g., this one). They use a variational network applied to multi-contrast data, and from a quick glance they also seem to use a joint encoding scheme. We do have a variational network in the repository that performs fairly well.

Unfortunately, I don’t know much about this work beyond the abstract. It sounds like something that might be hard to train with the fastMRI data alone, but maybe you could pretrain the VN on the fastMRI data and then fine-tune a few layers on multi-contrast data that you collect yourself.
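The pretrain-then-fine-tune step could be sketched as below. `TinyVarNet` is a hypothetical stand-in with a cascade-of-refinements structure loosely like a variational network; the real fastMRI VarNet has different internals, and the layer choice and learning rate here are placeholder assumptions:

```python
import torch
from torch import nn

class TinyVarNet(nn.Module):
    """Toy cascade-style network; illustrative only, not the fastMRI VarNet."""

    def __init__(self, n_cascades=4):
        super().__init__()
        self.cascades = nn.ModuleList(
            nn.Conv2d(1, 1, kernel_size=3, padding=1) for _ in range(n_cascades)
        )

    def forward(self, x):
        for cascade in self.cascades:
            x = x + cascade(x)  # residual refinement step
        return x

model = TinyVarNet()
# (1) Pretrain on fastMRI data (assumed already done; load weights here).

# (2) Fine-tune only the last cascade on your own multi-contrast data:
for p in model.parameters():
    p.requires_grad = False
for p in model.cascades[-1].parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Freezing everything except the final layers keeps the pretrained features intact while letting the network adapt to the new contrast distribution with relatively little data.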