Deep ADMM-Net for Compressive Sensing MRI
NIPS 2016
Abstract: Compressive sensing (CS) is an effective approach for fast Magnetic Resonance Imaging (MRI). It aims to reconstruct MR images from a small amount of under-sampled data in k-space, thereby accelerating MRI data acquisition. To improve both the accuracy and the speed of CS-MRI reconstruction, we propose a novel deep architecture called ADMM-Net. ADMM-Net is defined over a data flow graph derived from the iterative procedure of the ADMM algorithm for optimizing a CS-based MRI model. In the training phase, all parameters of the network, e.g., the image transforms and shrinkage functions, are learned end-to-end with the L-BFGS algorithm. In the testing phase, the network uses these learned parameters, which are optimized for the CS-based reconstruction task. Experiments verify the effectiveness of this method.
To optimize a CS-MRI model, ADMM has proven to be an effective method: it separates variables and comes with convergence guarantees. Given a CS-MRI model, ADMM considers its augmented Lagrangian function, splits the variables into several subgroups, and solves the problem by alternately optimizing each sub-problem. Although the ADMM method is generally effective, it is difficult to determine the optimal parameters (e.g., update rates, penalty parameters), which greatly affect the final CS-MRI reconstruction performance.
In this work, our goal is to design a fast yet accurate method to reconstruct high-quality MR images from under-sampled k-space data. We propose a new deep architecture, ADMM-Net, inspired by the ADMM iterative procedure. The network consists of several stages, each of which corresponds to one iteration of the ADMM algorithm. More specifically, the deep architecture is represented by a data flow graph: each operation of ADMM is a node in the graph, and the data flow between two operations is a directed edge. The iterative procedure of ADMM therefore naturally determines the deep structure through the graph. Given under-sampled data in k-space, the data flow through the graph and produce a reconstructed image. All parameters of the deep architecture can be learned from pairs of under-sampled k-space data and images reconstructed from fully sampled data, via backpropagation over the data flow graph.
Our experiments demonstrate the effectiveness of the proposed method in both reconstruction accuracy and speed.
The contributions of the paper can be summarized as follows:
1. A novel ADMM-Net is proposed for CS-MRI by integrating the ADMM algorithm into a deep network.
This is achieved by designing a data flow graph for ADMM, which makes it possible to build and train ADMM-Net effectively.
2. ADMM-Net achieves very good reconstruction results while remaining computationally fast.
3. This extends discriminative parameter learning, which had previously been applied to models such as sparse coding and MRF, to the ADMM optimization procedure.
2. Deep ADMM-Net for Fast MRI
2.1 Compressive Sensing MRI Model and ADMM Algorithm
General CS-MRI model: Suppose x is the MR image to be reconstructed and y is the under-sampled k-space data. According to CS theory, the image can be reconstructed by solving the following optimization problem:
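A sketch of this standard CS-MRI formulation, where A = P F (P is the under-sampling matrix, F the Fourier transform), D_l are transform matrices (e.g., DCT or wavelet bases), g(·) is a sparsity-inducing regularizer (e.g., the ℓ1 norm), and λ_l are regularization parameters:

$$\hat{x} = \arg\min_{x}\; \frac{1}{2}\,\|A x - y\|_2^2 + \sum_{l=1}^{L} \lambda_l\, g(D_l x)$$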
ADMM Solver:
The above problem can be solved efficiently by the ADMM algorithm. By introducing auxiliary variables, the formulation can be rewritten as:
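With auxiliary variables z = {z_1, ..., z_L}, the constrained form is roughly:

$$\min_{x,\, z}\; \frac{1}{2}\,\|A x - y\|_2^2 + \sum_{l=1}^{L} \lambda_l\, g(z_l) \quad \text{s.t.}\quad z_l = D_l x,\;\; l = 1, \dots, L$$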
Its augmented Lagrangian function is:
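With Lagrange multipliers α_l and penalty parameters ρ_l (the paper later works with scaled multipliers β_l = α_l / ρ_l), the augmented Lagrangian can be sketched as:

$$L_{\rho}(x, z, \alpha) = \frac{1}{2}\,\|A x - y\|_2^2 + \sum_{l=1}^{L} \Big[ \lambda_l\, g(z_l) + \langle \alpha_l,\; D_l x - z_l \rangle + \frac{\rho_l}{2}\,\|D_l x - z_l\|_2^2 \Big]$$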
Then, the ADMM algorithm can iteratively solve the following three sub-problems:
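In scaled form (substituting β_l = α_l / ρ_l, with η_l denoting the multiplier update rates), the three sub-problems are roughly:

$$\begin{aligned}
x^{(n)} &= \arg\min_{x}\; \tfrac{1}{2}\,\|A x - y\|_2^2 + \sum_{l} \tfrac{\rho_l}{2}\,\big\|D_l x - z_l^{(n-1)} + \beta_l^{(n-1)}\big\|_2^2, \\
z_l^{(n)} &= \arg\min_{z_l}\; \lambda_l\, g(z_l) + \tfrac{\rho_l}{2}\,\big\|D_l x^{(n)} - z_l + \beta_l^{(n-1)}\big\|_2^2, \\
\beta_l^{(n)} &= \beta_l^{(n-1)} + \eta_l\,\big(D_l x^{(n)} - z_l^{(n)}\big).
\end{aligned}$$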
Solving these sub-problems yields the following three update rules:
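Assuming a shrinkage (proximal) function S(·) induced by g(·), e.g., soft-thresholding for the ℓ1 norm, the updates take roughly the following form (a sketch of Eq. 5; F is the Fourier transform and P the under-sampling matrix):

$$\begin{aligned}
x^{(n)} &= F^{T}\Big(P^{T}P + \sum_{l} \rho_l\, F D_l^{T} D_l F^{T}\Big)^{-1}\Big[P^{T}y + \sum_{l} \rho_l\, F D_l^{T}\big(z_l^{(n-1)} - \beta_l^{(n-1)}\big)\Big], \\
z_l^{(n)} &= \mathcal{S}\big(D_l x^{(n)} + \beta_l^{(n-1)};\; \lambda_l / \rho_l\big), \\
\beta_l^{(n)} &= \beta_l^{(n-1)} + \eta_l\,\big(D_l x^{(n)} - z_l^{(n)}\big).
\end{aligned}$$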
2.2 Data Flow Graph for the ADMM Algorithm
To design our deep ADMM-Net, we first map the ADMM iterations in Eq. 5 to a data flow graph. As shown in Figure 1, the nodes of the graph correspond to the different operations in ADMM, and the directed edges correspond to the data flows between operations. The n-th iteration of the ADMM algorithm then corresponds to the n-th stage of the data flow graph. In the n-th stage there are four types of nodes, mapped from the four types of operations in Eq. 5: the reconstruction operation, the convolution operation, the nonlinear transform operation, and the multiplier update operation. The whole graph is a repetition of such stages, corresponding to successive iterations of ADMM. Given under-sampled data in k-space, the data flow through the graph and finally produce a reconstructed image. In this way, we map the ADMM iterations to a data flow graph, which is convenient for defining and training our deep ADMM-Net.
2.3 Deep ADMM-Net
Our deep ADMM-Net is defined over the data flow graph. It keeps the structure of the graph, but generalizes the four types of operations into network layers with learnable parameters: the reconstruction layer, the convolution layer, the nonlinear transform layer, and the multiplier update layer.
Reconstruction layer (X^(n)):
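Following the corresponding ADMM x-update, the reconstruction layer can be sketched as below, where H_l^(n) are learnable filter matrices, ρ_l^(n) are learnable penalty parameters, and z_l^(0), β_l^(0) are initialized to zero in the first stage (exact subscripting follows the paper):

$$x^{(n)} = F^{T}\Big(P^{T}P + \sum_{l=1}^{L} \rho_l^{(n)}\, F H_l^{(n)T} H_l^{(n)} F^{T}\Big)^{-1}\Big[P^{T}y + \sum_{l=1}^{L} \rho_l^{(n)}\, F H_l^{(n)T}\big(z_l^{(n-1)} - \beta_l^{(n-1)}\big)\Big]$$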
Convolution layer (C^(n)):
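This layer applies learnable filters D_l^(n) (not tied to H_l^(n)) to the reconstructed image; roughly:

$$c_l^{(n)} = D_l^{(n)}\, x^{(n)}$$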
Nonlinear transform layer (Z^(n)):
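Instead of a fixed shrinkage function, the nonlinear transform is a learnable piecewise linear function defined by predefined control-point positions {p_i} and learnable values {q_{l,i}^(n)}; a sketch:

$$z_l^{(n)} = S_{\mathrm{PLF}}\big(c_l^{(n)} + \beta_l^{(n-1)};\; \{p_i,\, q_{l,i}^{(n)}\}\big)$$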
Multiplier update layer (M^(n)):
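With learnable update rates η_l^(n), the multiplier update can be written roughly as:

$$\beta_l^{(n)} = \beta_l^{(n-1)} + \eta_l^{(n)}\,\big(c_l^{(n)} - z_l^{(n)}\big)$$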
Network Parameters:
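The learnable parameters of each stage thus include the filters, the penalty parameters, the control-point values of the piecewise linear functions, and the multiplier update rates. The following is a minimal NumPy sketch of a single stage on a toy 1-D problem; the names, toy dimensions, and dense-matrix implementation are illustrative assumptions, not the paper's implementation (the paper uses convolutional filters, computes the reconstruction layer efficiently in the Fourier domain, and trains all stages end-to-end with L-BFGS).

```python
import numpy as np

# Toy sizes: signal length, number of k-space measurements, number of filters.
N, M, L = 32, 16, 2
rng = np.random.default_rng(0)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary Fourier transform matrix
P = np.eye(N)[rng.choice(N, M, replace=False)]    # under-sampling matrix
A = P @ F                                         # measurement matrix A = P F

# Hypothetical learnable parameters of one stage (illustrative initializations).
H = [rng.standard_normal((N, N)) * 0.1 for _ in range(L)]  # reconstruction-layer filters
D = [rng.standard_normal((N, N)) * 0.1 for _ in range(L)]  # convolution-layer filters
rho = np.full(L, 1.0)                                      # penalty parameters
eta = np.full(L, 1.0)                                      # multiplier update rates
p = np.linspace(-1.0, 1.0, 11)                             # fixed control-point positions
q = [np.sign(p) * np.maximum(np.abs(p) - 0.1, 0.0)         # learnable control-point values,
     for _ in range(L)]                                    # initialized like soft-thresholding

def plf(v, p, q):
    """Piecewise linear function through control points (p_i, q_i);
    np.interp clamps outside the range, unlike the paper's linear extension."""
    return np.interp(v, p, q)

def stage(y, z_prev, beta_prev):
    """One ADMM-Net stage: reconstruction, convolution, nonlinear transform, multiplier update."""
    # Reconstruction layer: solve the (small, dense) normal equations of the x-subproblem.
    lhs = A.conj().T @ A + sum(rho[l] * H[l].T @ H[l] for l in range(L))
    rhs = A.conj().T @ y + sum(rho[l] * H[l].T @ (z_prev[l] - beta_prev[l]) for l in range(L))
    x = np.real(np.linalg.solve(lhs, rhs))

    c = [D[l] @ x for l in range(L)]                              # convolution layer
    z = [plf(c[l] + beta_prev[l], p, q[l]) for l in range(L)]     # nonlinear transform layer
    beta = [beta_prev[l] + eta[l] * (c[l] - z[l]) for l in range(L)]  # multiplier update layer
    return x, z, beta

# Unrolling a few stages mimics the iterative ADMM solver.
x_true = rng.standard_normal(N)
y = A @ x_true
z = [np.zeros(N) for _ in range(L)]
beta = [np.zeros(N) for _ in range(L)]
for _ in range(3):
    x, z, beta = stage(y, z, beta)
print("reconstruction shape:", x.shape)
```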
3. Network Training
We take the MR image reconstructed from fully sampled k-space data as the ground-truth image x^gt, and the under-sampled k-space data y as the input. The training set Γ then consists of pairs of these two kinds of data. The loss function of the network can be defined as:
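A sketch of such a loss (a normalized error between the network output x̂(y, Θ) and the ground truth, averaged over the training set Γ, with Θ denoting all network parameters):

$$E(\Theta) = \frac{1}{|\Gamma|} \sum_{(y,\, x^{gt}) \in \Gamma} \frac{\big\| \hat{x}(y, \Theta) - x^{gt} \big\|_2}{\big\| x^{gt} \big\|_2}$$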
3.1 Initialization
3.2 Gradient Computation by Backpropagation over the Data Flow Graph
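The gradients are computed layer by layer in reverse order over the data flow graph. A generic statement of backpropagation over such a graph (not the paper's layer-specific derivations): for a node with output o^(k) = f_k(·; θ_k),

$$\frac{\partial E}{\partial \theta_k} = \frac{\partial E}{\partial o^{(k)}} \cdot \frac{\partial o^{(k)}}{\partial \theta_k}, \qquad \frac{\partial E}{\partial o^{(j)}} = \sum_{k:\; o^{(j)} \to o^{(k)}} \frac{\partial E}{\partial o^{(k)}} \cdot \frac{\partial o^{(k)}}{\partial o^{(j)}}$$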