
    Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging.

    • Jie Fu, Yingli Yang, Kamal Singhrao, Dan Ruan, Fang-I Chu, Daniel A Low, and John H Lewis.
    • David Geffen School of Medicine, University of California, Los Angeles, 10833 Le Conte Ave, Los Angeles, CA 90095, USA.
    • Med Phys. 2019 Sep 1;46(9):3788-3798.

    Purpose: The improved soft tissue contrast of magnetic resonance imaging (MRI) compared to computed tomography (CT) makes it a useful imaging modality for radiotherapy treatment planning. Even when MR images are acquired for treatment planning, standard clinical practice currently also requires a CT for dose calculation and x-ray-based patient positioning. This increases workload, introduces uncertainty due to the required inter-modality image registrations, and involves unnecessary irradiation. While it would be beneficial to use MR images exclusively, a method is needed to estimate a synthetic CT (sCT) for generating electron density maps and patient positioning reference images. We investigated 2D and 3D convolutional neural networks (CNNs) for generating a male pelvic sCT from a T1-weighted MR image and compared their performance.

    Methods: A retrospective study was performed using CTs and T1-weighted MR images of 20 prostate cancer patients. CTs were deformably registered to MR images to create CT-MR pairs for training the networks. The proposed 2D CNN, which contained 27 convolutional layers, was modified from a state-of-the-art 2D CNN to save computational memory and to prepare for building the 3D CNN. The proposed 2D and 3D models were trained from scratch to map intensities of T1-weighted MR images to CT Hounsfield unit (HU) values. Each sCT was generated in a fivefold cross-validation framework and compared with the corresponding deformed CT (dCT) using the voxel-wise mean absolute error (MAE). The sCT geometric accuracy was evaluated by comparing bone regions, defined by thresholding the dCTs and sCTs at 150 HU, using the Dice similarity coefficient (DSC), recall, and precision. To evaluate sCT patient positioning accuracy, bone regions in dCTs and sCTs were rigidly registered to the corresponding cone-beam CTs. The resulting paired Euler transformation vectors were compared by calculating translation vector distances and absolute differences of Euler angles. Statistical tests were performed to evaluate the differences among the proposed models and Han's model.

    Results: Generating a pelvic sCT required approximately 5.5 s using the proposed models. The average MAEs within the body contour were 40.5 ± 5.4 HU (mean ± SD) and 37.6 ± 5.1 HU for the 2D and 3D CNNs, respectively. The average DSC, recall, and precision for the bone region (thresholding the CT at 150 HU) were 0.81 ± 0.04, 0.85 ± 0.04, and 0.77 ± 0.09 for the 2D CNN, and 0.82 ± 0.04, 0.84 ± 0.04, and 0.80 ± 0.08 for the 3D CNN, respectively. For both models, mean translation vector distances were less than 0.6 mm and mean absolute differences of Euler angles were less than 0.5°.

    Conclusions: The 2D and 3D CNNs generated accurate pelvic sCTs for the 20 patients from T1-weighted MR images. Statistical tests indicated that the proposed 3D model generated sCTs with smaller MAE and higher bone region precision than the 2D models. Patient alignment tests suggested that sCTs generated by the proposed CNNs can provide accurate patient positioning. The accuracy of dose calculation using the generated sCTs will be tested and compared between the proposed models in future work.

    © 2019 American Association of Physicists in Medicine.
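    The image-quality metrics reported above are straightforward to reproduce. Below is a minimal sketch, assuming sct and dct are NumPy arrays of CT numbers (HU) on the same voxel grid and body_mask is a boolean body-contour mask; the function name and arguments are illustrative, not from the paper.

        import numpy as np

        BONE_THRESHOLD_HU = 150  # bone threshold used in the paper

        def evaluate_sct(sct, dct, body_mask):
            """Voxel-wise MAE within the body contour, plus DSC, recall, and
            precision for bone regions thresholded at 150 HU."""
            # Mean absolute error restricted to voxels inside the body contour
            mae = np.mean(np.abs(sct[body_mask] - dct[body_mask]))

            # Binary bone masks: deformed CT is the reference, sCT the prediction
            bone_ref = dct >= BONE_THRESHOLD_HU
            bone_pred = sct >= BONE_THRESHOLD_HU

            tp = np.logical_and(bone_ref, bone_pred).sum()
            dsc = 2.0 * tp / (bone_ref.sum() + bone_pred.sum())
            recall = tp / bone_ref.sum()      # fraction of reference bone recovered
            precision = tp / bone_pred.sum()  # fraction of predicted bone that is real
            return mae, dsc, recall, precision

    Per-patient values computed this way would then be averaged across the fivefold cross-validation, matching how the paper reports mean ± SD.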
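    The positioning test requires rigidly registering each bone image to the patient's cone-beam CT. A sketch of one such registration with SimpleITK follows; the mutual-information metric and gradient-descent settings are assumptions for illustration, since the abstract does not specify the registration configuration.

        import SimpleITK as sitk

        def register_bone_to_cbct(bone_image, cbct_image):
            """Rigidly register a bone-only image to a CBCT and return the
            Euler transformation vector (rotX, rotY, rotZ, tx, ty, tz)."""
            # Initialize by aligning the geometric centers of the two images
            initial = sitk.CenteredTransformInitializer(
                cbct_image, bone_image, sitk.Euler3DTransform(),
                sitk.CenteredTransformInitializerFilter.GEOMETRY)

            reg = sitk.ImageRegistrationMethod()
            # Metric and optimizer choices are illustrative, not from the paper
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
            reg.SetOptimizerAsRegularStepGradientDescent(
                learningRate=1.0, minStep=1e-4, numberOfIterations=200)
            reg.SetInitialTransform(initial, inPlace=False)
            reg.SetInterpolator(sitk.sitkLinear)

            final = reg.Execute(sitk.Cast(cbct_image, sitk.sitkFloat32),
                                sitk.Cast(bone_image, sitk.sitkFloat32))
            return final.GetParameters()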
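    With both registrations in hand, the paired Euler transformation vectors can be compared exactly as the paper describes. A minimal sketch, assuming SimpleITK's Euler3DTransform parameter order (three rotation angles in radians, then three translations in mm):

        import numpy as np

        def compare_transforms(params_dct, params_sct):
            """Translation vector distance (mm) and absolute Euler angle
            differences (degrees) between two rigid registration results."""
            a = np.asarray(params_dct, dtype=float)
            b = np.asarray(params_sct, dtype=float)

            # Euclidean distance between the paired translation vectors
            translation_distance = np.linalg.norm(a[3:] - b[3:])

            # Per-axis absolute rotation differences, reported in degrees
            angle_differences = np.degrees(np.abs(a[:3] - b[:3]))
            return translation_distance, angle_differences

    For the reported cohort, these quantities averaged below 0.6 mm and 0.5° for both models.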
