• Spine · Jan 2021

    Automated Detection of Spinal Schwannomas Utilizing Deep Learning Based on Object Detection from MRI.

    • Sadayuki Ito, Kei Ando, Kazuyoshi Kobayashi, Hiroaki Nakashima, Masahiro Oda, Masaaki Machino, Shunsuke Kanbara, Taro Inoue, Hidetoshi Yamaguchi, Hiroyuki Koshimizu, Kensaku Mori, Naoki Ishiguro, and Shiro Imagama.
    • Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan.
    • Spine. 2021 Jan 15; 46(2): 95-100.

    Study Design: A retrospective analysis of magnetic resonance imaging (MRI) was conducted.

    Objective: This study aims to develop an automated system for the detection of spinal schwannoma by employing deep learning based on object detection from MRI, and to verify its performance against that of spine surgeons.

    Summary of Background Data: Many MRI scans are performed to diagnose patients with suspected spinal disease. Most spinal diseases do not involve tumors of the spinal cord, but a tumor may occasionally be present at an unexpected level or without symptoms. Such tumors are difficult to recognize and, in some cases, may be overlooked. A deep learning approach based on object detection could minimize the probability of overlooking these tumors.

    Methods: Data from 50 patients with spinal schwannoma who had undergone MRI were retrospectively reviewed. Sagittal T1- and T2-weighted magnetic resonance images (T1WI and T2WI) were used for training and validation of the object detection model. You Only Look Once version 3 (YOLOv3) was used to develop the object detection system, and its accuracy was calculated. The performance of the proposed system was compared to that of two doctors.

    Results: The accuracies of the proposed object detection based on T1WI, T2WI, and both T1WI and T2WI were 80.3%, 91.0%, and 93.5%, respectively. The accuracies of the doctors were 90.2% and 89.3%.

    Conclusion: Automated object detection of spinal schwannoma was achieved. The proposed system yielded a high accuracy that was comparable to that of the doctors.

    Level of Evidence: 4.

    Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
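    The abstract does not specify how the T1WI and T2WI detections were combined to reach the 93.5% figure. A common convention is to count a case as detected when either sequence yields a positive finding. The Python sketch below illustrates that evaluation logic only; the data schema, the OR-combination rule, and all names are assumptions for illustration, not the authors' published pipeline.

```python
from dataclasses import dataclass


@dataclass
class CaseResult:
    """Per-case detection outcome from a YOLO-style detector (hypothetical schema)."""
    detected_t1: bool  # tumor localized on sagittal T1-weighted images
    detected_t2: bool  # tumor localized on sagittal T2-weighted images


def detection_accuracy(results: list[CaseResult], use_t1: bool, use_t2: bool) -> float:
    """Fraction of tumor-positive cases detected under the chosen sequence(s).

    Assumes every case contains a schwannoma, so "accuracy" here reduces to the
    detection rate; a case counts as detected if any selected sequence is positive.
    """
    hits = sum(
        int((use_t1 and r.detected_t1) or (use_t2 and r.detected_t2))
        for r in results
    )
    return hits / len(results)


if __name__ == "__main__":
    # Toy data, not from the study.
    cases = [
        CaseResult(detected_t1=True, detected_t2=True),
        CaseResult(detected_t1=False, detected_t2=True),
        CaseResult(detected_t1=True, detected_t2=False),
    ]
    print(f"T1WI only:       {detection_accuracy(cases, True, False):.1%}")
    print(f"T2WI only:       {detection_accuracy(cases, False, True):.1%}")
    print(f"T1WI and T2WI:   {detection_accuracy(cases, True, True):.1%}")
```

    Under this assumed rule, the combined score can only match or exceed the single-sequence scores, which is consistent with the reported ordering of results (80.3% and 91.0% individually, 93.5% combined).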
