- Mattea L Welch, Chris McIntosh, Alberto Traverso, Leonard Wee, Tom G Purdie, Andre Dekker, Benjamin Haibe-Kains, and David A Jaffray.
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada. Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada. The Techna Institute for the Advancement of Technology for Health, Toronto, Ontario, Canada.
- Phys Med Biol. 2020 Feb 5; 65 (3): 035017.
Abstract

Quality assurance of data prior to use in automated pipelines and image analysis would assist in safeguarding against biases and incorrect interpretation of results. Automating quality assurance steps would further improve the robustness and efficiency of these methods, motivating widespread adoption of these techniques. Previous work by our group demonstrated the ability of convolutional neural networks (CNNs) to efficiently classify head and neck (H&N) computed tomography (CT) images for the presence of dental artifacts (DAs) that obscure visualization of structures and the accuracy of Hounsfield units. In this work we demonstrate the generalizability of our previous methodology by validating CNNs on six external datasets, and the potential benefits of transfer learning with fine-tuning on CNN performance. 2112 H&N CT images from seven institutions were scored as DA positive or negative. 1538 images from a single institution were used to train three CNNs with resampling grid sizes of 64³, 128³ and 256³. The remaining six external datasets were used in five-fold cross-validation with a data split of 20% training/fine-tuning and 80% validation. The three pre-trained models were each validated using the five folds of the six external datasets. The pre-trained models also underwent transfer learning with fine-tuning using the 20% training/fine-tuning data, and were validated using the corresponding validation datasets. The highest micro-averaged AUC for our pre-trained models across all external datasets occurred with a resampling grid of 256³ (AUC = 0.91 ± 0.01). Transfer learning with fine-tuning further improved generalizability at the 256³ resampling grid, reaching a micro-averaged AUC of 0.92 ± 0.01. Despite these promising results, transfer learning did not improve AUC when utilizing small resampling grids or small datasets.

Our work demonstrates the potential of our previously developed automated quality assurance methods to generalize to external datasets. Additionally, we showed that transfer learning with fine-tuning on small portions of external datasets can adapt models for improved performance when large variations in images are present.
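The evaluation metric above, a micro-averaged AUC across multiple external datasets, pools every prediction and label from all datasets before computing a single AUC, so larger datasets contribute proportionally more. A minimal sketch of that pooling (the function names and toy data are illustrative, not the authors' code; AUC here is computed via the rank-based Mann-Whitney U formulation):

```python
import numpy as np

def auc_score(y_true, y_score):
    """ROC AUC via the Mann-Whitney U statistic: rank all scores,
    then compare the rank-sum of the positive class against chance."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    order = y_score.argsort()
    ranks = np.empty(len(y_score), dtype=float)
    ranks[order] = np.arange(1, len(y_score) + 1)  # 1-based ranks
    u = ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

def micro_averaged_auc(datasets):
    """Pool (labels, scores) pairs from every dataset, then compute
    one AUC over the pooled predictions (micro-averaging)."""
    labels = np.concatenate([d[0] for d in datasets])
    scores = np.concatenate([d[1] for d in datasets])
    return auc_score(labels, scores)

# Toy example: two hypothetical external datasets with DA labels.
external = [
    (np.array([0, 0, 1, 1]), np.array([0.10, 0.40, 0.80, 0.90])),
    (np.array([1, 0]),       np.array([0.70, 0.20])),
]
print(micro_averaged_auc(external))  # → 1.0 (perfect separation)
```

Micro-averaging (pooling) differs from macro-averaging (mean of per-dataset AUCs); the former weights each image equally, which is why it reflects performance across datasets of unequal size.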