Pneumothorax Detection On Chest Radiographs: A Comparative Analysis Of Public Datasets, Deep Learning Architectures, And Domain Adaptation Via Iterative Self-Training

R. Kluge

In part I of this thesis, we present an open and transparent deep learning approach to pneumothorax detection. We combine several publicly available datasets that are accessible to anyone, giving us a verifiable multi-center approach. Even though we train on public datasets only, we achieve performance (AUC of 0.94) on par with state-of-the-art research that uses additional private datasets (Majkowska et al., 2020). Because all the datasets used in this research are public, we publish our model weights and algorithm, and hope that future research can benefit from our work.
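As an illustration of such a multi-center setup, the sketch below pools several public chest radiograph datasets into a single training set. The dataset classes, module name, and binary pneumothorax labelling are assumptions for illustration only; the thesis' actual data pipeline may differ.

```python
# Minimal sketch: pooling public chest X-ray datasets into one training set.
# The dataset classes and the `public_cxr_datasets` module are hypothetical.
from torch.utils.data import ConcatDataset, DataLoader

from public_cxr_datasets import ChestXray14Dataset, CheXpertDataset, SIIMDataset  # hypothetical

train_set = ConcatDataset([
    ChestXray14Dataset(split="train", label="pneumothorax"),
    CheXpertDataset(split="train", label="pneumothorax"),
    SIIMDataset(split="train", label="pneumothorax"),
])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
```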

Further, in part II we propose an unsupervised domain adaptation method, iterative self-training, that improves performance on an unseen dataset (e.g. data from a different hospital) without the need for additional labelling. When adapting from the public CheXpert dataset to the public SIIM dataset, this method improves the AUC for pneumothorax detection from 0.82 to 0.89. This method was submitted to MIDL 2020 as a short paper (Appendix C).
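A minimal sketch of an iterative self-training loop is shown below, assuming a PyTorch binary classifier, a labelled source-domain dataset, and a loader of unlabelled target-domain images. The confidence thresholds, number of rounds, and optimiser settings are illustrative assumptions, not the exact configuration used in the thesis.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def pseudo_label(model, target_loader, low=0.2, high=0.8, device="cuda"):
    """Run the current model on unlabelled target-domain images and keep
    only confident predictions as pseudo-labels (assumed thresholds)."""
    model.eval()
    images, labels = [], []
    with torch.no_grad():
        for x in target_loader:                       # x: batch of target-domain images
            p = torch.sigmoid(model(x.to(device))).squeeze(1).cpu()
            keep = (p < low) | (p > high)             # discard uncertain predictions
            images.append(x[keep])
            labels.append((p[keep] > high).float())
    return TensorDataset(torch.cat(images), torch.cat(labels))

def self_train(model, source_set, target_loader, rounds=3, epochs=1, device="cuda"):
    """Iterative self-training: alternate between pseudo-labelling the target
    domain and retraining on source labels plus confident pseudo-labels."""
    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(rounds):
        pseudo_set = pseudo_label(model, target_loader, device=device)
        loader = DataLoader(ConcatDataset([source_set, pseudo_set]),
                            batch_size=32, shuffle=True)
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                optimiser.zero_grad()
                loss = criterion(model(x.to(device)).squeeze(1), y.to(device))
                loss.backward()
                optimiser.step()
    return model
```

Restricting the pseudo-labels to confident predictions is one common way to limit error accumulation across self-training rounds; the exact selection strategy used in the thesis may differ.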

Finally, in part III we evaluate the complete pipeline, including our iterative self-training method, on a local private dataset of 28,207 images (RadboudCXR), and verify that iterative self-training successfully adapts to this unseen local dataset (AUC improves from 0.87 to 0.92). We integrate the final algorithm into the grand-challenge platform, where it is publicly accessible for testing: https://grand-challenge.org/algorithms/cxr-pneumothorax-detection/
