Few-shot learning for medical image segmentation

Niels van Hoeffelen
Start date: October, 2022
End date: June, 2023

Bob Sanders
Start date: Nov 14, 2022
End date: May 14, 2023

Clinical problem

Segmentation of anatomical structures and lesions in 3D scans is important for many AI radiology applications. Convolutional neural networks such as the U-Net, an encoder-decoder architecture with skip connections, generally produce good results when trained on a few hundred scans in which the structures to be segmented have been manually outlined. But annotating a few hundred scans is very time-consuming. Moreover, a trained segmentation network may fail when applied to new cases that differ from the data it was trained on, for example because a new case contains abnormalities that were not well represented in the training data.
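The encoder-decoder data flow mentioned above can be illustrated schematically. The sketch below is only a toy, assuming average pooling and nearest-neighbour upsampling in place of the learned convolutions of a real U-Net; the point is how each decoder level concatenates the matching encoder feature map via a skip connection.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling: each encoder level halves the spatial resolution
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbour upsampling back to the previous resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(16, 16)           # input "slice"
enc1 = x                             # encoder level 1 (16x16)
enc2 = downsample(enc1)              # encoder level 2 (8x8)
bottleneck = downsample(enc2)        # bottleneck (4x4)

# Decoder: upsample and concatenate the skip connection from the encoder.
dec2 = np.stack([upsample(bottleneck), enc2])          # skip from enc2
dec1 = np.stack([upsample(dec2.mean(axis=0)), enc1])   # skip from enc1
print(dec1.shape)  # (2, 16, 16): upsampled features + skip, at input resolution
```

In a real U-Net the stacked feature maps would pass through further convolutions and a final 1x1 convolution to produce the per-pixel segmentation.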


Recently it has been demonstrated that large language models such as GPT-3, which have been trained on huge corpora of text in a self-supervised manner, can be quickly adapted to solve new tasks when given only a few examples of the desired input and output (this is called few-shot learning). In this project, the goal is to investigate whether it is possible to develop 2D segmentation methods that can learn a new task from only one or a few example slices with segmented output. Such a method could be used in an interactive setting to massively speed up the annotation of 3D scans: a user segments one or a few slices, and the few-shot 2D segmentation engine produces a 3D output by simply iterating over all slices.
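The interactive protocol described above can be sketched as follows. This is a minimal illustration, not the project's method: `predict_fn` stands in for a trained few-shot segmentation network, and the threshold-based stand-in predictor is a hypothetical placeholder used only to make the loop runnable.

```python
import numpy as np

def segment_volume_few_shot(volume, support_slices, support_masks, predict_fn):
    """Apply a 2D few-shot segmentation engine to every slice of a 3D scan.

    volume: (D, H, W) array; support_slices/support_masks: the one or few
    user-annotated example slices with their binary masks; predict_fn: any
    callable mapping (support_slices, support_masks, query_slice) -> 2D mask.
    """
    return np.stack([
        predict_fn(support_slices, support_masks, volume[z])
        for z in range(volume.shape[0])
    ])

def threshold_predictor(support_slices, support_masks, query_slice):
    # Stand-in engine: threshold each query slice at the mean intensity of
    # the annotated foreground in the support set. A real engine would be a
    # trained few-shot network conditioned on the support examples.
    fg = np.concatenate([s[m > 0] for s, m in zip(support_slices, support_masks)])
    return (query_slice >= fg.mean()).astype(np.uint8)

# Toy example: a synthetic 3D "scan" with a bright cube as the target.
volume = np.zeros((8, 32, 32), dtype=np.float32)
volume[2:6, 10:20, 10:20] = 1.0
support_slices = [volume[3]]                        # one annotated slice
support_masks = [(volume[3] > 0.5).astype(np.uint8)]

segmentation = segment_volume_few_shot(volume, support_slices, support_masks,
                                       threshold_predictor)
print(segmentation.shape)  # (8, 32, 32): one mask per slice
```

The key design point is that the engine is conditioned on the support examples at inference time, so a new structure can be segmented without any retraining.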

Based on recent deep learning literature, we envision several strategies that could be explored to build a good few-shot 2D segmentation method. A small literature search is also part of the project before you embark on building a system.


We have many thousands of 3D scans available, consisting of hundreds of thousands of 2D slices, in which a large number of objects have been manually segmented. These data sets can be used to develop the few-shot segmentation model. We will focus on applications in computed tomography (CT) and optical coherence tomography (OCT) scans.


The project should result in a reusable method to speed up the annotation of new 3D scans, possibly scans for which existing segmentation models produce poor results, so that those models can be improved by retraining them with additional interactively corrected cases. We intend to make the model available on grand-challenge.org and possibly write a publication.


You will be embedded in the Department of Medical Imaging at Radboudumc. We provide access to Sol, our large GPU cluster, and the cloud-based compute infrastructure of grand-challenge.org, powered by Amazon Web Services.


  • Students of Artificial Intelligence, Data Science, Computer Science, Bioinformatics, or Biomedical Sciences in the final stages of their Master's education.
  • You should be proficient in Python programming and have a theoretical understanding of deep learning.
  • Basic biological / biomedical knowledge is preferred.


  • Project duration: 6 months
  • Location: Radboud University Medical Center
  • For more information or to apply for this project, please contact bram.vanginneken@radboudumc.nl.


Niels van Hoeffelen
Master Student

Bob Sanders
Master Student

Silvan Quax
Research Scientist RTC Deep Learning