This project focuses on the integration of histopathological and radiological images to improve our understanding of disease diagnosis and progression in prostate cancer. In medicine, different specialties tend to operate in silos, each with its own tools to best help patients. They seldom cross-pollinate, which limits the cross-specialty learning that could yield key diagnostic insights, especially at the intersection of radiology (in vivo imaging) and pathology (ex vivo imaging). In this project we aim to bridge these two specialties and combine the information from both.
The objective of the project is to develop AI algorithms that reconstruct a 3D volume from 2D histopathology sections and align these 3D reconstructions with the corresponding MRI scans of the same prostates. With this aligned dataset of histopathology and MR images, we will train generative adversarial networks (GANs) to translate one modality into the other. We expect these generative adversarial methods to learn correlations between the two image types, so that the missing modality can be generated artificially when only one is available. Ultimately, we expect these models to aid in estimating prognosis and selecting the optimal treatment for prostate cancer patients.
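To make the first step of the pipeline concrete, the sketch below shows one simple way to stack 2D histopathology sections into a 3D volume and resample it under a rigid transform, as used when aligning a reconstruction to MRI space. This is a minimal NumPy illustration, not the project's actual method; the function names, the nearest-neighbour resampling, and the assumption of pre-ordered, equally spaced, same-size sections are our own simplifications (real pipelines must also handle tissue deformation and slice correspondence).

```python
import numpy as np

def stack_slices(slices, slice_thickness_mm, pixel_spacing_mm):
    """Stack ordered 2D histology sections into a 3D volume (z, y, x).

    Assumes all sections share the same shape and are already ordered
    along the cutting (z) axis with uniform spacing.
    """
    volume = np.stack(slices, axis=0)
    spacing = (slice_thickness_mm, pixel_spacing_mm, pixel_spacing_mm)
    return volume, spacing

def rigid_resample(volume, rotation, translation):
    """Resample `volume` under a rigid transform (3x3 rotation, offset).

    Maps each output voxel back into the input volume (inverse warping)
    and samples with nearest-neighbour interpolation; voxels that map
    outside the input stay zero.
    """
    out = np.zeros_like(volume)
    idx = np.indices(volume.shape).reshape(3, -1)          # output voxel coords
    t = np.asarray(translation, dtype=float).reshape(3, 1)
    src = np.linalg.inv(rotation) @ (idx - t)              # inverse mapping
    src = np.round(src).astype(int)
    shape = np.array(volume.shape).reshape(3, 1)
    valid = np.all((src >= 0) & (src < shape), axis=0)     # in-bounds mask
    out_flat = out.reshape(-1)
    out_flat[valid] = volume[src[0, valid], src[1, valid], src[2, valid]]
    return out_flat.reshape(volume.shape)

# Toy usage: three 4x4 sections, 4 mm apart, 0.5 mm in-plane pixels.
sections = [np.full((4, 4), float(i)) for i in range(3)]
vol, spacing = stack_slices(sections, 4.0, 0.5)
realigned = rigid_resample(vol, np.eye(3), [0, 0, 0])      # identity transform
```

In practice the rotation and translation would come from an optimizer that maximizes a similarity metric (e.g. mutual information) between the reconstructed volume and the MRI, rather than being known in advance.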