Projects 2020

The summer school includes practical and interactive sessions throughout the week, which will be used to encourage communication and hands-on learning. These will include a poster session and mini-projects that participants will work on together in groups. This year’s available projects include:

3D Reconstruction and Learning from Endoscopic Videos

Minimally Invasive Surgery (MIS), or keyhole surgery, is performed by entering the anatomical site via small incisions (laparoscopy) or through a body cavity (colonoscopy, gastrointestinal endoscopy and endo-nasal endoscopic surgery) to perform a surgical procedure. Through a miniature endoscopic camera, the surgeon navigates to and localises the abnormal anatomical site for diagnosis or operation. MIS aims to achieve the same results as open surgery with minimal damage to the tissues and reduced trauma and recovery time for the patient. However, the constrained endoscopic environment, limited field of view and low-resolution imaging pose challenges during the procedure.

This project will focus on 3D scene reconstruction from endoscopic video. Structure-from-Motion software will be used to obtain dense reconstructions from multiple endoscopic views [1]. Additionally, synthetic data generated from existing 3D reconstructions will be used to train a machine learning algorithm to perform single image registration [2] and depth estimation [3]. 
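Structure-from-Motion pipelines such as Meshroom [1] repeat one core geometric step across many frame pairs: triangulating a 3D point from its projections in two views with known camera matrices. The sketch below illustrates that step with linear (DLT) triangulation; the camera poses and image points are synthetic toy values, not measurements from a real endoscope.

```python
# Minimal sketch of two-view triangulation, the geometric core that
# Structure-from-Motion repeats over many endoscopic frames. All numbers
# below are illustrative, not real endoscope calibration data.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two projections."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenise

# Two toy cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.3, 4.0])
x1 = X_true[:2] / X_true[2]                             # projection in camera 1
x2 = (X_true - np.array([1.0, 0.0, 0.0]))[:2] / X_true[2]  # projection in camera 2
X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))
```

With noise-free projections the estimate recovers the true point exactly; in practice, pipelines like Meshroom solve this jointly over many noisy correspondences.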

References:

[1] AliceVision, 2018. Meshroom: A 3D reconstruction software. https://github.com/alicevision/meshroom  

[2] Bano, S., Vasconcelos, F., Amo, M.T., Dwyer, G., Gruijthuijsen, C., Deprest, J., Ourselin, S., Vander Poorten, E., Vercauteren, T. and Stoyanov, D., 2019. Deep sequential mosaicking of fetoscopic videos. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 311-319). Springer, Cham. 

[3] Rau, A., Edwards, P.E., Ahmad, O.F., Riordan, P., Janatka, M., Lovat, L.B. and Stoyanov, D., 2019. Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy. International journal of computer assisted radiology and surgery, 14(7), pp.1167-1176. 

Prerequisites: Python, Tensorflow/Keras, GPU

Basic Augmented Reality Demo (BARD)

In computer aided surgery, we seek to build systems that guide the surgeon. This should lead to safer surgery and better outcomes for patients. In this project, students will investigate some of the key concepts in building an “Augmented Reality” (AR) system for computer aided surgery.

In keyhole surgery, the video camera of the laparoscope provides a convenient means to capture a view of the surgical scene. The video image provides the “Reality” part of “Augmented Reality”. The video image can then be “Augmented” with additional information from pre-operative data such as Magnetic Resonance (MR) or Computed Tomography (CT) scans, for example, highlighting the position of tumours and critical structures like blood vessels.

In this project students will construct a basic AR system using Python. A live video camera (laptop webcam) will be used to capture a view of a surgical scene (pelvis phantom). A surgical pointer will be calibrated, used to mark fiducial points, and then used to register a CT model of the pelvis to the scene. The CT data will then be overlaid on the video to complete the demo. After completing this workshop the student should be able to:

  • Calibrate a camera using a chessboard or similar.
  • Calibrate a tracked pointer, using an invariant point method.
  • Use the tracked pointer to locate fiducial markers in the camera scene.
  • Use the located fiducial markers to register the pre-operative CT scan of a phantom to the camera scene.
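As an illustration of the pointer-calibration step, the invariant point method can be posed as a linear least-squares problem: each tracking frame gives a pointer pose (R_i, t_i), and pivoting about a fixed divot means R_i p + t_i = x for an unknown tip offset p and divot position x. The sketch below solves this on synthetic poses; a real system would read the poses from a tracker, and all numbers here are purely illustrative.

```python
# Hedged sketch of invariant-point (pivot) calibration with synthetic poses.
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve R_i @ p + t_i = x in a least-squares sense.

    Returns the pointer tip offset p and the invariant (divot) point x.
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R           # contributes R_i @ p
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)  # contributes -x
        b[3 * i:3 * i + 3] = -t
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Synthetic pivoting: the tip (150 mm down the pointer shaft) stays fixed
# at the divot while the orientation sweeps through several poses.
p_true = np.array([0.0, 0.0, 150.0])
x_true = np.array([10.0, 20.0, 30.0])
Rs = [rot_z(a) @ rot_x(a / 2) for a in np.linspace(0.1, 1.0, 8)]
ts = [x_true - R @ p_true for R in Rs]
p_est, x_est = pivot_calibration(Rs, ts)
```

Camera calibration itself would typically use OpenCV's chessboard routines (`cv2.findChessboardCorners` and `cv2.calibrateCamera`); the linear algebra above covers only the pointer step.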

Deep Learning for Medical Image Segmentation and Registration

One of the most successful modern deep-learning applications in medical imaging is image segmentation. From neurological pathology in MR volumes to fetal anatomy in ultrasound videos, from cellular structures in microscopic images to multiple organs in whole-body CT scans, the list is ever expanding. This tutorial project will guide students to build and train a state-of-the-art convolutional neural network from scratch, then validate it on real patient data. The objective of this project is to obtain 1) a basic understanding of machine learning approaches applied to medical image segmentation, 2) practical knowledge of the essential components in building and testing deep learning algorithms, and 3) hands-on experience in coding a deep segmentation network for real-world clinical applications.
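A standard way to validate such a network is the Dice overlap between predicted and manual segmentations. The minimal sketch below computes it for two toy binary masks; real masks would come from the trained network and the patient annotations.

```python
# Dice overlap, the de facto validation metric for medical image segmentation.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = perfect agreement)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: two 16-pixel squares, offset by one pixel in each direction.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(round(dice_score(a, b), 4))   # → 0.5625
```

The same function applies unchanged to 3D volumes, since the sums run over all voxels.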

Prerequisites: Python, GPU

Detection of Pneumonia Caused by COVID-19 Using Chest X-ray Imaging

Since the declaration of a global pandemic by the WHO, there has been a worldwide research effort to share data and resources. The majority of patients who present to the clinic with suspected COVID-19 pneumonia initially undergo a chest X-ray (CXR). The difference between COVID-19 pneumonia and other pneumonias on a chest X-ray can be subtle, and with large numbers of patients presenting, specialists are under great pressure. In an effort to improve diagnosis and early detection, a number of clinics have released their data to public CXR data repositories, with disease status confirmed through current standard PCR testing [1]. These datasets can be utilised to build deep learning models that help mitigate the pressure on clinicians by differentiating between COVID-19 pneumonia and non-COVID-19 pneumonia on CXR images.
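When validating such a classifier, overall accuracy can be misleading on imbalanced clinical data; sensitivity (how many COVID-19 cases are caught) and specificity (how many non-COVID cases are correctly cleared) are the more informative quantities. A minimal sketch, using made-up labels purely for illustration:

```python
# Sensitivity and specificity for a binary CXR classifier
# (label 1 = COVID-19 pneumonia, 0 = non-COVID-19 pneumonia).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Made-up example: 4 COVID-19 cases among 10 CXRs, one missed, one false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.75, specificity=0.83
```

In a screening setting one would typically tune the decision threshold to favour sensitivity, accepting more false alarms in exchange for fewer missed cases.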

References:

[1] Cohen, J.P., Morrison, P. and Dao, L., 2020. COVID-19 image data collection. arXiv:2003.11597. https://github.com/ieee8023/covid-chestxray-dataset

Prerequisites: Python, Tensorflow

Image Quality Transfer in MRI with Deep Neural Networks

Image Quality Transfer (IQT) is a machine learning based framework to propagate the rich information in high-quality but expensive images to low-quality clinical data. We study the application of IQT to the enhancement of human brain MR images. Specifically, this requires solving two problems: super-resolution, to infer sub-voxel structures, and contrast enhancement between anatomical structures. In this project, we will practically explore deep learning algorithms on MRI data. The project will involve testing popular network architectures [1-2] on a publicly available MRI database and studying the image quality for epilepsy detection.
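A typical way to create IQT training data is to degrade the high-quality images (here by block-averaging) and extract matched low-resolution/high-resolution patch pairs for a network to learn from. The sketch below does this on a random synthetic volume; the patch size and downsampling factor are illustrative choices, not those of the cited papers.

```python
# Sketch of IQT-style training-pair generation: simulate low-quality data
# from a high-quality volume, then cut out matched patch pairs.
import numpy as np

def downsample(vol, factor):
    """Simulate a low-resolution acquisition by block-averaging."""
    s = np.asarray(vol.shape) // factor * factor
    v = vol[:s[0], :s[1], :s[2]]
    return v.reshape(s[0] // factor, factor, s[1] // factor, factor,
                     s[2] // factor, factor).mean(axis=(1, 3, 5))

def extract_pairs(hi, factor, patch, n, rng):
    """Matched (low-res patch, high-res patch) training pairs."""
    lo = downsample(hi, factor)
    X, Y = [], []
    for _ in range(n):
        i, j, k = (rng.integers(0, d - patch + 1) for d in lo.shape)
        X.append(lo[i:i + patch, j:j + patch, k:k + patch])
        Y.append(hi[i * factor:(i + patch) * factor,
                    j * factor:(j + patch) * factor,
                    k * factor:(k + patch) * factor])
    return np.stack(X), np.stack(Y)

rng = np.random.default_rng(0)
hi = rng.random((32, 32, 32))        # stand-in for a real high-quality volume
X, Y = extract_pairs(hi, factor=2, patch=4, n=10, rng=rng)
print(X.shape, Y.shape)              # → (10, 4, 4, 4) (10, 8, 8, 8)
```

A super-resolution network is then trained to map each low-resolution patch X to its high-resolution counterpart Y, as in [1-2].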

Prerequisites: GPU

References:

[1] Tanno, Ryutaro, et al. “Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution.” MICCAI 2017.

[2] Heinrich, Larissa, John A. Bogovic, and Stephan Saalfeld. “Deep learning for isotropic super-resolution from non-isotropic 3D electron microscopy.” MICCAI 2017.

Lung image analysis

Lung cancer is the third most frequently diagnosed cancer type in the UK and is the most common cause of cancer death. Radiotherapy is a standard treatment but can cause damage to the lungs (Radiotherapy Induced Lung Damage, RILD) and other side effects. Historically, the poor survival rate of lung cancer patients has meant that while acute-RILD (pneumonitis) has been widely studied, chronic-RILD (fibrosis) has received far less attention. Recent trials of novel radiotherapy treatment regimens have reported improved local control and longer survival times. Consequently, there is growing interest in better characterising RILD and understanding the relationship and progression between acute- and chronic-RILD. This can ultimately help to optimise radiotherapy plans and improve lung cancer patients’ quality of life.

During this project, participants will have the opportunity to investigate methods dedicated to lung image analysis, in particular the analysis of changes to the lungs of patients who underwent radiotherapy. They will apply these methods to Computed Tomography (CT) images acquired during 24 months of follow-up. Using a dedicated set of imaging RILD biomarkers, which objectively measure changes to the anatomy and shape of the lungs, participants will quantify these changes and track their evolution across the different time points. After the project, participants will be familiar with tools for lung, airway and vessel segmentation, will know how these segmentations can be used to extract the RILD biomarkers, and will have explored and compared how the biomarkers evolve for individual patients. The lung image analysis methods will be demonstrated on lung cancer patients; however, their potential application extends to other lung diseases, in particular idiopathic pulmonary fibrosis (IPF), where the analysis of changes in airways and vessels can play a crucial role in better understanding the disease.
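One of the simplest such biomarkers is lung volume, computed directly from a binary segmentation mask and the voxel dimensions. A minimal sketch on synthetic masks (the shrinking region stands in for fibrotic volume loss between baseline and follow-up; real masks would come from a lung segmentation tool):

```python
# Lung volume from a binary segmentation mask; all masks here are synthetic.
import numpy as np

def lung_volume_ml(mask, voxel_size_mm):
    """Segmented volume in millilitres, given voxel dimensions in mm."""
    voxel_ml = np.prod(voxel_size_mm) / 1000.0   # mm^3 per voxel -> ml
    return float(mask.sum() * voxel_ml)

# Toy baseline and follow-up masks on a 2 mm isotropic grid.
baseline = np.zeros((10, 10, 10), dtype=int); baseline[2:8, 2:8, 2:8] = 1
followup = np.zeros((10, 10, 10), dtype=int); followup[2:7, 2:8, 2:8] = 1
voxel = (2.0, 2.0, 2.0)
v0 = lung_volume_ml(baseline, voxel)
v1 = lung_volume_ml(followup, voxel)
change = 100.0 * (v1 - v0) / v0
print(f"{v0:.3f} ml -> {v1:.3f} ml ({change:+.1f}%)")
# → 1.728 ml -> 1.440 ml (-16.7%)
```

Tracking such volume changes across the follow-up time points, alongside shape-based biomarkers, is the kind of longitudinal analysis the project performs.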

Microstructure model selection in diffusion MRI

Cancer is one of the leading causes of mortality worldwide and its incidence is on the rise. The current diagnostic pathways are highly invasive, with associated risks, and are prone to misclassifications. Magnetic Resonance Imaging (MRI) is increasingly being used for non-invasive cancer detection. However, standard MRI cannot characterise the aggressiveness of cancer, which is determined by features like the size, shape and density of the cancer cells and is crucial for cancer management. Diffusion MRI (DMRI) is sensitive to the microstructural properties of tissue and thus is a promising tool for characterisation of cancer.

Getting clinically useful measures from the DMRI signal requires a model describing how the measured signal depends on the microstructure, which varies between different cancer pathologies. In this project, you will fit a variety of models to DMRI data acquired from cancer patients and quantify which model best describes the data within the cancerous tissue. The best model can then be used to quantify and compare the microstructure estimates made across the tissue, for example, to highlight lesions with varying cancer aggressiveness.
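Model comparison of this kind is often done with an information criterion such as the AIC, which trades goodness of fit against the number of parameters. The sketch below fits the simplest DMRI model, mono-exponential ADC decay, to a synthetic noisy signal and compares it against a constant (no-decay) model; the b-values, ADC and noise level are illustrative, not patient data.

```python
# Fit an ADC decay model to synthetic DMRI data and compare models via AIC.
import numpy as np

def fit_adc(b, S):
    """Log-linear fit of S = S0 * exp(-b * ADC); returns (S0, ADC)."""
    slope, intercept = np.polyfit(b, np.log(S), 1)
    return np.exp(intercept), -slope

def aic(S, S_fit, k):
    """Akaike information criterion for a least-squares fit with k parameters."""
    n = len(S)
    rss = np.sum((S - S_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

# Synthetic signal: S0 = 1, ADC = 1.5e-3 mm^2/s, Gaussian noise (illustrative).
b = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])   # b-values, s/mm^2
rng = np.random.default_rng(1)
S = np.exp(-b * 1.5e-3) + rng.normal(0.0, 0.01, b.size)

S0, adc = fit_adc(b, S)
aic_adc = aic(S, S0 * np.exp(-b * adc), k=2)        # decay model, 2 parameters
aic_flat = aic(S, np.full_like(S, S.mean()), k=1)   # constant model, 1 parameter
```

Here the decay model wins (lower AIC), as expected; with real patient data the same machinery ranks richer multi-compartment models against each other.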

Associated publications:

http://cancerres.aacrjournals.org/content/74/7/1902

https://www.ncbi.nlm.nih.gov/pubmed/25426656

Multi-atlas segmentation of brain MRI scans for Alzheimer’s disease studies

Following the success of 3D image registration algorithms in brain MRI, multi-atlas segmentation (MAS) has become one of the most widespread techniques for segmentation of brain structures in MRI scans. The idea behind atlas-based segmentation is simple: if we have a brain MRI scan with manual segmentations (i.e., an atlas), we can deform (“register”) it to a new, test image we want to analyse, and propagate the manual labels in order to obtain an automated segmentation of the test scan. In MAS, when N>1 atlases are available, we can repeat the procedure N times, and then use label fusion techniques to merge the propagated labellings into a single, more robust estimate of the segmentation. In this project, we will explore some simple MAS/label fusion techniques (majority voting, globally weighted fusion, locally weighted fusion), and apply them to a morphometric study of Alzheimer’s disease.
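Majority voting, the simplest of these label fusion strategies, amounts to stacking the N propagated label maps and picking the most frequent label at each voxel. A minimal sketch (shown in Python for brevity, though the project itself uses MATLAB) on toy 2x2 "images" standing in for propagated atlas segmentations:

```python
# Majority-voting label fusion over toy propagated atlas label maps.
import numpy as np

def majority_vote(propagated_labels):
    """Fuse N propagated integer label maps by per-voxel majority voting."""
    labels = np.stack(propagated_labels)              # shape (N, ...)
    n_classes = labels.max() + 1
    # Count votes for each class at every voxel, then take the winner.
    counts = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

atlas1 = np.array([[0, 1], [1, 2]])
atlas2 = np.array([[0, 1], [2, 2]])
atlas3 = np.array([[0, 0], [1, 2]])
fused = majority_vote([atlas1, atlas2, atlas3])
print(fused)
# → [[0 1]
#    [1 2]]
```

Globally and locally weighted fusion follow the same pattern, but each atlas's vote is scaled by a (global or per-voxel) similarity weight instead of counting equally.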

Associated publications:

Xavier Artaechevarria et al., “Combination Strategies in Multi‐Atlas Image Segmentation: Application to Brain MR Data”, TMI 2009

Juan Eugenio Iglesias and Mert Sabuncu: “Multi-Atlas Segmentation of Biomedical Images: A Survey”, MedIA, 2015

Prerequisites: MATLAB

Respiratory motion modelling

Respiratory motion models have great potential in radiotherapy planning and treatment guidance and can be used to aid motion-compensated image reconstruction. They allow estimation of the internal motion of a patient based on one or more easily acquired ‘respiratory surrogate signals’. 

To build the motion models, image registration is commonly used to measure the internal motion in a set of training images. Thereafter, a correspondence model is fitted to relate the measured motion to the surrogate signal. Most image registration methods do not allow for sliding motion, since the transformation is regularised to produce only smooth deformations; however, sliding is observed between the lungs and the chest wall. This project aims to build motion models that capture sliding, based on a B-spline transformation designed to permit this type of motion.

Participants will be able to engage with all parts of the motion modelling process. Working with a cine-MR image sequence, they will generate one or more suitable respiratory surrogate signals, fit different correspondence models to the training data set, and evaluate the accuracy of the models. Advanced tasks involve extending the motion models to account for inter-cycle variation or investigating how the models’ accuracy evolves over time.
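The correspondence-model fitting step can be sketched as a linear least-squares problem: build a design matrix from the surrogate signal, its temporal gradient (so inhalation and exhalation can map to different motion, i.e. hysteresis) and a constant term, then regress the measured motion onto it. The breathing trace and coefficients below are synthetic stand-ins for a real surrogate and registration-derived motion.

```python
# Sketch of fitting a linear respiratory correspondence model.
import numpy as np

def fit_correspondence_model(surrogate, motion):
    """Least-squares linear model relating motion to a surrogate signal."""
    # Columns: surrogate value, temporal gradient (hysteresis), constant offset.
    X = np.column_stack([surrogate, np.gradient(surrogate),
                         np.ones_like(surrogate)])
    coeffs, *_ = np.linalg.lstsq(X, motion, rcond=None)
    return coeffs, X

# Synthetic breathing trace (e.g. skin-surface displacement) and a motion
# trace generated from known coefficients so the fit can be checked.
t = np.linspace(0, 4 * np.pi, 100)
surrogate = np.sin(t)
motion = 3.0 * surrogate + 0.5 * np.gradient(surrogate) + 1.0
coeffs, X = fit_correspondence_model(surrogate, motion)
residual = np.max(np.abs(X @ coeffs - motion))
```

In the project the "motion" values come from image registration of the cine-MR frames, and the fitted model is evaluated on held-out breathing cycles rather than the training data.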

TADPOLE Challenge: Prediction of Alzheimer’s Disease Evolution using Statistical Models and Machine Learning

Alzheimer’s disease and related dementias affect more than 50 million people worldwide. No current treatments are available that can provably cure or even slow the progression of Alzheimer’s disease — all clinical trials have so far failed to prove a disease-modifying effect. One reason why they fail is the difficulty in identifying patients at early disease stages, when treatments are most likely to have an effect. The Alzheimer’s Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge was designed to find the best approaches for predicting disease progression and thus help with early identification of at-risk subjects.

This project will be run as a live 3-day competition, where participants will form teams and create algorithms to predict future disease progression in patients and individuals at risk of Alzheimer’s disease, using a publicly available dataset. We will run a live Kaggle-style leaderboard where participants will make predictions and see their performance results in real time. Prizes may be offered to the teams making the best predictions.
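For continuous targets, leaderboard scoring typically reduces to a simple error metric such as the mean absolute error between forecast and observed biomarker values (the actual TADPOLE challenge uses a richer set of metrics, so the sketch below, with made-up numbers, is only illustrative):

```python
# Minimal leaderboard-style scoring: rank two hypothetical teams by MAE.
def mean_absolute_error(pred, actual):
    """Average absolute gap between forecast and observed values."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

# Made-up forecasts of a cognitive score (e.g. ADAS-13) at three future visits.
observed = [26.0, 23.0, 22.0]
team_a = [25.0, 24.0, 22.5]      # MAE = (1 + 1 + 0.5) / 3
team_b = [28.0, 21.0, 25.0]      # MAE = (2 + 2 + 3) / 3
print(mean_absolute_error(team_a, observed) <
      mean_absolute_error(team_b, observed))   # → True, team A ranks higher
```

Categorical targets (e.g. clinical diagnosis) are scored differently, with multiclass area-under-the-curve measures, as described in the challenge paper linked below.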

Associated resources:

https://arxiv.org/abs/1805.03909

https://tadpole.grand-challenge.org/  

Prerequisites: Python