Previous years' projects (2022)

Introduction to Imaging Genetics

Leader: Andre Altmann

Traditional genetic studies often compare cases with controls in order
to gain insights into disease processes. Instead of these crisp labels,
imaging genetics uses quantitative biomarkers derived from imaging data
as intermediate phenotypes to gain more nuanced insights. Imaging
genetics is most widely applied in the field of neuroimaging to
understand the genetic mechanisms that underlie brain development, brain
function and brain disease.

This tutorial project will give students hands-on experience with imaging genetics data and workflows. The first objective is to process genetic data and conduct classic genetic analyses with imaging features [2,3]. The second is to use machine learning methods to conduct multivariate imaging genetics analyses [1,4].
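
As a flavour of the first objective, the sketch below runs a toy mass-univariate association test: an imaging-derived phenotype is regressed on each SNP's allele dosage in turn. The data are simulated and the layout is an assumption; in practice Plink's --linear association test performs this at scale on real genotype files.

```python
# Toy mass-univariate imaging-genetics scan (simulated data, not a real GWAS).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_snps = 500, 1000
dosages = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # minor-allele counts 0/1/2
phenotype = rng.normal(size=n_subjects)  # e.g. a standardised hippocampal volume

pvals = np.empty(n_snps)
for j in range(n_snps):
    # phenotype ~ dosage; real analyses add covariates such as age, sex and ancestry PCs
    slope, intercept, rval, p, stderr = stats.linregress(dosages[:, j], phenotype)
    pvals[j] = p

# Bonferroni correction for the number of tests performed
print("significant SNPs:", int(np.sum(pvals < 0.05 / n_snps)))
```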

Background reading:
[1] (The hitchhiker's guide to) Imaging-Genetics: https://marcolorenzi.github.io/material/winter_school/Imaging_Genetics_Book_Chapter.pdf
[2] Medland, et al. Whole-genome analyses of whole-brain data: working within an expanded search space. Nat Neurosci 17, 791–800 (2014): https://doi.org/10.1038/nn.3718
[3] Elliott, et al. Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature 562, 210–216 (2018): https://doi.org/10.1038/s41586-018-0571-7
[4] Lorenzi, et al. Susceptibility of brain atrophy to TRIB3 in Alzheimer’s disease, evidence from functional prioritization in imaging genetics. PNAS 115 3162-3167 (2018): https://doi.org/10.1073/pnas.1706100115

Prerequisites: Plink (version 1.9) (https://www.cog-genomics.org/plink/), Python3, Jupyter notebook

IQT: Image Quality Transfer

Leaders: Matteo Figini, Ahmed Abdelkarim

Image Quality Transfer (IQT) is a machine learning-based framework for propagating information from state-of-the-art imaging systems into clinical environments where the same image quality cannot be achieved [1]. It has been successfully applied to increase the spatial resolution of diffusion MRI data [1, 2] and to enhance both contrast and resolution in images from low-field scanners [3]. In this project, we will explore the deep learning implementation of IQT and investigate the effects of its different parameters and options, using data from publicly available MRI databases. We will also test the algorithms on clinical data to assess the enhancement of images from epilepsy patients.
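
As a flavour of the deep learning implementation, here is a minimal patch-based enhancement network in TensorFlow/Keras. It is an illustrative sketch rather than the project's actual architecture; the patch size and layer widths are arbitrary assumptions.

```python
# Minimal patch-to-patch enhancement CNN (illustrative, not the IQT codebase).
import tensorflow as tf
from tensorflow.keras import layers

def build_enhancement_cnn(patch_size=32, channels=1):
    # Maps a low-quality 3D patch to a same-sized enhanced patch.
    inputs = tf.keras.Input(shape=(patch_size, patch_size, patch_size, channels))
    x = layers.Conv3D(64, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv3D(channels, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_enhancement_cnn()
model.compile(optimizer="adam", loss="mse")  # voxel-wise regression loss
model.summary()
```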

Prerequisites: GPU, TensorFlow

References:
[1] D. Alexander et al., “Image quality transfer and applications in diffusion MRI”, Neuroimage 2017. 152:283-298
[2] R. Tanno, et al. “Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI.” NeuroImage 2021. 225
[3] M. Figini et al., “Image Quality Transfer Enhances Contrast and Resolution of Low-Field Brain MRI in African Paediatric Epilepsy Patients”, ICLR 2020 workshop on Artificial Intelligence for Affordable Healthcare

Implementing Reproducible Medical Image Analysis Pipelines

Leaders: Dave Cash, Haroon Chughtai

This project provides a demonstration of how to implement a reproducible medical image analysis pipeline for scalable, high-throughput analysis without the need for substantial coding experience. It will also show how solutions for maintaining the privacy of study participants can be implemented with low overhead. It will use all open-source software; in particular, the data and analysis will be managed through XNAT, a widely used web-based platform. In this tutorial, we will go through the benefits of this platform for automating the handling, importing, and cleaning of DICOM data, conversion to NIfTI, de-facing of structural T1 data to provide additional assurance of privacy, and finally volumetric and cortical thickness analysis using FastSurfer, a free deep learning implementation of FreeSurfer. The project will determine how much de-facing algorithms change the results of FastSurfer compared to the original images.
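
As a sketch of the final comparison step, assuming per-region volumes from the original and de-faced runs have been exported to CSV (the file and column names below are placeholders, not FastSurfer defaults), the difference can be quantified with pandas:

```python
# Compare regional volumes from FastSurfer runs on original vs de-faced images.
# File and column names are hypothetical placeholders.
import pandas as pd

orig = pd.read_csv("volumes_original.csv", index_col="region")
defaced = pd.read_csv("volumes_defaced.csv", index_col="region")

diff = defaced["volume_mm3"] - orig["volume_mm3"]
pct = 100 * diff / orig["volume_mm3"]
report = pd.DataFrame({"abs_diff_mm3": diff, "pct_diff": pct})

# Regions most affected by de-facing, by absolute percentage change
print(report.reindex(report["pct_diff"].abs().sort_values(ascending=False).index).head(10))
```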

Further courses and tutorials on medical image data management, software development, and medical image analysis will soon be available at Health and Bioscience IDEAS, a UKRI-funded training programme on medical imaging for UK researchers.

Prerequisites: FastSurfer, XNAT

FetReg: Placental Vessel Segmentation and Registration in Fetoscopy

Leaders: Sophia Bano, Francisco Vasconcelos

Fetoscopic Laser Photocoagulation (FLP) is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS). In TTTS, the flow of blood between the two fetuses becomes uneven; as a result, the donor experiences slow growth while the recipient is at risk of heart failure due to the excess blood it receives. During FLP, the abnormal vascular anastomoses are identified and laser-ablated to regulate the flow of blood. The procedure is particularly challenging due to the limited field-of-view, poor manoeuvrability of the fetoscope, poor visibility due to fluid turbidity and variability in the light source, and the unusual position of the placenta. This may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention can help overcome these challenges by expanding the fetoscopic field-of-view and providing better visualization of the vessel map. This in turn can guide surgeons in better localizing abnormal anastomoses.

This project aims to use supervised image segmentation models to segment the placental vessels, and to perform direct image registration on the segmented vessel maps to generate a consistent mosaic of the intra-operative environment [1]. The project will utilise the publicly available Placental Vessel Dataset [2] and will also provide the basics for participating in the MICCAI 2021 EndoVis FetReg Challenge [3] to interested students. The FetReg challenge was featured as the challenge of the month in the June 2021 issue of Computer Vision News magazine [4].
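
The registration step can be illustrated with OpenCV's ECC image alignment applied to two consecutive binary vessel maps; the file names are placeholders, a full mosaicking pipeline would chain such transforms across the whole sequence, and the exact findTransformECC signature varies slightly between OpenCV versions.

```python
# Direct (intensity-based) registration of consecutive vessel masks via ECC.
import cv2
import numpy as np

prev_mask = cv2.imread("vessel_mask_000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
curr_mask = cv2.imread("vessel_mask_001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255

warp = np.eye(2, 3, dtype=np.float32)  # initial affine guess: identity
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(prev_mask, curr_mask, warp,
                                cv2.MOTION_AFFINE, criteria, None, 1)

# Warp the current mask into the previous frame's coordinates
h, w = prev_mask.shape
aligned = cv2.warpAffine(curr_mask, warp, (w, h),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
print("ECC correlation:", cc)
```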

Technical Requirements:
– Basic understanding of image segmentation and registration techniques
– Hands-on experience with a deep learning framework (PyTorch/TensorFlow)

Useful links:
[1] Bano, S., Vasconcelos, F., Shepherd, L.M., Vander Poorten, E., Vercauteren, T., Ourselin, S., David, A.L., Deprest, J. and Stoyanov, D., 2020, October. Deep placental vessel segmentation for fetoscopic mosaicking. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 763-773). Springer, Cham. arxiv.org/pdf/2007.04349.pdf
[2] Placental Vessel Dataset: www.ucl.ac.uk/interventional-surgical-sciences/fetoscopy-placenta-data
[3] endovis.grand-challenge.org/
[4] www.rsipvision.com/ComputerVisionNews-2021June/22/

Acceleration of microstructure imaging in diffusion MRI with deep learning

Leaders: Ting Gong, Tobias Goodwin-Allcock

Diffusion MRI (dMRI) can probe tissue microstructure properties by acquiring a set of dMRI measurements and solving the inverse problem of a diffusion model. Conventional model-fitting methods usually require a large number of dMRI measurements and long acquisition times; deep learning methods based on neural networks can substantially accelerate this process.

This tutorial project will guide participants (code and data provided) to build neural networks that effectively solve the inverse problem of diffusion parameter estimation. Taking the diffusion tensor/kurtosis model as an example, we will practically explore some voxel-based and patch-based neural networks. The objectives of this project are to obtain 1) a basic understanding of deep learning approaches applied in dMRI, and 2) practical knowledge of the essential components in building neural networks for diffusion model fitting.
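
A minimal sketch of the voxel-based flavour follows, assuming 60 dMRI measurements per voxel mapped to three scalar model parameters; both counts, and the training setup, are illustrative assumptions rather than the tutorial's actual configuration.

```python
# Voxel-wise network for diffusion model fitting: signal vector in, parameters out.
import tensorflow as tf
from tensorflow.keras import layers

n_measurements, n_params = 60, 3  # assumed acquisition / parameter counts

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_measurements,)),
    layers.Dense(150, activation="relu"),
    layers.Dense(150, activation="relu"),
    layers.Dense(n_params),  # regression outputs, no activation
])
model.compile(optimizer="adam", loss="mse")

# Training pairs would come from conventional fits on fully sampled data, e.g.:
# model.fit(signals_subsampled, params_from_full_fit, epochs=50)
```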

Prerequisites: Python, TensorFlow, Keras

References:
[1] Gong, T., et al. (2018).  Efficient Reconstruction of Diffusion Kurtosis Imaging Based on a Hierarchical Convolutional Neural Network. Presented at: the 26th ISMRM Annual Meeting.
[2] Li, Z., et al. (2019).  Fast and Robust Diffusion Kurtosis Parametric Mapping Using a Three-Dimensional Convolutional Neural Network. IEEE Access, 7, 71398-71411: https://doi.org/10.1109/access.2019.2919241

Tractography: modelling connections in the human brain, and the improvements offered by deep learning

Leaders: Ellie Thompson, Tiantian He

Tractography [1] is currently the only tool available to probe the structural connectivity of the brain non-invasively and in vivo. It has widespread potential applications, from surgical planning [2] to modelling the spread of neurodegenerative diseases through the brain [3]. However, tractography is subject to extensive modelling errors, resulting in a large number of false positive and false negative connections in the resulting connectome [4]. This severely limits the applications and reliability of tractography.

This project will introduce participants to the basic principles of tractography. Participants will have the opportunity to implement tractography algorithms from scratch and compare their results to state-of-the-art tractography software tools in MRtrix3 [5]. Participants will also have the opportunity to explore a deep learning algorithm for fast DTI [6], and to understand how this work can improve the accessibility of tractography.
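
To give a feel for what "from scratch" means here, the sketch below implements the core loop of deterministic streamline tractography on a synthetic principal-eigenvector field; real data would supply the eigenvectors and an FA map from a tensor fit.

```python
# Deterministic streamline tracking by fixed-step Euler integration along the
# principal eigenvector field (nearest-neighbour direction lookup).
import numpy as np

def track(peak_dirs, seed, step=0.5, max_steps=1000, fa=None, fa_thresh=0.2):
    """peak_dirs: (X, Y, Z, 3) array of unit principal eigenvectors."""
    pos = np.array(seed, dtype=float)
    prev_dir = None
    streamline = [pos.copy()]
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if not all(0 <= i < s for i, s in zip(idx, peak_dirs.shape[:3])):
            break  # left the volume
        d = peak_dirs[idx]
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d  # eigenvectors are sign-ambiguous; keep direction consistent
        if fa is not None and fa[idx] < fa_thresh:
            break  # stop in low-anisotropy tissue
        pos = pos + step * d
        prev_dir = d
        streamline.append(pos.copy())
    return np.array(streamline)

# Synthetic field: all fibres along x; seed in the centre of the volume
dirs = np.zeros((20, 20, 20, 3)); dirs[..., 0] = 1.0
print(track(dirs, seed=(10, 10, 10)).shape)
```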

Prerequisites:
– Knowledge: MATLAB or Python
– Equipment: Own laptop, no GPU needed

References:
[1] Jeurissen, B., et al. (2019). Diffusion MRI fiber tractography of the brain. NMR in Biomedicine, 32(4), p.e3785.
[2] Winston, G.P., et al. (2012). Optic radiation tractography and vision in anterior temporal lobe resection. Annals of neurology, 71(3), pp.334-341.
[3] Brettschneider, J., et al. (2015). Spreading of pathology in neurodegenerative diseases: a focus on human studies. Nature Reviews Neuroscience, 16(2), pp.109-120.
[4] Maier-Hein, K.H., et al. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature communications, 8(1), pp.1-13.
[5] Tournier, J.D., et al. (2019). MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, 202, p.116137.
[6] Tian, Q., et al. (2020). DeepDTI: High-fidelity six-direction diffusion tensor imaging using deep learning. NeuroImage, 219, p.117017.

Basic Augmented Reality Demo (BARD)

Leaders: Matt Clarkson, Stephen Thompson, Thomas Dowrick

In computer-aided surgery, we seek to build systems that guide the surgeon. This should lead to safer surgery and better outcomes for patients. In this project, students will investigate some of the key concepts in building an “Augmented Reality” (AR) system for computer-aided surgery.

In keyhole surgery, the video camera of the laparoscope provides a convenient means to capture a view of the surgical scene. The video image provides the “Reality” part of “Augmented Reality”. The video image can then be “Augmented” with additional information from pre-operative data such as Magnetic Resonance (MR) or Computed Tomography (CT) scans, for example highlighting the position of tumours and critical structures like blood vessels.

In this project students will construct a basic AR system using Python. A live video camera (laptop webcam) will be used to capture a view of a surgical scene (pelvis phantom). A surgical pointer will be calibrated; the calibrated pointer will then be used to mark fiducial points, which in turn will be used to register a CT model of the pelvis to the scene. The CT data will then be overlaid on the video to complete the demo. After completing this workshop the student should be able to:

Calibrate a camera using a chessboard or similar.

Calibrate a tracked pointer, using an invariant point method.

Use the tracked pointer to locate fiducial markers in the camera scene.

Use the located fiducial markers to register the pre-operative CT scan of a phantom to the camera scene.
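
As a hedged sketch of the first of these steps, chessboard camera calibration with OpenCV might look as follows; the image folder and board geometry are assumptions.

```python
# Chessboard camera calibration: estimate intrinsics and distortion coefficients.
import glob
import cv2
import numpy as np

board = (9, 6)       # inner corners per row/column of the printed chessboard
square_mm = 25.0     # physical square size

# 3D coordinates of the chessboard corners in the board's own frame (z = 0)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

objpoints, imgpoints = [], []
for fname in glob.glob("calibration_images/*.png"):  # placeholder folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board, None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("intrinsic matrix:\n", K)
```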

Deep Learning for Medical Image Segmentation and Registration

Leader: Yipeng Hu

One of the most successful modern deep-learning applications in medical imaging is image segmentation. From neurological pathology in MR volumes to fetal anatomy in ultrasound videos, from cellular structures in microscopic images to multiple organs in whole-body CT scans, the list is ever expanding.

This tutorial project will guide students to build and train a state-of-the-art convolutional neural network from scratch, then validate it on real patient data.

The objectives of this project are to obtain
1) a basic understanding of machine learning approaches applied to medical image segmentation,
2) practical knowledge of the essential components in building and testing deep learning algorithms, and
3) hands-on experience in coding a deep segmentation network for real-world clinical applications.
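
For orientation, a minimal encoder-decoder segmentation network in Keras might look like the sketch below; it is a toy stand-in for the model built in the tutorial, with illustrative sizes throughout.

```python
# Tiny encoder-decoder for binary segmentation (illustrative stand-in for a U-Net).
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(128, 128, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)                            # 64x64
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D()(x)                            # 32x32
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                            # 64x64
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                            # 128x128
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel foreground prob.

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```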

Prerequisites: Python, GPU (via Colaboratory)

Devising a deep learning pipeline for retinal image analysis

Leaders: Yukun Zhou, Peter Woodward-Court

The significance of retinal image analysis for assessing ophthalmic and systemic disease is well established [1–3]. Given that manual tissue segmentation and feature extraction can be extremely time-consuming, as well as poorly reproducible, there has been growing interest in the development of tools which can conduct retinal image analysis in a fully automated manner.

This project will guide participants to understand and develop a deep learning pipeline consisting of a series of typical deep neural networks [4,5], covering image quality assessment, anatomical tissue segmentation, morphological feature measurement, and association study. Participants will learn to plan a systematic project and to transfer the practical knowledge to their own research.
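
As an example of the feature-measurement stage, the sketch below computes simple morphology metrics from an already-segmented binary vessel mask; the file name is a placeholder and the metrics are deliberately crude.

```python
# Crude vessel morphology metrics from a binary segmentation mask.
from skimage.io import imread
from skimage.morphology import skeletonize

mask = imread("vessel_mask.png", as_gray=True) > 0.5  # binary vessel segmentation
skeleton = skeletonize(mask)                          # one-pixel-wide centrelines

vessel_density = mask.mean()                          # fraction of image covered
total_length_px = int(skeleton.sum())                 # centreline length in pixels
mean_width_px = mask.sum() / max(total_length_px, 1)  # rough average calibre

print(f"density={vessel_density:.3f}, length={total_length_px}px, "
      f"mean width ~{mean_width_px:.2f}px")
```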

Prerequisites: Python, GPU (via Colaboratory)

References:

  1. Wagner SK et al. Insights into Systemic Disease through Retinal Imaging-Based Oculomics. Transl Vis Sci Technol. 2020;9: 6.
  2. Cheung CY et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat Biomed Eng. 2021;5: 498–508.
  3. De Fauw J et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24: 1342–1350.
  4. Zhou Y et al. Learning to Address Intra-segment Misclassification in Retinal Imaging. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Springer International Publishing; 2021. pp. 482–492.
  5. Zhou Y et al. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing. 2021;437: 118–130.

TADPOLE Challenge: Prediction of Alzheimer’s Disease Evolution using Statistical Models and Machine Learning

Leader: Isaac Llorente-Saguer

Alzheimer’s disease and related dementias affect more than 50 million people worldwide. No current treatments are available that can provably cure or even slow the progression of Alzheimer’s disease — all clinical trials of experimental drugs have so far failed to prove a disease-modifying effect. One reason why they fail is the difficulty in identifying patients at early disease stages, when treatments are most likely to have an effect. The Alzheimer’s Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge was designed to find the best approaches for predicting disease progression and thus help with early identification of at-risk subjects.

This project will run as an open, collaborative effort to forecast Alzheimer’s progression. In a 3-day friendly competition, attendees will group into teams and experiment with algorithms to predict the future course of disease in patients and in those at risk of Alzheimer’s disease, using a publicly available dataset. We may run a live Kaggle-style leaderboard where participants make predictions and see their performance results in near-real time. Team(s) making the best predictions will earn massive data science street cred.
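
A typical starting-point baseline is per-subject linear extrapolation of a cognitive score. The sketch below assumes TADPOLE-style columns (RID, Years_bl, ADAS13) and a placeholder file name; check both against the actual spreadsheet before use.

```python
# Per-subject linear-trend forecast of ADAS13 one year beyond the last visit.
import numpy as np
import pandas as pd

df = pd.read_csv("tadpole_train.csv")  # placeholder file name

forecasts = {}
for rid, visits in df.groupby("RID"):
    visits = visits.dropna(subset=["Years_bl", "ADAS13"]).sort_values("Years_bl")
    if len(visits) < 2:
        continue  # need at least two visits to fit a trend
    slope, intercept = np.polyfit(visits["Years_bl"], visits["ADAS13"], deg=1)
    t_future = visits["Years_bl"].iloc[-1] + 1.0  # one year ahead
    forecasts[rid] = slope * t_future + intercept

print(f"forecast ADAS13 for {len(forecasts)} subjects")
```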

Background:
– TADPOLE Challenge website: tadpole.grand-challenge.org
– TADPOLE MedICSS repository: github.com/ucl-pond/MedICSS-TADPOLE

References:
[1] Challenge description manuscript: Marinescu et al., 2018 arXiv:1805.03909
[2] Challenge results manuscript: Marinescu et al., 2020 arXiv:2002.03419

Prerequisites: Python; ADNI Data access (we can help)

How old is your brain? Predicting age from neuroimaging data.

Leader: James Cole

Ageing has a pronounced effect on the brain and is associated with functional impairment, cognitive decline, and risk of neurodegenerative disease. While universal, brain ageing is not uniform; some people experience cognitive decline and dementia in late middle age, while others retain normal cognitive function well into their tenth decade. To understand this variability in brain ageing and help identify people at risk of poor brain health as they age, we have employed the ‘brain-age’ paradigm to index the brain’s biological age. Having an older appearing brain has been associated with neurological and psychiatric diseases and poorer age-related health outcomes.

In this project, we will take a large dataset of brain MRI scans from healthy people (IXI dataset) and use machine learning to predict the chronological age of people in the dataset. Students will construct a deep convolutional neural network (CNN) architecture and train it to answer the following questions: 1) can we accurately predict chronological age using T1-weighted structural MRI data? 2) can we accurately predict chronological age using diffusion-weighted MRI data? 3) can we improve age prediction by combining data from T1-weighted and diffusion-weighted MRI brain scans?
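
A minimal sketch of the kind of 3D CNN regressor involved is shown below, with an illustrative input size and depth rather than a tuned architecture.

```python
# Small 3D CNN that regresses age from a (downsampled) T1-weighted volume.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 96, 1)),   # downsampled T1 volume
    layers.Conv3D(8, 3, activation="relu"),
    layers.MaxPooling3D(),
    layers.Conv3D(16, 3, activation="relu"),
    layers.MaxPooling3D(),
    layers.Conv3D(32, 3, activation="relu"),
    layers.MaxPooling3D(),
    layers.GlobalAveragePooling3D(),
    layers.Dense(1),                          # predicted age in years
])
model.compile(optimizer="adam", loss="mae")   # MAE is the usual brain-age metric
```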

Background reading:

Franke, K., & Gaser, C. (2019). Ten Years of BrainAGE as a Neuroimaging Biomarker of Brain Aging: What Insights Have We Gained? Frontiers in Neurology, 10(789). doi:10.3389/fneur.2019.00789.

Cole, J. H., Poudel, R. P. K., Tsagkrasoulis, D., Caan, M. W. A., Steves, C., Spector, T. D., & Montana, G. (2017). Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. NeuroImage, 163C, 115-124. doi:10.1016/j.neuroimage.2017.07.059

Wood, D. A., Kafiabadi, S., Busaidi, A. A., Guilhem, E., Montvila, A., Lynch, J., . . . Booth, T. C. (2022). Accurate brain-age models for routine clinical MRI examinations. NeuroImage, 249, 118871. doi:10.1016/j.neuroimage.2022.118871

Prerequisites: Python, GPU (via Colaboratory)

Estimation of brain tissue microstructure with dMRI

Leader: Michele Guerreri

Microstructure imaging aims to estimate and map micron scale properties of the tissues in-vivo and non-invasively, using models that link these properties to the MR signal in each voxel of an image [1]. Brain microstructural imaging is of particular interest as it provides a unique window on the structural basis of brain functions. For a given model and dataset, a key component of microstructure imaging is the parameter estimation, for which a variety of options are available. The standard approach is to use maximum likelihood estimation, typically via non-linear fitting. Another approach is to exploit faster convex optimization methods after a linear re-formulation of the model [2]. More recently, deep learning (DL) approaches have been used to approximate the functional relation between the input signal and the output parameters with the potential of reducing the amount of data required for the parameter estimation [3].

This project aims to provide an overview as well as a comparison of these three approaches. As a practical example, we will focus our attention on a popular model in the neuroimaging field called NODDI [4], which was originally developed to infer various indices of neurite morphology. We will first review the conventional non-linear fitting approach, learning how to judge the quality of the fit. Next, we will explore a convex optimization-based framework known as AMICO [2], comparing its output against the results obtained from the non-linear fitting approach. Finally, we will assess the potential of DL for model parameter estimation.
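
To illustrate the non-linear fitting machinery in Python (the project itself uses MATLAB), the sketch below fits a deliberately simple mono-exponential decay S = S0 * exp(-b * D) rather than the full NODDI signal model, which is considerably more involved.

```python
# Non-linear least-squares fit of a toy diffusion signal model.
import numpy as np
from scipy.optimize import curve_fit

def signal(b, s0, d):
    return s0 * np.exp(-b * d)

bvals = np.array([0.0, 0.3, 0.7, 1.0, 2.0])  # ms/um^2 (b=1000 s/mm^2 = 1 ms/um^2)
true_s0, true_d = 1.0, 0.8
rng = np.random.default_rng(0)
meas = signal(bvals, true_s0, true_d) + rng.normal(0, 0.02, bvals.size)

(p_s0, p_d), cov = curve_fit(signal, bvals, meas, p0=[1.0, 1.0])
print(f"S0={p_s0:.3f}, D={p_d:.3f} um^2/ms")  # compare against ground truth

# Fit quality can be judged from the residuals, as done in the tutorial:
residuals = meas - signal(bvals, p_s0, p_d)
print("RMS residual:", np.sqrt(np.mean(residuals**2)))
```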

Prerequisites: MATLAB

References:

  1. Alexander, Daniel C., et al. “Imaging brain microstructure with diffusion MRI: practicality and applications.” NMR in Biomedicine 32.4 (2019): e3841.
  2. Daducci, Alessandro, et al. “Accelerated microstructure imaging via convex optimization (AMICO) from diffusion MRI data.” Neuroimage 105 (2015): 32-44.
  3. Golkov, Vladimir, et al. “Q-space deep learning: twelve-fold shorter and model-free diffusion MRI scans.” IEEE transactions on medical imaging 35.5 (2016): 1344-1351.
  4. Zhang, Hui, et al. “NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain.” Neuroimage 61.4 (2012): 1000-1016.

Lung Image Analysis

Leader: Adam Szmul

In this project, you will have the opportunity to investigate methods dedicated to lung image analysis. After completing the project, you will be familiar with tools applicable to lung and airway segmentation using Python. You will learn how these segmentations can be further used for the extraction of Radiation-Induced Lung Damage (RILD) biomarkers, which can potentially also be applied to other lung-related diseases. You will analyse the RILD biomarker results for individual patients and compare them to look for distinct patterns.
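
As a hedged starting point, lung regions in CT can be roughed out by thresholding air-like intensities and keeping the largest connected component inside the body; the tools used in this project are considerably more sophisticated, and this simple approach can merge or miss structures.

```python
# Naive lung segmentation from CT: threshold air, drop background, keep largest blob.
import numpy as np
from skimage import measure

def segment_lungs(ct_hu):
    """ct_hu: 3D array of CT intensities in Hounsfield units."""
    air = ct_hu < -320                   # air-like voxels (lungs + background)
    labels = measure.label(air)
    background = labels[0, 0, 0]         # assume the corner voxel is outside the body
    air[labels == background] = False    # drop background air
    labels = measure.label(air)
    if labels.max() == 0:
        return air
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0
    return labels == sizes.argmax()      # largest remaining air component

mask = segment_lungs(np.random.normal(-500, 400, (32, 64, 64)))  # toy volume
print("lung voxels:", int(mask.sum()))
```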

Prerequisites: Python

Multi-modal (video and kinematics) learning for action recognition and skill estimation in robotic surgery

Leader: Evans Mazomenos

The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) is a surgical activity dataset for human motion modelling [1]. The data was collected through a collaboration between The Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (Sunnyvale, CA. ISI) within an IRB-approved study [2, 3]. The dataset was captured using the da Vinci Surgical System (dVSS) from eight surgeons with different levels of skill performing five repetitions of three elementary surgical tasks (suturing, knot-tying and needle-passing) on a bench-top model.

This tutorial project focuses on multi-modal learning architectures for surgical activity recognition and skill estimation. It will provide hands-on experience to participants in developing, deploying and evaluating state-of-the-art spatio-temporal machine learning architectures for video and time-series analysis on the JIGSAWS dataset [4]. The objectives of this project are to obtain 1) a basic understanding of state-of-the-art deep learning methods for video and time-series analysis, and 2) practical knowledge of designing learning approaches for different spatio-temporal tasks (instantaneous classification, overall regression).
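
A minimal sketch of a two-stream fusion model for clip-level skill regression follows; all dimensions are illustrative placeholders (JIGSAWS kinematics are commonly loaded as 76 channels per frame, but verify against the release), and per-frame video features are assumed to be pre-extracted by a CNN.

```python
# Two-stream (video features + kinematics) fusion model for skill regression.
import tensorflow as tf
from tensorflow.keras import layers

T = 100                                        # frames per clip
video_feats = tf.keras.Input(shape=(T, 512))   # pre-extracted CNN features per frame
kinematics = tf.keras.Input(shape=(T, 76))     # dVSS kinematic channels per frame

v = layers.LSTM(64)(video_feats)               # temporal encoder, video stream
k = layers.LSTM(64)(kinematics)                # temporal encoder, kinematics stream
fused = layers.Concatenate()([v, k])
fused = layers.Dense(64, activation="relu")(fused)
skill_score = layers.Dense(1)(fused)           # clip-level regression head

model = tf.keras.Model([video_feats, kinematics], skill_score)
model.compile(optimizer="adam", loss="mse")
```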

Prerequisites: Python, PyTorch, TensorFlow, Keras (the JIGSAWS data will be provided).

References:

[1] https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/

[2] Y. Gao et. al. “The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): A Surgical Activity Dataset for Human Motion Modeling”, Modeling and Monitoring of Computer Assisted Interventions (M2CAI) – MICCAI Workshop, 2014. (Online at: https://cirl.lcsr.jhu.edu/wp-content/uploads/2015/11/JIGSAWS.pdf)

[3] N. Ahmidi et al., “A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery,” in IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, pp. 2025-2041, Sept. 2017, doi: 10.1109/TBME.2016.2647680.

[4] B. van Amsterdam, M. J. Clarkson and D. Stoyanov, “Gesture Recognition in Robotic Surgery: A Review,”, IEEE Trans Biomed Eng, vol. 68, no. 6, pp. 2021-2035, June 2021, doi: 10.1109/TBME.2021.3054828.