MedICSS includes interactive sessions throughout the week, including the group mini projects below, and an optional poster session for attendees to share their own research with each other.
This year’s available projects include (gallery is a work in progress):
Image Quality Requirements for Digital Pathology Diagnostics
Leader: Lydia Neary-Zajiczek
Pathology is the study of human tissue and other material for evidence of disease, and like many healthcare services it is under tremendous pressure due to staff shortages and increasing demand [1]. Unlike radiology, pathology is still largely “analogue”: physical samples (tissue, blood etc.) are inspected under conventional light microscopes and a diagnosis is made. The digitization of pathology has been a goal for many years, with the aim of improving efficiency and increasing access to expert pathologists in underserved areas; however, it has yet to see widespread adoption.
One of the major obstacles to adoption is pathologists’ perception that digital images are of inferior quality to physical samples, despite enormous technological advances and evidence that diagnostic accuracy is equivalent between the two modalities [2]. Digitally scanning samples at the high resolution demanded by pathologists is time-consuming, requires expensive equipment and generates enormous amounts of data (multiple gigabytes for a single glass slide).
In this project you will learn how microscopes and cameras create digital representations of physical objects and, using publicly available histopathology image datasets and well-established machine learning techniques, explore the minimum image quality requirements for accurate pathology diagnosis.
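As a taste of the kind of experiment involved, here is a minimal sketch (not the project code): simulate progressively lower acquisition quality by blurring and down-sampling labelled patches, then train a small Keras classifier at each quality level. The arrays `x` (N, H, W, C) and `y` (N,) are hypothetical stand-ins for a labelled histopathology patch set, and the degradation parameters are illustrative.

```python
# Minimal sketch: degrade patch quality, then measure classification accuracy.
# `x`, `y` are hypothetical arrays of patches and binary labels; the patch
# side length is assumed divisible by `factor`.
import numpy as np
import tensorflow as tf
from scipy.ndimage import gaussian_filter, zoom

def degrade(patches, factor):
    """Blur, down-sample by `factor`, then up-sample back to the input size."""
    out = []
    for p in patches:
        blurred = gaussian_filter(p, sigma=(factor / 2.0, factor / 2.0, 0))
        small = zoom(blurred, (1.0 / factor, 1.0 / factor, 1))
        out.append(zoom(small, (factor, factor, 1)))
    return np.stack(out)

def small_cnn(input_shape):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# for factor in (1, 2, 4, 8):   # sweep the simulated resolution loss
#     x_lo = degrade(x, factor)
#     model = small_cnn(x_lo.shape[1:])
#     model.compile(optimizer="adam", loss="binary_crossentropy",
#                   metrics=["accuracy"])
#     model.fit(x_lo, y, validation_split=0.2, epochs=10)
```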
Background reading:
– Read through the following tutorial on basic image classification using Keras: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
– An overview of a Fourier treatment of an imaging system [3] (password summerUCL):
https://liveuclac-my.sharepoint.com/:b:/g/personal/ucabzaj_ucl_ac_uk/ETA29v95DrxLrRUaO-1K43UBGQVS_qrw1l6kNTOurWX0Ww?e=KJQd7V
Prerequisites: MATLAB, Python, Tensorflow (optional: GPU+CUDA)
– MATLAB: available free to UCL students (if not available, we can do everything in Python, but image manipulation is easier in MATLAB)
– Python 3.6.8 (PyCharm IDE recommended) with the following packages:
– numpy
– tensorflow 1.11.0 / tensorflow-gpu 1.11.0 (if your computer has an NVIDIA GPU, you will also need CUDA 9.0: https://www.tensorflow.org/install/gpu)
– keras 2.2.4
– matplotlib
References:
[1] The Royal College of Pathologists, “Meeting pathology demand: Histopathology workforce census,” London, UK, 2018.
[2] E. Goacher, R. Randell, B. J. Williams, and D. Treanor, “The Diagnostic Concordance of Whole Slide Imaging and Light Microscopy: A Systematic Review,” Arch. Pathol. Lab. Med., vol. 141, no. 1, pp. 151–161, Jan. 2017.
[3] J. W. Goodman, Introduction to Fourier optics. Roberts & Co, 2005.
TADPOLE Challenge: Prediction of Alzheimer’s Disease Evolution using Statistical Models and Machine Learning
Leaders: Mar Garcia, Neil Oxtoby
Alzheimer’s disease and related dementias affect more than 50 million people worldwide. No current treatments are available that can provably cure or even slow the progression of Alzheimer’s disease — all clinical trials of experimental drugs have so far failed to prove a disease-modifying effect. One reason why they fail is the difficulty in identifying patients at early disease stages, when treatments are most likely to have an effect. The Alzheimer’s Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge was designed to find the best approaches for predicting disease progression and thus help with early identification of at-risk subjects.
This project will run as an open, collaborative effort to forecast Alzheimer’s progression. In a 3-day friendly competition, attendees will group into teams and experiment with algorithms to predict future outcomes in patients and in those at risk of Alzheimer’s disease, using a publicly available dataset. We may run a live Kaggle-style leaderboard where participants make predictions and see their performance results in near-real time. Team(s) making the best predictions will earn massive data science street cred. 😉
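For a flavour of a baseline entry, the sketch below extrapolates a per-subject linear fit of one cognitive score. The file and column names (TADPOLE_D1_D2.csv, RID, EXAMDATE, ADAS13) follow the TADPOLE/ADNI spreadsheet convention, but treat them as assumptions until you have the data in hand.

```python
# Naive baseline sketch: forecast each subject's ADAS13 by extrapolating a
# per-subject linear fit. Column names follow the TADPOLE/ADNI convention;
# adapt to the spreadsheet you actually download.
import numpy as np
import pandas as pd

def forecast_subject(visits, months_ahead):
    """Fit score ~ time for one subject and extrapolate linearly."""
    t = (visits["EXAMDATE"] - visits["EXAMDATE"].min()).dt.days / 30.44
    y = visits["ADAS13"].to_numpy()
    if len(y) < 2:
        return np.repeat(y[-1], len(months_ahead))  # carry last value forward
    slope, intercept = np.polyfit(t, y, 1)
    return intercept + slope * (t.max() + np.asarray(months_ahead))

df = pd.read_csv("TADPOLE_D1_D2.csv", usecols=["RID", "EXAMDATE", "ADAS13"],
                 parse_dates=["EXAMDATE"], low_memory=False)
horizons = np.arange(1, 13)  # forecast 1 to 12 months past the last visit
preds = {rid: forecast_subject(g.sort_values("EXAMDATE"), horizons)
         for rid, g in df.dropna(subset=["ADAS13"]).groupby("RID")}
```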
Background:
– TADPOLE Challenge website: tadpole.grand-challenge.org
– TADPOLE MedICSS repository: github.com/ucl-pond/MedICSS-TADPOLE
References:
[1] Challenge description manuscript: Marinescu et al., 2018 arXiv:1805.03909
[2] Challenge results manuscript: Marinescu et al., 2020 arXiv:2002.03419
Prerequisites: Python; ADNI Data access (we can help)
Introduction to Imaging Genetics
Leader: Andre Altmann
Traditional genetic studies often compare cases with controls in order to gain insights into disease processes. Instead of these crisp labels, imaging genetics uses quantitative biomarkers derived from imaging data as intermediate phenotypes to gain more nuanced insights. Imaging genetics is most widely applied in the field of neuroimaging to understand the genetic mechanisms that underlie brain development, brain function and brain disease.
This tutorial project will guide students to gain hands-on experience with imaging genetics data and workflows. The first objective will be processing genetic data and conducting classic genetic analysis with imaging features [2,3]. The second objective will be the use of machine learning methods to conduct multivariate imaging genetics analyses [1,4].
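As an illustration of the first objective, a classical association scan can be driven from Python by calling Plink 1.9 and filtering its output; the sketch below uses standard Plink flags, but the file names (study.bed/.bim/.fam, thickness.txt, covars.txt) are placeholders for whatever data the project supplies.

```python
# Sketch of a classical imaging-genetics GWAS step: run a Plink 1.9 linear
# association against an imaging-derived phenotype, then load the results.
import subprocess
import pandas as pd

subprocess.run([
    "plink", "--bfile", "study",          # binary genotype fileset (placeholder)
    "--pheno", "thickness.txt",           # FID IID phenotype, e.g. cortical thickness
    "--linear", "--covar", "covars.txt",  # adjust for age, sex, ancestry PCs
    "--out", "gwas_thickness",
], check=True)

# Plink writes gwas_thickness.assoc.linear; keep the additive-model rows.
res = pd.read_csv("gwas_thickness.assoc.linear", delim_whitespace=True)
hits = res[(res["TEST"] == "ADD") & (res["P"] < 5e-8)]
print(hits.sort_values("P").head())
```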
Background reading:
[1] (The hitchhiker’s guide to) Imaging-Genetics: https://marcolorenzi.github.io/material/winter_school/Imaging_Genetics_Book_Chapter.pdf
[2] Medland, et al. Whole-genome analyses of whole-brain data: working within an expanded search space. Nat Neurosci 17, 791–800 (2014): https://doi.org/10.1038/nn.3718
[3] Elliott, et al. Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature 562, 210–216 (2018): https://doi.org/10.1038/s41586-018-0571-7
[4] Lorenzi, et al. Susceptibility of brain atrophy to TRIB3 in Alzheimer’s disease, evidence from functional prioritization in imaging genetics. PNAS 115 3162-3167 (2018): https://doi.org/10.1073/pnas.1706100115
Prerequisites: Plink (version 1.9) (https://www.cog-genomics.org/plink/), Python3, Jupyter notebook
IQT: Image Quality Transfer
Leaders: Matteo Figini, Georgia Doumou
Image Quality Transfer (IQT) is a machine-learning framework for propagating information from state-of-the-art imaging systems into clinical environments where the same image quality cannot be achieved [1]. It has been successfully applied to increase the spatial resolution of diffusion MRI data [1, 2] and to enhance both contrast and resolution in images from low-field scanners [3]. In this project, we will explore the deep learning implementation of IQT and investigate the effect of its different parameters and options, using data from publicly available MRI databases. We will also test the algorithms on clinical data to assess the enhancement of images from epilepsy patients.
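To make the setting concrete, here is a minimal sketch of the patch-to-patch regression at the heart of IQT; the tiny 3D CNN is a stand-in for the far deeper networks used in [1-3], and the `lo_patches`/`hi_patches` arrays are assumed to be matched low-/high-quality training patches.

```python
# Toy patch-to-patch regressor illustrating the IQT idea: learn a mapping
# from low-quality to high-quality patches. Shapes are illustrative.
import tensorflow as tf

def iqt_cnn(patch_shape=(32, 32, 32, 1)):
    """Patch-to-patch regression network (toy stand-in for IQT models)."""
    inp = tf.keras.Input(shape=patch_shape)
    x = tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv3D(1, 3, padding="same")(x)  # predicted hi-quality patch
    return tf.keras.Model(inp, out)

model = iqt_cnn()
model.compile(optimizer="adam", loss="mse")
# model.fit(lo_patches, hi_patches, batch_size=8, epochs=20)  # arrays assumed given
```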
Prerequisites: GPU, Tensorflow
References:
[1] D. Alexander et al., “Image quality transfer and applications in diffusion MRI”, Neuroimage 2017. 152:283-298
[2] R. Tanno, et al. “Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI.” NeuroImage 2021. 225
[3] M. Figini et al., “Image Quality Transfer Enhances Contrast and Resolution of Low-Field Brain MRI in African Paediatric Epilepsy Patients”, ICLR 2020 workshop on Artificial Intelligence for Affordable Healthcare
Implementation and evaluation of the learning-based method R2D2 for endoscopy image registration
Leader: Frans Chadebecq
Context and objectives:
While learning-based registration approaches have been shown to outperform hand-crafted approaches in common computer vision scenarios such as city mapping or landscape image mosaicking [1], there remain numerous application scenarios in which registration is a challenging problem and hand-crafted methods such as SIFT are still the de facto gold standard. This is particularly the case for endoscopy, due to the paucity of reliable scene landmarks, the complex reflectance properties of tissue, and the constrained manipulation of endoscopes within confined and deformable environments.
The objective of this project is to implement and evaluate the learning-based method R2D2 for endoscopy image registration [2]. We will first consider different image augmentation schemes [3] for simulating common colonoscopy constraints. We will then implement the core architecture of R2D2 and evaluate the efficiency of this approach on real endoscopy images. Finally, depending on the progress of the project, we will extend R2D2 to improve its robustness to colonoscopy artifacts such as motion blur and illumination artifacts.
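For the first step, an augmentation pipeline along these lines could approximate colonoscopy conditions with the imgaug library [3]; the augmenters are real imgaug operations, but the parameter ranges are illustrative guesses rather than tuned values.

```python
# Sketch of an augmentation scheme approximating colonoscopy conditions:
# motion blur, brightness/contrast changes, and slight tissue deformation.
import imgaug.augmenters as iaa

colon_aug = iaa.Sequential([
    iaa.MotionBlur(k=(3, 15)),                          # camera/scope motion
    iaa.MultiplyBrightness((0.6, 1.4)),                 # varying light source
    iaa.GammaContrast((0.7, 1.5)),                      # wet-tissue reflectance shifts
    iaa.ElasticTransformation(alpha=(0, 30), sigma=5),  # soft-tissue deformation
], random_order=True)

# images_aug = colon_aug(images=images)  # images: list/array of HxWxC uint8 frames
```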
Prerequisites: Python, Pytorch, MATLAB
References:
[1] Ma, J., Jiang, X., Fan, A. et al. Image Matching from Handcrafted to Deep Features: A Survey. Int J Comput Vis 129, 23–79 (2021). https://doi.org/10.1007/s11263-020-01359-2
[2] Revaud, J., De Souza, C., Humenberger, M., Weinzaepfel, P. R2D2: Repeatable and Reliable Detector and Descriptor. NEURIPS2019. https://arxiv.org/abs/1906.06195
[3] Imgaug: a library for image augmentation in machine learning experiments. https://imgaug.readthedocs.io/en/latest/
Implementing Reproducible Medical Image Analysis Pipelines
Leaders: Dave Cash, Haroon Chughtai
This project provides a demonstration of how to implement a reproducible medical image analysis pipeline for scalable, high-throughput analysis without the need for substantial coding experience. It will also show how solutions for maintaining the privacy of study participants can be implemented with low overhead. It will use all open-source software; in particular, the data and analysis will be managed through XNAT, a widely used web-based platform. In this tutorial, we will go through the benefits of this platform for automating the handling, importing, and cleaning of DICOM data, conversion to NIfTI, de-facing structural T1 data to provide additional assurance of privacy, and finally volumetric and cortical thickness analysis using FastSurfer, a free deep learning implementation of FreeSurfer. The project will determine how much de-facing algorithms change the results of FastSurfer compared to the original images.
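As a taste of scripted interaction with XNAT, the sketch below walks a project's subjects and sessions with the community `xnat` Python package (xnatpy); the server URL and project ID are placeholders.

```python
# Sketch of scripted access to an XNAT project using xnatpy.
# Server URL and project ID are placeholders.
import xnat

with xnat.connect("https://xnat.example.org", user="me") as session:
    project = session.projects["MYPROJECT"]
    for subject in project.subjects.values():
        for experiment in subject.experiments.values():
            print(subject.label, experiment.label)
            # experiment.download(f"{experiment.label}.zip")  # pull imaging data
```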
Further courses and tutorials on medical image data management, software development, and medical image analysis will soon be available from Health and Bioscience IDEAS, a UKRI-funded training programme on medical imaging for UK researchers.
Prerequisites: FastSurfer, XNAT
Diffusion MRI analysis for fetal MRI
Leader: Paddy Slator
Diffusion MRI is sensitive to the microstructural and microcirculatory properties of tissue, and is emerging as a promising tool for diagnosis and monitoring of the mother and fetus during pregnancy. However, this is a relatively new application, and there has been little diffusion MRI model development compared to other organs, such as the brain. Consequently, the best models for quantitative assessment of pregnancy-specific tissue structures — such as the placenta, uterine wall, and fetal organs — are not known.
In this project, you will fit a variety of models to diffusion MRI scans acquired during pregnancy, and quantify which models best describe the data within distinct tissue regions.
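Although the project itself runs in MATLAB, the fitting logic translates directly to any language; below is a Python sketch of fitting one commonly used candidate, the two-compartment IVIM model, to a single synthetic voxel. The b-values and parameter values are illustrative only.

```python
# Sketch of fitting the two-compartment IVIM model
#   S(b) = S0 * (f * exp(-b * D*) + (1 - f) * exp(-b * D))
# to one voxel's signal, here synthesised for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

bvals = np.array([0, 10, 30, 50, 100, 200, 400, 600, 800], float)  # s/mm^2
signal = ivim(bvals, 1.0, 0.3, 0.05, 0.002)        # synthetic test voxel
signal += np.random.normal(0, 0.01, signal.shape)  # add measurement noise

p0 = [1.0, 0.2, 0.02, 0.001]                       # initial guess
bounds = ([0, 0, 0.003, 0], [2, 1, 0.5, 0.003])    # keep D* > D
popt, _ = curve_fit(ivim, bvals, signal, p0=p0, bounds=bounds)
print(dict(zip(["S0", "f", "D*", "D"], popt)))
```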
Prerequisites: MATLAB installation
References:
[1] https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.27036
[2] https://www.sciencedirect.com/science/article/pii/S1053811911011566
Foveation for Segmentation of Mega-pixel Histology Images
Leaders: Chen Jin, Thomy Mertzanidou
Segmenting histology images is challenging because of the sheer size of the images, which contain millions or even billions of pixels. Typical solutions pre-process each histology image by dividing it into fixed-size patches and/or applying uniform down-sampling to meet memory constraints. Such operations incur information loss in the field-of-view (FoV) (i.e., spatial coverage) and image resolution.
In this project, students will first be guided to construct a basic segmentation model in Python and Pytorch, then validate it on real prostate cancer histology data. We start by investigating how typical dividing/down-sampling pre-processing leads to a trade-off between FoV and resolution, and thus impacts segmentation performance. Then we will test popular methods designed to address this problem (e.g. the foveation module [1]), and students are encouraged to contribute their own analysis of and insights into these methods.
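To see the trade-off concretely, the sketch below extracts inputs of identical tensor size (i.e. a fixed memory budget) that cover different fractions of a mega-pixel image; the sizes and names are illustrative.

```python
# FoV-vs-resolution trade-off under a fixed memory budget: every extracted
# input is 256x256 pixels but covers a different fraction of the image.
import torch
import torch.nn.functional as F

def extract(image, center, fov, out_size=256):
    """Crop a (fov x fov) window around `center`, resize to out_size."""
    c, h, w = image.shape
    y, x = center
    half = fov // 2
    patch = image[:, max(y - half, 0):y + half, max(x - half, 0):x + half]
    return F.interpolate(patch.unsqueeze(0), size=(out_size, out_size),
                         mode="bilinear", align_corners=False)[0]

image = torch.rand(3, 4096, 4096)               # stand-in for a mega-pixel slide
narrow = extract(image, (2048, 2048), fov=256)   # full resolution, small FoV
wide = extract(image, (2048, 2048), fov=2048)    # wide FoV, 8x down-sampled
print(narrow.shape, wide.shape)                  # same memory budget for both
```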
Prerequisites: Python, Pytorch, GPU
References:
[1] Jin, Chen, et al. “Foveation for Segmentation of Mega-pixel Histology Images” MICCAI 2020
FetReg: Placental Vessel Segmentation and Registration in Fetoscopy
Leaders: Sophia Bano, Francisco Vasconcelos
Fetoscopy Laser Photocoagulation (FLP) is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS). In TTTS, the flow of blood between the two fetuses becomes uneven; as a result, the donor experiences slow growth while the recipient is at risk of heart failure due to the excess blood it receives. During FLP, the abnormal vascular anastomoses are identified and laser-ablated to regulate the flow of blood. The procedure is particularly challenging due to the limited field-of-view, poor manoeuvrability of the fetoscope, poor visibility due to fluid turbidity and variability in the light source, and the unusual position of the placenta. These challenges may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention can help overcome them by expanding the fetoscopic field-of-view and providing better visualization of the vessel map, which in turn can guide surgeons in localizing abnormal anastomoses.
This project aims to use supervised image segmentation models to segment the placental vessels, and to perform direct image registration on the segmented vessel maps to generate a consistent mosaic of the intra-operative environment [1]. The project will use the publicly available Placental Vessel Dataset [2] and will also provide interested students with the basics for participating in the MICCAI2021 EndoVis FetReg Challenge [3]. The FetReg challenge was featured as the challenge of the month in the June 2021 issue of Computer Vision News magazine [4].
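As a minimal stand-in for the registration stage, consecutive vessel probability maps (e.g. from the trained segmentation network) can be aligned with OpenCV's ECC optimiser; the pipeline in [1] performs its own direct registration, so treat this only as an illustration of registering segmentations rather than raw frames.

```python
# Align two vessel probability maps with an affine warp via ECC maximisation.
import cv2
import numpy as np

def register_vessel_maps(fixed, moving):
    """fixed/moving: float32 vessel probability maps in [0, 1], same shape."""
    warp = np.eye(2, 3, dtype=np.float32)          # initial affine warp
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(fixed, moving, warp, cv2.MOTION_AFFINE,
                                   criteria, None, 5)  # mask=None, filter size 5
    h, w = fixed.shape
    return cv2.warpAffine(moving, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```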
Technical Requirements:
– Basic understanding of image segmentation and registration techniques
– Hands-on experience with a deep learning framework (Pytorch/Tensorflow)
Useful links:
[1] Bano, S., Vasconcelos, F., Shepherd, L.M., Vander Poorten, E., Vercauteren, T., Ourselin, S., David, A.L., Deprest, J. and Stoyanov, D., 2020, October. Deep placental vessel segmentation for fetoscopic mosaicking. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 763-773). Springer, Cham. arxiv.org/pdf/2007.04349.pdf
[2] Placental Vessel Dataset: www.ucl.ac.uk/interventional-surgical-sciences/fetoscopy-placenta-data
[3] endovis.grand-challenge.org/
[4] www.rsipvision.com/ComputerVisionNews-2021June/22/
Acceleration of microstructure imaging in diffusion MRI with deep learning
Leader: Ting Gong
Diffusion MRI (dMRI) can probe tissue microstructure by acquiring a set of dMRI measurements and solving the inverse problem of a diffusion model. Conventional model-fitting methods usually require a large number of dMRI measurements and hence long acquisition times; deep learning methods based on neural networks can greatly accelerate this process.
This tutorial project will guide participants (code and data provided) to build neural networks that effectively solve the inverse problem of diffusion parameter estimation. Taking the diffusion tensor/kurtosis model as an example, we will explore some voxel-based and patch-based neural networks in practice. The objectives of this project are to obtain 1) a basic understanding of deep learning approaches applied in dMRI, and 2) practical knowledge of the essential components in building neural networks for diffusion model fitting.
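To make the voxel-based idea concrete: a small fully connected network maps each voxel's measured signals straight to model parameters, with training targets taken from a conventional fit to a fully sampled acquisition. The sketch below assumes hypothetical array shapes; the provided code and data define the real ones.

```python
# Voxel-based sketch: regress diffusion tensor/kurtosis parameters directly
# from a reduced set of dMRI signals. n_meas and n_params are assumptions.
import tensorflow as tf

n_meas, n_params = 30, 8   # e.g. 30 measurements -> MD, FA, MK, ... (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(150, activation="relu", input_shape=(n_meas,)),
    tf.keras.layers.Dense(150, activation="relu"),
    tf.keras.layers.Dense(150, activation="relu"),
    tf.keras.layers.Dense(n_params),   # regressed diffusion parameters
])
model.compile(optimizer="adam", loss="mse")
# model.fit(signals, params_from_full_fit, epochs=50, batch_size=256)
# Training targets come from a conventional fit to a fully sampled scan;
# at test time the network needs only the reduced measurement subset.
```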
Prerequisites: Python, Tensorflow, Keras
References:
[1] Gong, T., et al. (2018). Efficient Reconstruction of Diffusion Kurtosis Imaging Based on a Hierarchical Convolutional Neural Network. Presented at: the 26th ISMRM Annual Meeting.
[2] Li, Z., et al. (2019). Fast and Robust Diffusion Kurtosis Parametric Mapping Using a Three-Dimensional Convolutional Neural Network. IEEE Access, 7, 71398-71411: https://doi.org/10.1109/access.2019.2919241
Tractography: modelling connections in the human brain, and the improvements offered by deep learning
Leaders: Anna Schroder, Lawrence Binding, Marco Palombo, Neil Oxtoby
Tractography [1] is currently the only tool available for non-invasively probing the structural connectivity of the brain in vivo. It has widespread potential applications, from surgical planning [2] to modelling the spread of neurodegenerative diseases through the brain [3]. However, tractography is subject to extensive modelling errors, resulting in a large number of false positive and false negative connections in the resulting connectome [4]. This severely limits the applications and reliability of tractography.
This project will introduce participants to the basic principles of tractography. Participants will have the opportunity to implement tractography algorithms from scratch and compare the results to state-of-the-art tractography software tools in MRtrix3 [5]. Participants will also have the opportunity to explore a fast deep-learning DTI algorithm [6] and understand how this work can improve the accessibility of tractography.
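As a hint of the from-scratch implementation: deterministic tracking reduces to repeatedly stepping along the principal diffusion direction of the current voxel, with stopping rules for leaving the mask or turning too sharply. The sketch below assumes a precomputed `peaks` array of principal eigenvectors and illustrative parameter values.

```python
# Minimal deterministic streamline tracker over an (X, Y, Z, 3) array of
# principal diffusion directions (`peaks`), zero outside the tracking mask.
import numpy as np

def track(seed, peaks, step=0.5, max_steps=1000, angle_thresh=60.0):
    """Follow principal directions from `seed` (voxel coordinates)."""
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    line = [pos.copy()]
    for _ in range(max_steps):
        ijk = tuple(np.round(pos).astype(int))
        if any(c < 0 or c >= s for c, s in zip(ijk, peaks.shape[:3])):
            break                          # left the image volume
        d = peaks[ijk].astype(float)
        if np.linalg.norm(d) == 0:
            break                          # left the tracking mask
        d /= np.linalg.norm(d)
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:
                d = -d                     # eigenvectors have no sign
            angle = np.degrees(np.arccos(np.clip(np.dot(d, prev_dir), -1, 1)))
            if angle > angle_thresh:
                break                      # curvature threshold exceeded
        pos = pos + step * d
        prev_dir = d
        line.append(pos.copy())
    return np.array(line)
```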
Prerequisites:
– Knowledge: MATLAB or python
– Equipment: Own laptop, no GPU needed
References:
[1] Jeurissen, B., et al. (2019). Diffusion MRI fiber tractography of the brain. NMR in Biomedicine, 32(4), p.e3785.
[2] Winston, G.P., et al. (2012). Optic radiation tractography and vision in anterior temporal lobe resection. Annals of neurology, 71(3), pp.334-341.
[3] Brettschneider, J., et al. (2015). Spreading of pathology in neurodegenerative diseases: a focus on human studies. Nature Reviews Neuroscience, 16(2), pp.109-120.
[4] Maier-Hein, K.H., et al. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature communications, 8(1), pp.1-13.
[5] Tournier, J.D., et al. (2019). MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, 202, p.116137.
[6] Tian, Q., et al. (2020). DeepDTI: High-fidelity six-direction diffusion tensor imaging using deep learning. NeuroImage, 219, p.117017.
Basic Augmented Reality Demo (BARD)
Leader: Matt Clarkson
In computer aided surgery, we seek to build systems that guide the surgeon. This should lead to safer surgery, and better outcomes for the patients. In this project, students will investigate some of the key concepts in building an “Augmented Reality” (AR) system for computer aided surgery.
In keyhole surgery, the video camera of the laparoscope provides a convenient means of capturing a view of the surgical scene. The video image provides the “Reality” part of “Augmented Reality”. It can then be “Augmented” with additional information from pre-operative data such as Magnetic Resonance (MR) or Computed Tomography (CT) scans, for example highlighting the position of tumours and critical structures such as blood vessels.
In this project students will construct a basic AR system using Python. A live video camera (laptop webcam) will be used to capture a view of a surgical scene (pelvis phantom). A surgical pointer will be calibrated, used to mark fiducial points, and then used to register a CT model of the pelvis to the scene. The CT data will then be overlaid on the video to complete the demo. After completing this workshop the student should be able to (a code sketch of the first step follows the checklist):
- Calibrate a camera using a chessboard or similar.
- Calibrate a tracked pointer, using an invariant point method.
- Use the tracked pointer to locate fiducial markers in the camera scene.
- Use the located fiducial markers to register the pre-operative CT scan of a phantom to the camera scene.
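For the first item, a minimal chessboard calibration sketch with OpenCV is given below; the board dimensions, square size and image paths are placeholders for whatever calibration target and captures you use.

```python
# Chessboard camera calibration with OpenCV: recover the intrinsic matrix
# and distortion coefficients from a set of captured frames.
import glob
import cv2
import numpy as np

pattern = (9, 6)       # inner corners of the chessboard (placeholder)
square_mm = 25.0       # printed square size in mm (placeholder)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts = [], []
for fname in glob.glob("calib_images/*.png"):   # frames grabbed from the webcam
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```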
Prerequisites: Python
Deep Learning for Medical Image Segmentation and Registration
Leader: Yipeng Hu
One of the most successful modern deep-learning applications in medical imaging is image segmentation. From neurological pathology in MR volumes to fetal anatomy in ultrasound videos, from cellular structures in microscopic images to multiple organs in whole-body CT scans, the list is ever expanding.
This tutorial project will guide students to build and train a state-of-the-art convolutional neural network from scratch, then validate it on real patient data.
The objectives of this project are to obtain:
1) a basic understanding of machine learning approaches applied to medical image segmentation,
2) practical knowledge of the essential components in building and testing deep learning algorithms, and
3) hands-on experience in coding a deep segmentation network for real-world clinical applications.
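For orientation, here is a minimal encoder-decoder segmentation sketch; the framework (tf.keras) and the toy architecture are assumptions, since the tutorial builds its own network from scratch.

```python
# Tiny encoder-decoder with one skip connection, in the spirit of U-Net.
# `images`/`masks` are assumed arrays of inputs and binary label maps.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(shape=(128, 128, 1)):
    inp = tf.keras.Input(shape=shape)
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(e2)
    c1 = layers.Concatenate()([u1, e1])                   # skip connection
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)   # foreground probability
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(images, masks, epochs=10)  # arrays assumed given
```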
Prerequisites: Python, GPU (via Colaboratory)