Sotirios A Tsaftaris, PhD

Chancellor's Fellow (Senior Lecturer) @ University of Edinburgh

CardiacA.I.: Machine learning for the analysis of multimodal cardiac MR images used in the diagnosis of coronary heart disease

Funder: EPSRC (UK) | Grant number: EP/P022928/1 | Dates: 1 September 2017 - 31 January 2019 | Status: Ongoing


A sedentary lifestyle, poor diet, smoking, and genetic and other health factors are major contributors to coronary heart disease (CHD). Although recent medical advances have reduced mortality compared to past decades, CHD remains the leading cause of death in the UK (73,000 deaths per year) and carries a tremendous economic burden: estimates put the cost to the UK economy at £6.7 billion per year. The overriding goal of this project is to exploit the multimodal information within cardiac magnetic resonance images to improve their analysis, facilitate the diagnosis of CHD, and improve its treatment.

Magnetic Resonance Imaging (MRI) is uniquely positioned to help as a diagnostic tool, since it is non-invasive and does not use ionising radiation. A typical cardiac protocol relies on several MR imaging sequences to provide images of different contrast, termed modalities hereafter, to assess disease status and progression. This range of acquisitions generates hundreds of multidimensional, multimodal images in a single patient exam, leading to severe data overload.

Robust, automated analysis algorithms would therefore help alleviate the clinical reading burden. Several algorithms have been proposed to segment and register the myocardium in the most commonly used modalities, but they consider each modality independently, and the problem remains difficult: performance is not yet adequate. As a result, the analysis of cardiac imaging data remains a manual, time-consuming, and expensive process typically performed by clinical experts. Despite the huge amount of data generated, in both clinical and research settings, only a fraction is analysed robustly, owing to the vast amount of time the analysis requires.

This proposal aims to address these shortcomings by developing mechanisms that exploit the information shared across modalities to enable the joint analysis of cardiac imaging data, and thus make a significant leap in how such data are analysed.

People involved and collaborators


Publications
  1. A. Chartsias, T. Joyce, V. Giuffrida, and S. A. Tsaftaris, 'Multimodal MR Synthesis via Modality-Invariant Latent Representation,' IEEE Transactions on Medical Imaging, vol. 37, no. 3, pp. 803-814, Mar. 2018. [Full text][PDF] [Source code]
  2. T. Joyce, A. Chartsias, and S. A. Tsaftaris, 'Robust Multi-Modal MR Image Synthesis,' MICCAI 2017. [PDF] [Source code]
  3. A. Chartsias, T. Joyce, R. Dharmakumar, and S. A. Tsaftaris, 'Adversarial Image Synthesis for Unpaired Multi-Modal Cardiac Data,' SASHIMI 2017: Simulation and Synthesis in Medical Imaging, Second International Workshop, held in conjunction with MICCAI 2017, Quebec City, Canada, 10 Sept. 2017. [PDF]


This work is supported by a First Grant [grant number EP/P022928/1] from the UK's Engineering and Physical Sciences Research Council (EPSRC).


News & Progress

Agis submitted a nice paper to MICCAI on semi-supervised learning and segmentation (WP2).
Sotos gave a seminar at the University of Bristol on image synthesis.
Code on multimodal synthesis and modality-invariant representation learning is available as [Source code].
Agis presented his work (joint with Thomas) on learning mappings between modalities at MICCAI (WP1).
Agis presented his work on learning mappings between modalities without pairing and co-registration at SASHIMI @ MICCAI 2017 (WP1).
Our data collection is growing: data from 40 patients obtained from the Royal Infirmary.
CardiacA.I. project start date.