PROPOSAL

Fairness and bias in multi-task learning with demographics


Supervisor: Veronika Cheplygina
Semester: Fall 2022
Tags: machine learning, medical imaging, data analysis, fairness

In medical imaging, multi-task learning can be used to train a model that jointly predicts a diagnosis and other patient characteristics, such as demographic variables. Among other applications, this strategy has frequently been used for diagnosing Alzheimer’s disease from brain MR scans, with age as an additional variable; see Zhang et al. for an example. The idea is that both the disease and age affect the brain negatively, so using both labels can help to regularize the model.
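
As a rough illustration, a multi-task set-up of this kind could look like the following PyTorch sketch: a shared encoder feeds one head for the diagnosis and one for age. The architecture, the layer sizes, and the 0.5 weight on the age loss are illustrative assumptions, not a prescribed design for the project.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with a diagnosis head and an age head (illustrative sizes)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(          # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.diagnosis_head = nn.Linear(32, num_classes)  # disease prediction
        self.age_head = nn.Linear(32, 1)                  # age regression

    def forward(self, x):
        features = self.encoder(x)
        return self.diagnosis_head(features), self.age_head(features)

model = MultiTaskNet()
x = torch.randn(4, 1, 128, 128)       # a batch of grayscale images
y_dx = torch.randint(0, 2, (4,))      # diagnosis labels
y_age = torch.rand(4, 1) * 100        # ages in years

logits, age_pred = model(x)
# Joint loss: both labels supervise the shared encoder (0.5 is an assumed weight).
loss = nn.CrossEntropyLoss()(logits, y_dx) + 0.5 * nn.MSELoss()(age_pred, y_age)
loss.backward()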

More recently, there has been growing attention to potential bias in deep learning models, motivated by the idea that performance should be independent of a patient’s demographics. This has led to research on “debiasing” during training, by encouraging models that are NOT able to predict a patient’s age or other demographic variables from the input image; see for example Abbasi et al.
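
One common way such debiasing is implemented is with a gradient reversal layer (as in domain-adversarial training): a demographic head tries to predict age from the shared features, while the reversed gradient pushes the encoder towards features from which age cannot be predicted. The sketch below is illustrative only and not necessarily the exact method used by Abbasi et al.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedNet(nn.Module):
    def __init__(self, num_classes=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.diagnosis_head = nn.Linear(16, num_classes)
        self.demographic_head = nn.Linear(16, 1)   # e.g. age, trained adversarially

    def forward(self, x):
        features = self.encoder(x)
        # The encoder receives the *negated* gradient from the demographic head,
        # so it is pushed towards features that do not reveal the demographic variable.
        demo = self.demographic_head(GradReverse.apply(features, self.lambd))
        return self.diagnosis_head(features), demo

model = DebiasedNet()
x = torch.randn(4, 1, 64, 64)
y_dx, y_age = torch.randint(0, 2, (4,)), torch.rand(4, 1) * 100
dx, demo = model(x)
loss = nn.CrossEntropyLoss()(dx, y_dx) + nn.MSELoss()(demo, y_age)
loss.backward()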

These two approaches seem to be at odds with each other. The aim of the project would be to compare the two strategies for a specific application (e.g. skin lesions, chest X-rays), both in terms of predictive performance and in terms of other relevant fairness metrics recently recommended by the community.
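
As one example of such a metric, a simple fairness check is to compare performance across demographic subgroups. The sketch below computes per-group AUC and the largest gap between groups; the choice of AUC and of the grouping variable are assumptions for illustration, and the actual metrics should follow the community recommendations mentioned above.

import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, group):
    """AUC per demographic subgroup, plus the largest gap between any two subgroups."""
    aucs = {g: roc_auc_score(y_true[group == g], y_score[group == g])
            for g in np.unique(group)}
    return aucs, max(aucs.values()) - min(aucs.values())

# Toy usage with made-up predictions and a binary grouping variable.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.1])
group = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
print(subgroup_auc(y_true, y_score, group))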

Multiple projects are possible; groups of 2 are preferred. Ideally, you should have experience with deep learning and with the HPC at ITU.