PROPOSAL

Comparing different pre-training strategies for transfer learning in medical imaging


Supervisors: Dovile Juodelyte
Semester: Spring 2022
Tags: transfer learning, deep learning, medical imaging

Deep neural networks have been revolutionary in computer vision, and publicly available image datasets played an important role in this success. Due to their size, neural networks require vast amounts of data for training. Yet in medical settings, dataset sizes are often very limited due to the cost of data annotation, privacy concerns, differences in imaging techniques, and other factors. In such cases, transfer learning is often used. The most common approach is to take a supervised classifier trained on ImageNet (or another available large dataset) and fine-tune its parameters on the target dataset, such as clinical images. However, other feature representation learning methods can also be useful, e.g. self-supervised pre-training [1]. In this project, we aim to compare different pre-training strategies (supervised, self-supervised [2], multi-task learning [3]) on medical imaging target datasets.
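To make the fine-tuning step concrete, below is a minimal sketch in PyTorch/torchvision. It loads an ImageNet-pretrained backbone, replaces the classification head to match the target task, and updates all parameters on the target data. The dataset loader (target_loader) and the number of target classes are hypothetical placeholders, not part of the proposal; the actual experiments would compare several pre-training strategies, not only the supervised ImageNet variant shown here.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a classifier pre-trained on ImageNet (supervised pre-training).
    model = models.resnet18(pretrained=True)

    # Replace the ImageNet head with one matching the target medical dataset
    # (number of classes assumed here for illustration only).
    num_target_classes = 2
    model.fc = nn.Linear(model.fc.in_features, num_target_classes)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # Fine-tune all parameters on the target dataset.
    model.train()
    for images, labels in target_loader:  # hypothetical DataLoader over clinical images
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

For a self-supervised or multi-task pre-trained backbone, only the first step changes: the pretrained weights come from the corresponding pre-training run instead of the ImageNet classifier.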

The project is suitable for a group, as the experiments can be run in parallel. It is mostly intended as a BDS thesis or a KDS/KCS research project, since it requires background knowledge and experience in deep learning. This project is co-supervised by Veronika Cheplygina.

[1] Newell, A., & Deng, J. (2020). How useful is self-supervised pretraining for visual tasks? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7345-7354).

[2] Jing, L., & Tian, Y. (2020). Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11), 4037-4058.

[3] Raumanns, R., Schouten, G., Joosten, M., Pluim, J. P., & Cheplygina, V. (2021). ENHANCE (ENriching Health data by ANnotations of Crowd and Experts): A case study for skin lesion classification. arXiv preprint arXiv:2107.12734.