Medical Visual Question Answering
A medical Visual Question Answering (VQA) system can provide meaningful references for both doctors and patients during the treatment process. Unlike natural images, learning from medical images is more challenging due to limited amounts of data, class imbalance, and the presence of label noise in diagnosis tasks. Moreover, little attention is paid to how the images and meta-data of medical datasets are obtained. Recent studies show that, even with high performance on existing data, algorithms can learn “shortcuts”, such as the visibility of medical tools, and fail to generalize. This prevents the responsible translation of algorithms to real-life situations.
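To illustrate the class-imbalance issue mentioned above, one common mitigation is to weight the training loss by inverse class frequency. The sketch below (in plain Python, with hypothetical label counts; the function name and normalization scheme are our own, not from the references) computes such weights, which could then be passed to a weighted loss function:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class loss weights inversely proportional to class
    frequency, normalized so the weights sum to the number of classes."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    raw = {c: total / count for c, count in counts.items()}
    norm = sum(raw.values())
    return {c: n_classes * w / norm for c, w in raw.items()}

# Hypothetical diagnosis labels: "normal" heavily outnumbers "abnormal",
# as is typical in medical datasets.
labels = ["normal"] * 90 + ["abnormal"] * 10
weights = inverse_frequency_weights(labels)
```

Here the rare "abnormal" class receives a much larger weight than "normal", so misclassifying rare cases is penalized more heavily during training.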
Projects related to the use of meta-data, meta-learning, and transfer learning are possible. More precise research questions will be decided together with the student. The project requires some background knowledge and experience in deep learning. This project is co-supervised by Veronika Cheplygina.
Do, Tuong, et al. “Multiple meta-model quantifying for medical visual question answering.” International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2021.
Lau, Jason J., et al. “A dataset of clinically generated visual questions and answers about radiology images.” Scientific data 5.1 (2018): 1-10.
He, Xuehai, et al. “PathVQA: 30000+ questions for medical visual question answering.” arXiv preprint arXiv:2003.10286 (2020).