Incremental Model Updates on Tiny Hardware

Supervisors: Pınar Tözün, Robert Bayer
Semester: Fall 2024
Tags: resource-constrained hardware, data management, ML model updates, tinyML

Today, many data sources are small, low-powered, hardware-constrained devices such as mobile phones, wearables, and self-driving platforms. Edge computing is a broad term for computation performed on such edge devices. It is increasingly important to develop techniques that extract more value from data at the edge rather than always shipping the data to more powerful hardware in the cloud for processing and model training. Processing data closer to its source reduces data movement, which in turn reduces the latency, cost, and power required to deploy data-intensive applications at the edge.

Enabling efficient data processing and machine learning on resource-constrained devices, however, poses many challenges. One of them is keeping the machine learning models deployed on these devices up-to-date without frequent retraining, which requires exploring the impact of different model update mechanisms at the edge.

This project is suitable as a standalone project or as a BSc or MSc thesis at ITU during Fall 2024. If you are interested in resource-constrained hardware, benchmarking, and machine learning, this project would be a great fit for you.