
Publication date: 20 June 2024

Split Learning and its Optimization Challenges

Split learning (SL) has recently been proposed as a way to enable resource-constrained devices to train neural network models in a distributed manner and to participate in federated learning. In a nutshell, SL splits the model into parts and allows clients (devices) to offload the largest part as a processing task to a computationally powerful helper (an edge server, the cloud, or other devices). In parallel SL, multiple helpers can process model parts of one or more clients, thus considerably reducing the maximum training time over all clients (the makespan). This talk will give an in-depth presentation of this setup and its advantages, as well as of the optimization challenges it poses. In particular, we will focus on orchestrating the workflow of this operation by formulating the joint optimization problem of client-helper assignment and scheduling decisions, with the goal of minimizing the training makespan. We will propose a solution strategy and study its performance in a variety of numerical evaluations based on measurements from our testbed.
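To make the setup concrete, below is a minimal sketch of a single split-learning training step in PyTorch-style Python. The layer sizes, variable names, and two-part split are illustrative assumptions, not the presenter's implementation; the point is only that the client runs the lightweight front of the model, while the helper runs the heavier remainder, and that only the cut-layer activations and their gradients cross the client-helper boundary.

```python
# Hypothetical split-learning step: client holds the first layers,
# a helper (edge server/cloud) holds the rest.
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())            # on the device
helper_part = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))  # offloaded

opt_client = torch.optim.SGD(client_part.parameters(), lr=0.01)
opt_helper = torch.optim.SGD(helper_part.parameters(), lr=0.01)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))   # one toy mini-batch

# Client side: forward pass up to the cut layer; the activations are sent to the helper.
smashed = client_part(x)
smashed_remote = smashed.detach().requires_grad_()       # what the helper receives

# Helper side: finish the forward pass, compute the loss, backpropagate its part.
loss = nn.functional.cross_entropy(helper_part(smashed_remote), y)
opt_helper.zero_grad()
loss.backward()
opt_helper.step()

# Client side: receive the cut-layer gradient back and finish backpropagation locally.
opt_client.zero_grad()
smashed.backward(smashed_remote.grad)
opt_client.step()
```

In parallel SL, several such helper-side computations run concurrently for different clients, which is what makes the client-helper assignment and scheduling decisions matter for the overall makespan.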

Presenter

Dimitra Tsigkari (Telefonica Research)

Date: 26/06/2024, 2:00 pm
Location: DI Seminars Room and Zoom
Host Bio: Dimitra Tsigkari completed her Ph.D. in Computer Science, Telecommunications and Electronics at EURECOM and Sorbonne University, France, where she worked on the optimization of caching and recommendation systems in video streaming services. She was then a postdoctoral researcher at TU Delft, Netherlands, where she worked on federated and distributed learning algorithms in mobile computing systems. She is currently a Research Scientist at Telefonica Research in Barcelona, Spain. Her research interests lie in the broad area of network optimization with a focus on edge/cloud computing and distributed learning.