Choosing wisely your deep training loss
Publication date: 1 June 2021
Deep neural architectures must be properly trained. Inherently, training a deep network requires a carefully annotated data set and a differentiable loss function. Two case studies will be presented in this talk.

First, when collecting and annotating a large-scale data set is too tedious, an automatically annotated data set may be used instead. Such a data set has the prominent advantage of being collected with little human effort, but the drawback of containing potentially many annotation errors and other kinds of outliers. Therefore, when training a deep network on such a data set, one must design losses that are robust to the kind of contamination it contains. In our case, we propose a probabilistic loss based on a Gaussian-Uniform mixture and design an EM algorithm to separate outliers from inliers; only the inliers are then used to train the deep network (see the first sketch below).

The second case study arises when facing tasks whose associated algorithms are evaluated with performance measures that are far from differentiable. In those cases, the performance measure cannot be used directly as the training loss of the deep architecture, and one must find alternative proxy loss functions that approximate the behavior of the original evaluation metric. We take the case of multi-object tracking, where the evaluation metrics require an estimation-to-ground-truth assignment step, usually solved by means of the Hungarian algorithm. Since this step is not differentiable, such evaluation metrics cannot be used to train the tracking network. To overcome this problem, we propose to train an auxiliary network, the Deep Hungarian Network (DHN), to approximate the behavior of the Hungarian algorithm with a differentiable mapping (see the second sketch below). Once trained, the DHN can be used to approximate the evaluation metrics with differentiable losses based on its assignment step.
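As a rough illustration of the first case study, the sketch below fits a Gaussian-Uniform mixture to per-sample residuals with a few EM iterations and returns inlier responsibilities. The function name, the fixed uniform support `volume`, and the initialization values are assumptions of this sketch, not details from the talk.

```python
import numpy as np

def em_gaussian_uniform(residuals, volume, n_iters=20, eps=1e-8):
    """Fit a Gaussian (inlier) + Uniform (outlier) mixture to residuals
    and return the per-sample inlier responsibilities in [0, 1]."""
    r2 = np.asarray(residuals) ** 2
    w_in = 0.9                    # initial inlier proportion (assumed)
    var = r2.mean() + eps         # initial Gaussian variance
    unif = 1.0 / volume           # constant density of the uniform outlier component
    for _ in range(n_iters):
        # E-step: posterior probability that each sample is an inlier
        gauss = np.exp(-0.5 * r2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = w_in * gauss / (w_in * gauss + (1.0 - w_in) * unif + eps)
        # M-step: re-estimate the mixing weight and the Gaussian variance
        w_in = resp.mean()
        var = (resp * r2).sum() / (resp.sum() + eps) + eps
    return resp

# Usage: keep (or down-weight) samples the mixture deems inliers.
residuals = np.concatenate([np.random.normal(0, 0.1, 900),   # inliers
                            np.random.uniform(-5, 5, 100)])  # outliers
resp = em_gaussian_uniform(residuals, volume=10.0)
inlier_mask = resp > 0.5   # train the network only on these samples
```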
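For the second case study, a minimal sketch of the DHN idea follows: a small network is trained to imitate the (non-differentiable) Hungarian solver on random cost matrices. The fixed matrix size, the MLP architecture, and all hyperparameters here are illustrative assumptions; the actual DHN uses a different architecture that handles variable-sized matrices.

```python
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

N = 8  # fixed cost-matrix size for this sketch (the real DHN handles variable sizes)

# Hypothetical stand-in for the DHN: an MLP mapping a flattened cost matrix
# to a soft, differentiable assignment matrix.
net = nn.Sequential(
    nn.Linear(N * N, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N * N), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    cost = torch.rand(N, N)  # random cost matrix as a training example
    # Supervision: the hard assignment from the non-differentiable Hungarian solver
    rows, cols = linear_sum_assignment(cost.numpy())
    target = torch.zeros(N, N)
    target[torch.as_tensor(rows), torch.as_tensor(cols)] = 1.0
    pred = net(cost.flatten()).view(N, N)  # soft assignment prediction
    loss = nn.functional.binary_cross_entropy(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, the network is frozen; its soft assignments between predicted
# tracks and ground truth turn the evaluation metrics into differentiable proxies.
```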
| Date | State |
| --- | --- |
| 05/03/2020 | Concluded |