Video Semantics Quality Assessment using Deep Learning
Oct 2020
This work proposes a method to assess the quality of user-generated videos (UGVs) of specific social events. The method matches the semantic information extracted from videos against the information obtained from textual news about the same event. Deep learning techniques are used to detect objects in the video scenes, while news articles are represented by a set of relevant terms automatically extracted from them. This paper describes the method and presents an evaluation of it.
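The core matching idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the video side is reduced to a set of detected object labels and the news side to a set of extracted terms, and scores their agreement with a simple Jaccard similarity.

```python
def match_score(video_labels, news_terms):
    """Jaccard similarity between detected object labels and news terms.

    Hypothetical scoring function for illustration only; the paper's
    actual matching method may differ.
    """
    v, n = set(video_labels), set(news_terms)
    if not v and not n:
        return 0.0
    return len(v & n) / len(v | n)


# Illustrative inputs: labels an object detector might output for a
# parade video, and terms extracted from a news article on the event.
labels = ["person", "flag", "car", "banner"]
terms = ["parade", "flag", "banner", "street", "person"]
print(match_score(labels, terms))  # → 0.5
```

A higher score indicates that the objects visible in the video are consistent with the reported content of the event, which is one plausible proxy for semantic quality.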