Semantic VideoLectures.NET segmentation service
VideoLectures.NET mostly hosts lectures 1 to 1.5 hours long, linked with slides and enriched with metadata and additional textual content. Automatic temporal segmentation and annotation of the videos would improve the efficiency of our video search engine, let users search for sections within a video, and enable recommendation of similar content. Challenge participants would develop tools for automatic video segmentation that could then be integrated into VideoLectures.NET.
There will be three evaluation criteria:
1. The quality of segmentation and annotation. The key criterion for evaluation will be the quality of the segmentations and annotations* extracted from a particular video. Goal: the clearest separation of segments, together with a detailed automatic description of why each segment constitutes a new segment.
* We do not prescribe a vocabulary for the segment annotations. Participants should annotate the segments with descriptive labels that allow us to differentiate between segments and retrieve subsets of them, e.g. labels of actions (presentation, Q&A, etc.) or labels based on presentation (sub-)topics.
2. Service effectiveness. How much does the service increase the searchability of the content (when searching for specific content with a search engine, or while browsing)? Goals: quality of the annotations used for searching; processing time; amount of data processed.
3. Efficiency of the underlying algorithm. Goal: ease of integration, processing speed.
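To make the expected deliverable more concrete, a participant's tool could emit segment boundaries with descriptive labels in a simple machine-readable form. The sketch below is purely illustrative: the challenge prescribes neither this schema nor this label vocabulary, so the field names and labels are our own assumptions.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical output schema for one segmented lecture video. The field
# names and example labels are illustrative assumptions only; the
# challenge does not specify an output format or a label vocabulary.
@dataclass
class Segment:
    start: float   # segment start time, in seconds
    end: float     # segment end time, in seconds
    label: str     # descriptive annotation, e.g. "presentation", "Q&A"

def to_json(segments):
    """Serialize segments to JSON, checking they are ordered and non-overlapping."""
    for prev, cur in zip(segments, segments[1:]):
        if cur.start < prev.end:
            raise ValueError("segments must be ordered and non-overlapping")
    return json.dumps([asdict(s) for s in segments], indent=2)

segments = [
    Segment(0.0, 2700.0, "presentation: introduction to topic modelling"),
    Segment(2700.0, 3300.0, "Q&A"),
]
print(to_json(segments))
```

An output along these lines would directly support the evaluation criteria above: the labels feed the search/browsing use case, and the explicit boundaries make segmentation quality easy to compare against a ground truth.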
Video datasets for this challenge are available; please email the contacts listed below for instructions on how to access them.
Participants are also free to test their approaches on additional datasets beyond the videos provided by the Challenge organizers.
Tanja Zdolšek tanja.zdolsek -at- ijs.si
Vasilis Mezaris bmezaris -at- iti.gr