
How unlabeled web videos help complex event detection?

conference contribution
posted on 2024-11-03, 14:45 authored by Huan Liu, Qinghua Zheng, Minnan Luo, Dingwen Zhang, Xiaojun Chang, Cheng Deng
The lack of labeled exemplars is an important factor that makes the task of multimedia event detection (MED) complicated and challenging. Utilizing manually selected and labeled external sources is an effective way to enhance the performance of MED. However, building such data usually requires professional human annotators, and the procedure is too time-consuming and costly to scale. In this paper, we propose a new robust dictionary learning framework for complex event detection that is able to handle both labeled and easy-to-obtain unlabeled web videos by sharing the same dictionary. By employing an lq-norm based loss jointly with structured sparsity based regularization, our model shows strong robustness against the substantial noisy and outlier videos from open sources. We devise an effective optimization algorithm to solve the proposed highly non-smooth and non-convex problem. Extensive experimental results on the standard TRECVID MEDTest 2013 and TRECVID MEDTest 2014 datasets demonstrate the effectiveness and superiority of the proposed framework for complex event detection.
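
For a concrete picture of the idea, a shared-dictionary objective of the kind described above might take roughly the following form; the exact formulation, symbols, and regularization weight here are illustrative assumptions rather than the authors' published model:

\min_{D,\,A_l,\,A_u}\; \|X_l - D A_l\|_q^q \;+\; \|X_u - D A_u\|_q^q \;+\; \lambda \,\big\|[A_l,\,A_u]\big\|_{2,1} \quad \text{s.t.}\; \|d_k\|_2 \le 1 \ \text{for each atom } d_k,

where X_l and X_u collect features of the labeled and the unlabeled web videos, D is the dictionary shared by both sets, 0 < q < 2 gives a robust lq-norm loss that down-weights noisy and outlier videos, and the l2,1 term imposes structured (row-wise) sparsity on the coding matrices.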

History

Start page

4040

End page

4046

Total pages

7

Outlet

Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017)

Name of conference

IJCAI 2017

Publisher

International Joint Conferences on Artificial Intelligence

Place published

United States

Start date

2017-08-19

End date

2017-08-25

Language

English

Copyright

© 2017 International Joint Conferences on Artificial Intelligence

Former Identifier

2006109453

Esploro creation date

2021-09-08
