
Effective transfer tagging from image to video

YANG YANG, The University of Queensland; YI YANG, Carnegie Mellon University; HENG TAO SHEN, The University of Queensland

Publisher
Association for Computing Machinery
Copyright
Copyright © 2013 by ACM Inc.
ISSN
1551-6857
DOI
http://dx.doi.org/10.1145/2457450.2457456
Publisher site
See Article on Publisher Site

Abstract

Recent years have witnessed a great explosion of user-generated videos on the Web. In order to achieve effective and efficient video search, it is critical for modern video search engines to associate videos with semantic keywords automatically. Most existing video tagging methods can hardly achieve reliable performance due to a deficiency of training data. It is noticed that abundant well-tagged data are available in other relevant types of media (e.g., images). In this article, we propose a novel video tagging framework, termed Cross-Media Tag Transfer (CMTT), which utilizes the abundance of well-tagged images to facilitate video tagging. Specifically, we build a "cross-media tunnel" to transfer knowledge from images to videos. To this end, an optimal kernel space, in which the distribution distance between images and videos is minimized, is found to tackle the domain-shift problem. A novel cross-media video tagging model is proposed to infer tags by exploring the intrinsic local structures of both labeled and unlabeled data, and to learn reliable video classifiers. An efficient algorithm is designed to optimize the …
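The abstract's idea of finding a kernel space that minimizes the distribution distance between image and video features is commonly measured with the Maximum Mean Discrepancy (MMD). The sketch below is not the paper's CMTT algorithm, only a minimal illustration of the distribution-distance criterion, using an RBF kernel and synthetic stand-in features; the function name, bandwidth choice, and data are all assumptions.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=None):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel.
    X: (n, d) source (image) features; Y: (m, d) target (video) features."""
    if gamma is None:
        gamma = 1.0 / X.shape[1]  # simple bandwidth heuristic, not from the paper
    def rbf(A, B):
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)
    # MMD^2 = mean k(x,x') + mean k(y,y') - 2 * mean k(x,y)
    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2.0 * rbf(X, Y).mean()

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (200, 16))   # synthetic "image" features
vid = rng.normal(0.5, 1.0, (150, 16))   # "video" features with a domain shift
same = rng.normal(0.0, 1.0, (150, 16))  # features drawn from the same distribution
print(mmd_rbf(img, vid) > mmd_rbf(img, same))  # shifted pair is farther apart
```

A transfer method in this spirit would search for a feature mapping (here, implicitly, the kernel) under which this discrepancy is small, so that classifiers trained on tagged images remain reliable on video features.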

Journal

ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), Association for Computing Machinery

Published: May 1, 2013
