Multi-view visual Bayesian personalized ranking for restaurant recommendation


References (39)

Publisher
Springer Journals
Copyright
Copyright © Springer Science+Business Media, LLC, part of Springer Nature 2020
ISSN
0924-669X
eISSN
1573-7497
DOI
10.1007/s10489-020-01703-6

Abstract

In recent recommender systems, item images are often used in conjunction with deep convolutional networks to learn items' visual features directly. However, existing approaches usually represent an item with a single image, which is inadequate for items associated with images from multiple views. A restaurant, for example, has visual information about its food, drinks, environment, and so on, and each view can be represented by multiple images. In this paper, we propose a new factorization model that combines multi-view visual information with implicit feedback data for restaurant prediction and ranking. Visual features are extracted from images with a deep convolutional network and integrated into a collaborative filtering framework. To better personalize recommendations, the multi-view visual features are fused through user-related weights; these weights reflect each user's personalized visual preference for restaurants and are independent across users. We applied the model to personalized recommendation on two real-world restaurant review datasets. Experimental results show that our model with multi-view visual information outperforms models without visual information or with only single-view visual information.
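The abstract describes a BPR-style factorization in which CNN-extracted visual features from several views are fused by per-user weights before entering the preference score. The sketch below illustrates that idea only; all dimensions, the weight normalization, and the names `score` and `bpr_loss` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper)
n_users, n_items, n_views = 4, 6, 3   # e.g. views: food, drink, environment
k = 8          # latent factor dimension
d_cnn = 32     # dimension of CNN-extracted visual features per view
k_vis = 8      # dimension of the embedded visual factors

# Standard matrix-factorization components
gamma_u = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
gamma_i = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
beta_i = rng.normal(scale=0.1, size=n_items)         # item bias

# Visual components: per-view CNN features, a shared embedding,
# per-user visual factors, and per-user view weights
f = rng.normal(size=(n_items, n_views, d_cnn))       # CNN features, one row per view
E = rng.normal(scale=0.1, size=(d_cnn, k_vis))       # embedding of CNN features
theta_u = rng.normal(scale=0.1, size=(n_users, k_vis))
w = rng.dirichlet(np.ones(n_views), size=n_users)    # user-related view weights

def score(u: int, i: int) -> float:
    """Predicted preference of user u for restaurant i."""
    fused = w[u] @ f[i]                 # weighted fusion of the item's views
    visual = theta_u[u] @ (fused @ E)   # user-visual interaction term
    return beta_i[i] + gamma_u[u] @ gamma_i[i] + visual

def bpr_loss(u: int, i: int, j: int) -> float:
    """BPR pairwise loss: u interacted with item i but not with item j."""
    return -np.log(1.0 / (1.0 + np.exp(-(score(u, i) - score(u, j)))))
```

Because the view weights are per-user, two users seeing the same restaurant can weight its food and environment images differently, which is what lets the fusion express personalized visual preferences.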

Journal

Applied Intelligence (Springer Journals)

Published: Sep 13, 2020