Interactive Multimodal Learning for Venue Recommendation
2015; Institute of Electrical and Electronics Engineers; Volume: 17; Issue: 12; Language: English
DOI: 10.1109/tmm.2015.2480007
ISSN: 1941-0077
Authors: Jan Zahálka, Stevan Rudinac, Marcel Worring
Topic(s): Music and Audio Processing
Abstract: In this paper, we propose City Melange, an interactive and multimodal content-based venue explorer. Our framework matches the interacting user to the users of social media platforms exhibiting similar taste. The data collection integrates location-based social networks such as Foursquare with general multimedia sharing platforms such as Flickr or Picasa. In City Melange, the user interacts with a set of images and thus implicitly with the underlying semantics. The semantic information is captured through convolutional deep net features in the visual domain and latent topics extracted using latent Dirichlet allocation (LDA) in the text domain. These are further clustered to provide representative user and venue topics. A linear SVM model learns the interacting user's preferences and determines similar users. The experiments show that our content-based approach outperforms the user-activity-based and popular-vote baselines even in the early phases of interaction, while also being able to recommend mainstream venues to mainstream users and off-the-beaten-track venues to aficionados. City Melange is shown to be a well-performing venue exploration approach.
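The abstract's core pipeline (multimodal image features, implicit relevance feedback, a linear SVM preference model used to rank similar users) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the synthetic feedback labels, and the user-profile construction are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each image is represented by concatenating
# CNN visual features with LDA topic proportions (dimensions illustrative).
n_images, n_visual, n_topics = 200, 32, 10
X = np.hstack([
    rng.normal(size=(n_images, n_visual)),          # visual features
    rng.dirichlet(np.ones(n_topics), size=n_images) # LDA topic mixtures
])

# Implicit feedback from interaction: 1 = user engaged with the image,
# 0 = skipped (synthetic labels here, keyed to two feature dimensions).
y = (X[:, 0] + X[:, n_visual] > 0.5).astype(int)

# A linear SVM learns the interacting user's preference direction.
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

# Score candidate social-media users by their profile vectors (e.g. the
# mean feature vector of their uploaded images); a higher decision value
# indicates more similar taste.
user_profiles = rng.normal(size=(5, n_visual + n_topics))
scores = clf.decision_function(user_profiles)
ranking = np.argsort(-scores)  # most similar user first
```

The linear model keeps each interaction round cheap to retrain, which matters in an interactive setting where preferences are re-estimated as the user clicks through images.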