Introduction

Learning Multi-Objective Rewards and User Utility Function in Contextual Bandits for Personalized Ranking

Abstract

This paper tackles the problem of providing users with ranked lists of relevant search results by incorporating contextual features of the users and search results, and by learning how a user values multiple objectives. For example, to recommend a ranked list of hotels, an algorithm needs to learn which hotels are the right price for particular users, as well as how users vary in their weighting of price against location. We formulate this context-aware, multi-objective ranking problem as a Multi-Objective Contextual Ranked Bandit (MOCR-B). To solve the MOCR-B problem, we present a novel algorithm, named Multi-Objective Utility-Upper Confidence Bound (MOU-UCB). The goal of MOU-UCB is to learn how to generate a ranked list of resources that maximizes rewards across multiple objectives to give relevant search results. Our algorithm learns to predict rewards in multiple objectives from contextual information (combining the Upper Confidence Bound algorithm for contextual multi-armed bandits with neural network embeddings), and also learns how a user weights the multiple objectives. Our empirical results reveal that the ranked lists generated by MOU-UCB lead to better click-through rates than approaches that do not learn a utility function over multiple reward objectives.
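To illustrate the general idea described in the abstract, the following is a minimal sketch, not the paper's actual MOU-UCB implementation: it combines per-objective LinUCB-style reward estimators with a learned utility weight vector over objectives, and ranks items by the utility-weighted sum of per-objective upper confidence bounds. The class name, the linear (rather than neural-embedding) reward model, and the click-driven weight update rule are all illustrative assumptions.

```python
import numpy as np

class MultiObjectiveLinUCB:
    """Hypothetical sketch of a multi-objective contextual bandit ranker.

    Each objective gets its own LinUCB-style ridge-regression estimator;
    a learned weight vector over objectives plays the role of the user's
    utility function. This is an assumption-laden stand-in for MOU-UCB,
    not the paper's method (which uses neural network embeddings).
    """

    def __init__(self, n_objectives, dim, alpha=1.0, lr=0.05):
        self.alpha = alpha  # width of the confidence bonus
        self.lr = lr        # step size for the utility-weight update
        # One ridge-regression model (A, b) per objective.
        self.A = [np.eye(dim) for _ in range(n_objectives)]
        self.b = [np.zeros(dim) for _ in range(n_objectives)]
        # Utility weights over objectives, kept on the probability simplex.
        self.w = np.full(n_objectives, 1.0 / n_objectives)

    def ucb_scores(self, x):
        # Upper confidence bound on the predicted reward for each objective.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(theta @ x + bonus)
        return np.array(scores)

    def rank(self, contexts):
        # Rank resources by the utility-weighted sum of per-objective UCBs.
        utilities = [self.w @ self.ucb_scores(x) for x in contexts]
        return sorted(range(len(contexts)), key=lambda i: -utilities[i])

    def update(self, x, rewards, clicked):
        # Update each objective's reward model with its observed reward.
        for j, r in enumerate(rewards):
            self.A[j] += np.outer(x, x)
            self.b[j] += r * x
        # Illustrative utility update: nudge weights toward objectives
        # whose rewards coincide with a click, then renormalize.
        grad = (1.0 if clicked else -1.0) * np.asarray(rewards)
        self.w = np.clip(self.w + self.lr * grad, 1e-6, None)
        self.w /= self.w.sum()
```

In this sketch the per-objective uncertainty (the UCB bonus) drives exploration, while the weight vector `w` is what the abstract calls the user's utility function over objectives; learning both jointly is the core of the MOCR-B setting.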

Publication
IJCAI 2019