Understanding users’ search intents on the web can be enhanced by contextual information about users and web resources. Providing users with relevant search results requires balancing multiple objectives, such as users’ explicitly stated preferences for novelty versus similarity to what they have liked in the past. We consider a novel problem in which an algorithm must learn a user’s unknown utility function, in different contexts, over multiple reward objectives for online ranking. We call this problem the Multi-Objective Contextual Ranked Bandit (MOCR-B) problem. To solve the MOCR-B problem, we present a novel algorithm, Multi-Objective Contextual Ranked-UCB (MOR-LinUCB), which generates a ranked list of resources by using context information about users and resources to maximize a user’s specific utility function over multiple reward objectives. We use neural network embeddings to efficiently model user-resource context information for the MOR-LinUCB algorithm. We evaluate MOR-LinUCB empirically, using synthetic data and real-world data from TripAdvisor. Our results reveal that the ranked lists generated by MOR-LinUCB lead to better click-through rates compared to approaches that do not learn the utility function over multiple objectives.
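To make the setting concrete, the following is a minimal sketch of a LinUCB-style ranker over multiple reward objectives, in the spirit of the abstract above. All class and parameter names are hypothetical illustrations, not the paper's actual method: it assumes one per-objective ridge-regression reward model, a utility weight vector over objectives (here a uniform placeholder rather than a learned function), and ranking by the utility-weighted upper confidence bound of each resource's context vector.

```python
import numpy as np

# Hypothetical sketch of a multi-objective LinUCB-style ranker.
# Names and parameters are illustrative assumptions, not the paper's API.
class MultiObjectiveLinUCBRanker:
    def __init__(self, n_objectives, dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        # One ridge-regression model (A, b) per reward objective.
        self.A = [np.eye(dim) for _ in range(n_objectives)]
        self.b = [np.zeros(dim) for _ in range(n_objectives)]
        # The user's unknown utility weights over objectives; the paper
        # learns these per context, here we use a uniform placeholder.
        self.w = np.full(n_objectives, 1.0 / n_objectives)

    def score(self, x):
        # Utility-weighted UCB for one user-resource context vector x.
        total = 0.0
        for k in range(len(self.A)):
            A_inv = np.linalg.inv(self.A[k])
            theta = A_inv @ self.b[k]  # per-objective reward estimate
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            total += self.w[k] * ucb
        return total

    def rank(self, contexts):
        # Return resource indices ordered by decreasing UCB score.
        scores = [self.score(x) for x in contexts]
        return sorted(range(len(contexts)), key=lambda i: -scores[i])

    def update(self, x, rewards):
        # Standard LinUCB update; one observed reward per objective.
        for k, r in enumerate(rewards):
            self.A[k] += np.outer(x, x)
            self.b[k] += r * x
```

In this sketch, the context vector `x` would come from the neural network embeddings of user-resource pairs; ranking by the weighted UCB balances exploiting objectives the user is known to value against exploring uncertain ones.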