RECE: Reduced Cross-Entropy Loss for Large-Catalogue Sequential Recommenders
Author(s)
Gusak, Danil; Mezentsev, Gleb; Oseledets, Ivan; Frolov, Evgeny
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Scalability is a major challenge in modern recommender systems. In sequential recommendation, the full Cross-Entropy (CE) loss achieves state-of-the-art recommendation quality but consumes excessive GPU memory on large item catalogs, limiting its practicality. This paper introduces RECE (REduced Cross-Entropy), a novel loss that uses a GPU-efficient, locality-sensitive-hashing-like algorithm to approximate the large tensor of logits. RECE significantly reduces memory consumption while preserving the state-of-the-art performance of the full CE loss. Experimental results on several datasets show that RECE cuts peak training memory usage by up to 12 times compared to existing methods while matching or exceeding the performance metrics of CE loss. The approach also opens up new possibilities for large-scale applications in other domains.
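To make the idea behind the abstract concrete, here is a minimal sketch of the general technique it describes: avoiding the full (batch × catalog) logit tensor by scoring each sequence representation only against a reduced candidate set selected with a locality-sensitive hash. This is not the authors' RECE implementation; the SimHash (random-hyperplane) scheme, the function name `lsh_reduced_ce`, and the parameters `n_bits` and `n_candidates` are illustrative assumptions standing in for the paper's GPU-efficient LSH-like algorithm.

```python
# Illustrative sketch only: LSH-reduced cross-entropy for a large catalog.
# Rather than materializing (B, N) logits over the whole catalog, each query
# is scored against the n_candidates items whose SimHash codes agree most
# with its own, plus the ground-truth item. All names and sizes here are
# assumptions, not the paper's actual RECE algorithm.

import torch
import torch.nn.functional as F

def lsh_reduced_ce(queries, item_emb, targets, n_bits=8, n_candidates=256):
    """queries: (B, d) sequence representations; item_emb: (N, d) catalog
    embeddings; targets: (B,) ground-truth item ids. Requires n_candidates <= N."""
    B, d = queries.shape

    # Random hyperplanes define a SimHash: the sign pattern is the hash code.
    planes = torch.randn(d, n_bits, device=queries.device)
    item_codes = (item_emb @ planes > 0)   # (N, n_bits) bool
    query_codes = (queries @ planes > 0)   # (B, n_bits) bool

    # Hamming agreement between each query code and all item codes.
    # (Dense comparison keeps the sketch short; a real implementation
    # would bucket items by code instead of scanning the catalog.)
    match = (query_codes.unsqueeze(1) == item_codes.unsqueeze(0)).sum(-1)  # (B, N)

    # Keep the items whose codes agree most with each query.
    cand = match.topk(n_candidates, dim=1).indices                         # (B, C)

    # Ensure the positive item is always a candidate (column 0). A real
    # implementation would also deduplicate if the target reappears later.
    cand[:, 0] = targets

    # Logits only over the reduced set: (B, C) instead of (B, N).
    logits = torch.einsum('bd,bcd->bc', queries, item_emb[cand])

    # The positive sits at column 0 of every row.
    return F.cross_entropy(
        logits, torch.zeros(B, dtype=torch.long, device=queries.device)
    )
```

With `n_candidates` fixed, the logit tensor scales with the candidate count rather than the catalog size, which is the source of the memory savings the abstract reports; the quality of the approximation depends on how well the hash concentrates hard negatives into the candidate set.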
Date issued
2024-10-21
Publisher
ACM | Proceedings of the 33rd ACM International Conference on Information and Knowledge Management
Citation
Gusak, Danil, Mezentsev, Gleb, Oseledets, Ivan, and Frolov, Evgeny. 2024. "RECE: Reduced Cross-Entropy Loss for Large-Catalogue Sequential Recommenders." In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. ACM.
Version: Final published version
ISBN
979-8-4007-0436-9