FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning
Author(s)
Bai, Yunwei; Cai, Bill Yang; Tan, Ying Kiat; Zheng, Zangwei; Chen, Shiming; Chen, Tsuhan; et al.
Terms of use
Publisher with Creative Commons License: Creative Commons Attribution
Abstract
Few-shot learning (FSL) typically trains models on data from one set of classes but tests them on data from a different, unseen set of classes, with only a few labeled support samples of the unseen classes provided as a reference for the trained model. Due to the lack of target-relevant training data, generalization error on the test classes is usually high. In this work, we conduct empirical explorations and propose an ensemble method (QuickBoost) that is efficient and effective at improving the generalization of FSL. Specifically, QuickBoost combines an alternative-architecture pretrained encoder with a one-vs-all binary classifier (FSL-Forest) based on the random forest algorithm, and is ensembled with off-the-shelf FSL models via logit-level averaging. Experiments on three benchmarks demonstrate that our method achieves state-of-the-art performance with good efficiency. Code is available at https://github.com/WendyBaiYunwei/FSL-QuickBoost.
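As a rough, self-contained Python sketch of the two ingredients named in the abstract (one-vs-all random forests over pretrained-encoder features, and logit-level averaging with an existing FSL model's scores): the function names, the equal 0.5 weighting, and the softmax normalization below are illustrative assumptions, not the authors' exact recipe; consult the linked repository for the actual implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_scores(support_feats, support_labels, query_feats):
    """Per-episode one-vs-all random forests over encoder features.

    support_feats: (n_support, d) features from the auxiliary pretrained encoder.
    support_labels: (n_support,) integer class ids within the episode.
    query_feats: (n_query, d) query features from the same encoder.
    Returns a (n_query, n_way) score matrix, one column per episode class.
    """
    classes = np.unique(support_labels)
    scores = np.zeros((query_feats.shape[0], classes.size))
    for i, c in enumerate(classes):
        rf = RandomForestClassifier(n_estimators=50, random_state=0)
        rf.fit(support_feats, (support_labels == c).astype(int))  # one-vs-all binary target
        scores[:, i] = rf.predict_proba(query_feats)[:, 1]        # probability of "class == c"
    return scores

def ensemble_predict(fsl_logits, rf_scores, alpha=0.5):
    """Logit-level averaging: normalize each model's scores, then average."""
    def softmax(x):  # put both score matrices on a comparable per-row scale
        x = x - x.max(axis=1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=1, keepdims=True)
    combined = alpha * softmax(fsl_logits) + (1 - alpha) * softmax(rf_scores)
    return combined.argmax(axis=1)

# Usage on synthetic data (5-way 5-shot episode, 64-d features, 15 queries);
# fsl_logits stands in for the scores of an off-the-shelf FSL model.
rng = np.random.default_rng(0)
sup_x, sup_y = rng.normal(size=(25, 64)), np.repeat(np.arange(5), 5)
qry_x = rng.normal(size=(15, 64))
fsl_logits = rng.normal(size=(15, 5))
print(ensemble_predict(fsl_logits, forest_scores(sup_x, sup_y, qry_x)))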
Description
MM ’24, October 28-November 1, 2024, Melbourne, VIC, Australia
Date issued
2024-10-28
Publisher
ACM, Proceedings of the 32nd ACM International Conference on Multimedia
Citation
Bai, Yunwei, Cai, Bill Yang, Tan, Ying Kiat, Zheng, Zangwei, Chen, Shiming, et al. 2024. "FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning." In Proceedings of the 32nd ACM International Conference on Multimedia (MM '24). ACM.
Version: Final published version
ISBN
979-8-4007-0686-8