Talk Title: Conditional Self-Supervised Learning for Few-Shot Classification
Abstract: How to learn a transferable feature representation from limited examples is a key challenge for few-shot classification. Self-supervision as an auxiliary task to the main supervised few-shot task is considered a promising way to address this problem, since self-supervision can provide additional structural information that is easily ignored by the main task. However, learning a good representation with traditional self-supervised methods usually depends on large amounts of training data. In few-shot scenarios, due to the lack of sufficient samples, these self-supervised methods may learn a biased representation, which is more likely to misguide the main task and ultimately degrade performance. In this paper, we propose conditional self-supervised learning (CSS), which uses prior knowledge to guide the representation learning of self-supervised tasks. Specifically, CSS leverages the supervised information inherent in labeled data to shape and improve the feature manifold learned by self-supervision, without requiring auxiliary unlabeled data, thereby reducing representation bias and mining more effective semantic information. Moreover, CSS extracts meaningful information through supervised learning and the improved self-supervised learning respectively, and integrates this information into a unified distribution, which further enriches and broadens the original representation. Extensive experiments demonstrate that, without any fine-tuning, our proposed method achieves significant accuracy improvements in few-shot classification scenarios compared to state-of-the-art few-shot learning methods.
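The auxiliary-task setup described in the abstract — a main supervised few-shot loss combined with a weighted self-supervised loss — can be sketched generically. The snippet below is a minimal plain-Python illustration of that general recipe, not the CSS method itself; the rotation-prediction auxiliary task and the weight `lam` are assumptions chosen for illustration.

```python
import math

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example:
    # log-sum-exp(logits) - logits[label].
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def combined_loss(cls_logits, cls_label, ssl_logits, ssl_label, lam=0.5):
    # Main few-shot classification loss plus a weighted auxiliary
    # self-supervised loss (e.g., predicting which of 4 rotations
    # was applied to the input). lam balances the two objectives.
    main = cross_entropy(cls_logits, cls_label)
    aux = cross_entropy(ssl_logits, ssl_label)
    return main + lam * aux
```

In a real training loop both heads would share a feature extractor, so gradients from the auxiliary term also shape the representation used by the main task — the structural role that self-supervision plays here.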
Speaker Bio:
安悦瑄 (Yuexuan An) is a Ph.D. candidate in the PALM Lab, School of Computer Science and Engineering, Southeast University, supervised by Prof. 薛晖 (Hui Xue). Her research interests include machine learning and pattern recognition, with a current focus on few-shot learning and self-supervised learning. She has published multiple papers in major international conferences and journals such as IJCAI and PRJ.