Posts by Collection

Publications

Text-Based Face Retrieval: Methods and Challenges

Published in 2023

Previous research on face retrieval has concentrated on image-based queries. In this paper, we focus on the task of retrieving faces from a database based on queries given as text, which holds significant potential for practical applications in public security and multimedia. Our approach employs a vision-language pre-training model as the backbone, effectively incorporating contrastive learning, image-text matching learning, and masked language modeling tasks, and uses a coarse-to-fine retrieval strategy to enhance the accuracy of text-based face retrieval. We present the CelebA-Text-Identity dataset, comprising 202,599 facial images of 10,178 unique identities, each paired with a textual description. Our experimental results on CelebA-Text-Identity demonstrate the inherent challenges of text-based face retrieval. We expect that our proposed benchmark will …
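
As background for the retrieval pipeline the abstract describes, the snippet below illustrates a generic coarse-to-fine strategy: a fast cosine-similarity pass over the whole gallery in the contrastive embedding space, followed by re-ranking of a shortlist with a heavier cross-modal image-text matching (ITM) head. This is a minimal sketch under stated assumptions, not the paper's implementation; `itm_head` and the tensor shapes are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def coarse_to_fine_retrieve(text_emb, face_embs, itm_head, k=32):
    """Rank gallery faces for one text query (illustrative sketch).

    text_emb:  (d,)   text embedding from the VLP text encoder (assumed precomputed)
    face_embs: (N, d) gallery face embeddings from the image encoder
    itm_head:  assumed cross-modal matching head scoring (text, image) pairs
    """
    # Coarse stage: cosine similarity in the contrastive embedding space.
    sims = F.cosine_similarity(text_emb.unsqueeze(0), face_embs, dim=-1)
    shortlist = sims.topk(min(k, face_embs.size(0))).indices
    # Fine stage: re-score only the shortlist with the heavier ITM head.
    text_batch = text_emb.unsqueeze(0).expand(shortlist.size(0), -1)
    match_scores = itm_head(text_batch, face_embs[shortlist])  # (k,)
    return shortlist[match_scores.argsort(descending=True)]
```

The motivation for the two stages is cost: the coarse pass is a single similarity computation over all N gallery items, while the more expensive matching head only sees the top-k candidates.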

Recommended citation: Yuchuan Deng, Qijun Zhao, Zhanpeng Hu, and Zixiang Xu. Text-Based Face Retrieval: Methods and Challenges. 2023.

DualFocus: Integrating Plausible Descriptions in Text-based Person Re-identification

Published in arXiv preprint arXiv:2405.07459, 2024

Text-based Person Re-identification (TPR) aims to retrieve images of specific individuals from datasets based on textual descriptions. Existing TPR methods primarily focus on recognizing explicit and positive characteristics, often overlooking the role of negative descriptions. This oversight can lead to false positives: images that meet the positive criteria but should be excluded based on the negative descriptions. To address these limitations, we introduce DualFocus, a unified framework that integrates plausible descriptions to enhance the interpretative accuracy of vision-language models in TPR tasks. DualFocus leverages Dual (Positive/Negative) Attribute Prompt Learning (DAPL), which incorporates Dual Image-Attribute Contrastive (DIAC) Learning and Sensitive Image-Attributes Matching (SIAM) Learning, enabling the detection of non-existent attributes and reducing false positives. To balance coarse- and fine-grained alignment of visual and textual embeddings, we propose the Dynamic Tokenwise Similarity (DTS) loss, which refines the representation of both matching and non-matching descriptions, thereby improving the matching process through detailed and adaptable similarity assessments. In comprehensive experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid, DualFocus demonstrates superior performance over state-of-the-art methods, significantly enhancing both precision and robustness in TPR.
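
The abstract describes the Dynamic Tokenwise Similarity (DTS) loss only at a high level. The sketch below shows one plausible reading of a token-wise similarity objective: late-interaction aggregation (each text token matched to its best image patch) feeding a symmetric InfoNCE loss. The aggregation and weighting here are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def tokenwise_similarity_loss(txt_tokens, img_tokens, temperature=0.07):
    """Illustrative token-wise contrastive loss in the spirit of DTS.

    txt_tokens: (B, Lt, d) per-token text embeddings (assumed)
    img_tokens: (B, Li, d) per-patch image embeddings (assumed)
    """
    txt = F.normalize(txt_tokens, dim=-1)
    img = F.normalize(img_tokens, dim=-1)
    # Token-level similarity for every text/image pair in the batch: (B, B, Lt, Li)
    sim = torch.einsum('atd,bvd->abtv', txt, img)
    # Late interaction: each text token takes its best-matching image patch,
    # then token scores are averaged into a pair-level similarity matrix (B, B).
    pair_sim = sim.max(dim=-1).values.mean(dim=-1)
    labels = torch.arange(txt.size(0), device=txt.device)
    logits = pair_sim / temperature
    # Symmetric InfoNCE over matched text-image pairs (text-to-image and back).
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Compared with a single global-embedding similarity, token-level matching of this kind gives fine-grained alignment while the batch-level contrastive objective preserves coarse alignment, which is the balance the abstract attributes to DTS.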

Recommended citation: Yuchuan Deng, Zhanpeng Hu, Jiakun Han, Chuang Deng, and Qijun Zhao. DualFocus: Integrating Plausible Descriptions in Text-based Person Re-identification. arXiv preprint arXiv:2405.07459, 2024.

Empowering Small VLMs to Think with Dynamic Memorization and Exploration

Published in arXiv preprint arXiv:2506.23061, 2025

Empowering Small-scale Vision-Language Models (SVLMs) with reliable thinking capabilities remains fundamentally challenging due to their limited parameter capacity and weak instruction-following abilities. Existing training paradigms, including Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Reward (RLVR), impose substantial demands on the base VLM that exceed the capabilities of SVLMs. Consequently, directly applying these paradigms to SVLMs often produces severe pseudo-thinking traces and advantage collapse, ultimately undermining both thinking reliability and task performance. A natural solution is to combine SFT and RLVR, leveraging their complementarity to reduce the dependence on model capacity. However, the widely adopted two-stage training paradigm still performs poorly on SVLMs, as their tendency toward sub-optimal convergence hinders the trade-off and limits the benefits of the combination. To address this, we propose DyME, a novel training paradigm that Dynamically selects between Memorization (via SFT) and Exploration (via RLVR) modes at each optimization step, ensuring that every update contributes to the trade-off. Extensive experiments across diverse domains demonstrate that DyME consistently achieves this balance and thus delivers substantial performance improvements. These results establish DyME as a practical and effective solution for empowering SVLMs with reliable thinking capabilities. GitHub: https://github.com/HKUST-LongGroup/DyME
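
DyME's core idea, choosing between a memorization (SFT) update and an exploration (RLVR) update at every optimization step, can be sketched as a per-step dispatch. In the sketch below, `probe_fn` is a hypothetical selection criterion (e.g. based on current reward or advantage statistics); the abstract does not specify the paper's actual rule, so treat every name here as an assumption.

```python
import torch

def dyme_style_step(model, batch, optimizer, sft_loss_fn, rlvr_loss_fn, probe_fn):
    """One optimization step with dynamic mode selection (illustrative sketch).

    sft_loss_fn:  supervised loss on verified reasoning traces (memorization)
    rlvr_loss_fn: policy-gradient loss with a verifiable reward (exploration)
    probe_fn:     hypothetical criterion returning 'memorize' or 'explore'
    """
    mode = probe_fn(model, batch)
    if mode == 'memorize':
        # Memorization mode: supervised fine-tuning update.
        loss = sft_loss_fn(model, batch)
    else:
        # Exploration mode: RL update against a verifiable reward.
        loss = rlvr_loss_fn(model, batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return mode, loss.item()
```

The contrast with the two-stage paradigm criticized above is that selection happens per step rather than per phase, so the optimizer never commits to one mode long enough to converge sub-optimally before the other mode can correct it.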

Recommended citation: Jiazhen Liu, Yuchuan Deng, and Long Chen. Empowering Small VLMs to Think with Dynamic Memorization and Exploration. arXiv preprint arXiv:2506.23061, 2025.
