I have published several papers at top-tier conferences, including NeurIPS, EMNLP, NAACL, and ICCV, and received the Best Paper Award at an ICLR workshop. My research spans multiple areas, with a focus on Human-AI Interaction (UCFE-Benchmark), Multi-Agent Systems (TwinMarket), AI for Scientific Applications (Open-FinLLMs), Trustworthy NLP (SAE-free), and Socially Aware NLP.
Currently, I am seeking a PhD position (2026 Fall).
I am also open to research collaborations. If you are interested in my work or would like to discuss potential collaboration opportunities, please feel free to reach out and schedule time with me.
08/2024: A financial foundation model, FinLLaMA-8B, and a multimodal model, FinLLaVA-8B, were released.
03/2024: My first publication (FAST-CA) was accepted by Information Fusion (2024).
Research
My vision is to build reliable and trustworthy AI systems that serve as a bridge between machine intelligence and human society. I am dedicated to enhancing the social intelligence of (vision) language models, enabling them not only to understand the principles of the physical world but also to grasp complex social environments, so that they can interact with humans reliably and meaningfully.
TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets
Proceedings of NeurIPS 2025; Best Paper Award, ICLR 2025 Workshop on Advances in Financial AI
A multi-agent framework that leverages LLMs to simulate socio-economic systems
Paper / Code / Project Page
Yuzhe Yang*, Yifei Zhang*, Yan Hu*, et al.
UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models
Findings of NAACL 2025
A user-centric framework designed to evaluate LLMs' ability to handle complex financial tasks
Paper / Code / Dataset
Zihao Li*, Xu Wang*, Yuzhe Yang, et al.
Feature Extraction and Steering for Enhanced Chain-of-Thought Reasoning in Language Models
Proceedings of EMNLP 2025
An SAE-free method that enhances LLM reasoning by steering activations with Chain-of-Thought features, without requiring external data
Paper
Jiaqi Wu, Simin Chen, Jing Tang, Yuzhe Yang, et al.
FDPT: Federated Discrete Prompt Tuning for Black-Box Visual-Language Models
Proceedings of ICCV 2025
A federated prompt tuning approach for black-box visual-language models
Paper
School of Data Science, CUHK-Shenzhen, 2023.08 – 2024.06
Undergraduate Research Assistant, advised by Prof. Jianfeng Mao
Undergraduate Research Award (2024, 2025)
Miscellaneous
📸 I am an amateur photographer with an interest in digital, film, and aerial photography, and a special passion for landscape photography 🏔️. You can find my photos on Unsplash 🎞️. You can also visit my HDR photo gallery.