About

Hi there! I’m Baixiang, a CS PhD student at Emory University, advised by Dr. Kai Shu. My research focuses on improving the factuality, safety, and robustness of foundation models, particularly through model editing, a technique for making precise, efficient modifications to large language models while preserving their overall capabilities.
Previously, I worked on authorship attribution, which aims to identify the author of a text from their distinctive writing style.
In my free time, I enjoy spending time in nature and staying active through various outdoor sports. I’m an avid runner and swimmer, and I’ve recently taken up weightlifting. I also find joy in playing the piano and expanding my reading list.
[LinkedIn] [Google Scholar] [GitHub] [Email] [Twitter]
Publications and Preprints
Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm
Baixiang Huang, Zhen Tan, Haoran Wang, Zijie Liu, Dawei Li, Ali Payani, Huan Liu, Tianlong Chen, and Kai Shu.
arXiv preprint (2025) [arXiv] [GitHub] [Website]
Can Knowledge Editing Really Correct Hallucinations?
Baixiang Huang, Canyu Chen, Xiongxiao Xu, Ali Payani, and Kai Shu.
International Conference on Learning Representations (ICLR 2025) [arXiv] [GitHub] [Website]
SST: Multi-Scale Hybrid Mamba-Transformer Experts for Long-Short Range Time Series Forecasting
Xiongxiao Xu, Canyu Chen, Yueqing Liang, Baixiang Huang, Guangji Bai, Liang Zhao, and Kai Shu.
Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM 2025) [arXiv] [GitHub]
Who's Your Judge? On the Detectability of LLM-Generated Judgments
Dawei Li, Zhen Tan, Chengshuai Zhao, Bohan Jiang, Baixiang Huang, Pingchuan Ma, Abdullah Alnaibari, Kai Shu, and Huan Liu.
arXiv preprint (2025) [arXiv] [GitHub] [Website]
Privacy-Aware Decoding: Mitigating Privacy Leakage of Large Language Models in Retrieval-Augmented Generation
Haoran Wang, Xiongxiao Xu, Baixiang Huang, and Kai Shu.
arXiv preprint (2025) [arXiv] [GitHub]
Can Editing LLMs Inject Harm?
Canyu Chen*, Baixiang Huang*, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Wang, Philip Torr, Dawn Song, and Kai Shu.
arXiv preprint (2024) [arXiv] [GitHub] [Website]
Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges
Baixiang Huang, Canyu Chen, and Kai Shu.
ACM SIGKDD Explorations (2024) [arXiv] [Paper List] [Website]
Can Large Language Models Identify Authorship?
Baixiang Huang, Canyu Chen, and Kai Shu.
Conference on Empirical Methods in Natural Language Processing (EMNLP 2024 Findings) [arXiv] [GitHub]
TAP: A Comprehensive Data Repository for Traffic Accident Prediction in Road Networks
Baixiang Huang, Bryan Hooi, and Kai Shu.
ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2023) [arXiv] [GitHub]
* Equal Contribution
Tools and Resources
BibTeX to Markdown Converter: Convert BibTeX files into clean Markdown paper lists with PDF links
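For a sense of what the converter does, here is a minimal sketch of this kind of BibTeX-to-Markdown conversion in Python. It is not the tool's actual implementation: the regex-based parsing is simplified and assumes `key = {value}` fields without nested braces, and the sample entry and URL are purely illustrative.

```python
import re

def bibtex_to_markdown(bibtex: str) -> str:
    """Convert BibTeX entries into a Markdown bullet list of papers.

    Simplified sketch: assumes fields are written as `key = {value}`
    with no nested braces, and links a PDF via the `url` field if present.
    """
    items = []
    for entry in re.finditer(r"@\w+\{[^@]+\}", bibtex):
        # Collect all `key = {value}` pairs in this entry into a dict.
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry.group()))
        title = fields.get("title", "Untitled")
        author = fields.get("author", "Unknown")
        year = fields.get("year", "n.d.")
        line = f"- **{title}**, {author} ({year})"
        if "url" in fields:
            line += f" [[PDF]({fields['url']})]"
        items.append(line)
    return "\n".join(items)

if __name__ == "__main__":
    # Illustrative entry only; the URL is a placeholder.
    sample = """@article{huang2024example,
      title = {Authorship Attribution in the Era of LLMs},
      author = {Huang, Baixiang and Chen, Canyu and Shu, Kai},
      year = {2024},
      url = {https://example.org/paper.pdf}
    }"""
    print(bibtex_to_markdown(sample))
```

Running the sketch on the sample entry prints a single Markdown bullet with the title in bold, the author string, the year, and a PDF link.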
Medium Blog
How to Visualize Street Networks
Can Editing LLMs Inject Harm? A Deep Dive into New Safety Threats