About

Hi there! I’m Baixiang, a PhD student at Emory University, where I have the privilege of working under the guidance of Professor Kai Shu. My work centers on machine learning and NLP, with a particular focus on improving the factuality, safety, and robustness of foundation models.

In my free time, I enjoy spending time in nature and staying active through various outdoor sports. I’m an avid runner and swimmer, and I’ve recently taken up weightlifting. I also find joy in playing the piano and expanding my reading list.

[LinkedIn] [Google Scholar] [GitHub] [Email] [Twitter]

Publications and Preprints

Can Knowledge Editing Really Correct Hallucinations?
Baixiang Huang*, Canyu Chen*, Xiongxiao Xu, Ali Payani, Kai Shu.
International Conference on Learning Representations (ICLR 2025) [arXiv] [GitHub] [Website]

Can Editing LLMs Inject Harm?
Canyu Chen*, Baixiang Huang*, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Wang, Philip Torr, Dawn Song, Kai Shu.
arXiv preprint (2024) [arXiv] [GitHub] [Website]

Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges
Baixiang Huang, Canyu Chen, and Kai Shu.
ACM SIGKDD Explorations (2024) [arXiv] [Paper List] [Website]

Can Large Language Models Identify Authorship?
Baixiang Huang, Canyu Chen, and Kai Shu. 
Empirical Methods in Natural Language Processing (EMNLP 2024 Findings) [arXiv] [GitHub]

TAP: A Comprehensive Data Repository for Traffic Accident Prediction in Road Networks
Baixiang Huang, Bryan Hooi, and Kai Shu. 
ACM SIGSPATIAL (2023) [arXiv] [GitHub]

* Equal Contribution

Medium Blog

How to Visualize Street Networks

Can Editing LLMs Inject Harm? A Deep Dive into New Safety Threats

Authorship Attribution: Why Identifying Who Wrote What is More Important Than Ever in the Age of LLMs