

I am a PhD candidate in Computational Linguistics at Stony Brook University, affiliated with the Institute for Advanced Computational Science (IACS). I am fortunate to be advised by Dr. Owen Rambow.
Prior to coming to Stony Brook, I completed a bachelor's degree in Chinese Language and Literature at Hunan University and a master's degree in Applied Linguistics at the University of Saskatchewan.
My research spans large-scale studies of language use, data augmentation, text clustering, explainable AI, and foundation model evaluation. Currently, I focus on evaluating Large (Vision/Speech) Language Models from a human-centric perspective, aiming to bridge technical performance and real-world impact.
News
-
Aug, 2025: I started working at Amazon as an Applied Science intern, researching Large Speech Language Models for the Alexa Central Analytics and Research Science team.
-
Aug, 2025: My first-author paper LVLMs are Bad at Overhearing Human Referential Communication has been accepted to EMNLP 2025 (Main).
-
Aug, 2025: My first-author paper Catch Me If You Can? Not Yet: LLMs Still Struggle to Imitate the Implicit Writing Styles of Everyday Authors has been accepted to EMNLP 2025 (Findings).
-
May, 2025: I started working at Meta as a Software Engineer (Machine Learning) intern. Using Hack (PHP), I built an LVLM-based agentic workflow for trend detection, validation, magnitude labeling, and post content quality rating.
-
May, 2025: My first-author paper LLMs can Perform Multi-Dimensional Analytic Writing Assessments: A Case Study of L2 Graduate-Level Academic English Writing has been accepted to ACL 2025 (Main).
-
Feb, 2025: I advanced to PhD candidacy after passing my second qualifying paper.
-
Aug, 2024: I received the Junior Researcher Award from the Institute for Advanced Computational Science at Stony Brook University.
-
May, 2024: I started working for The Home Depot as a Data Science intern, where I developed various LLM-based systems for topic modeling, classification, and validation.
-
June, 2023: I became a trainee in the Bias-NRT (National Science Foundation Research Traineeship) program at Stony Brook University.
-
Aug, 2022: I started my PhD in Computational Linguistics at Stony Brook University.
Research
For a full and up-to-date list of publications, please check my Google Scholar page.
Clustering Document Parts: Detecting and Characterizing Influence Campaigns from Documents
Zhengxiang Wang, Owen Rambow
The 6th Workshop on NLP+CSS at NAACL 2024
Learning Transductions and Alignments with RNN Seq2seq models
Zhengxiang Wang, ICGI 2023
Developing literature review writing skills through an online writing tutorial series: Corpus-based evidence
Zhi Li, Veronika Makarova, Zhengxiang Wang, Frontiers in Communication, 2023

Random Text Perturbations Work, but not Always
Zhengxiang Wang, AACL-IJCNLP 2022 Workshop Eval4NLP

Thirty-Two Years of IEEE VIS: Authors, Fields of Study and Citations
Hongtao Hao, Yumian Cui, Zhengxiang Wang, Yea-Seul Kim
IEEE Transactions on Visualization and Computer Graphics, 2022
Linguistic Knowledge in Data Augmentation for Natural Language Processing: An Example on Chinese Question Matching
Zhengxiang Wang, ICNLSP 2022