I am a PhD candidate in Computational Linguistics at Stony Brook University, affiliated with the Institute for Advanced Computational Science (IACS). I am fortunate to be advised by Dr. Owen Rambow.

Prior to coming to Stony Brook, I completed a bachelor's degree in Chinese Language and Literature from Hunan University, and a master's degree in Applied Linguistics from the University of Saskatchewan.

My research spans large-scale studies of language use, data augmentation, text clustering, explainable AI, and foundation model evaluation. Currently, I focus on evaluating Large (Vision/Speech) Language Models from a human-centric perspective, aiming to bridge technical performance with real-world impact. I am a lifelong learner at heart, so my research topics keep evolving over time.

News

  • Aug, 2025: I started working for Amazon as an Applied Scientist intern. I research Large Speech Language Models for the Central Analytics and Research Science team under Alexa AI.

  • Aug, 2025: My first-author paper LVLMs are Bad at Overhearing Human Referential Communication has been accepted to EMNLP 2025 (Main).

  • Aug, 2025: My first-author paper Catch Me If You Can? Not Yet: LLMs Still Struggle to Imitate the Implicit Writing Styles of Everyday Authors has been accepted to EMNLP 2025 (Findings).

  • May, 2025: I started working for Meta as a Software Engineer (Machine Learning) intern. Using Hack (PHP), I built an LVLM-based agentic workflow for trend detection, validation, magnitude labeling, and post content quality rating.

  • May, 2025: My first-author paper LLMs can Perform Multi-Dimensional Analytic Writing Assessments: A Case Study of L2 Graduate-Level Academic English Writing has been accepted to ACL 2025 (Main).

  • Feb, 2025: I advanced to PhD candidacy after passing my second qualifying paper.

  • Aug, 2024: I received the Junior Researcher Award from the Institute for Advanced Computational Science at Stony Brook University.

  • May, 2024: I started working for the Home Depot as a Data Science intern where I developed various LLM-based systems for topic modeling, classification, and validation.

  • June, 2023: I became a trainee for the Bias-NRT (National Science Foundation Research Traineeship) program at Stony Brook University.

  • Aug, 2022: I started my PhD in Computational Linguistics at Stony Brook University.

Research

For a full and up-to-date list of publications, please check my Google Scholar page.

LLMs can Perform Multi-Dimensional Analytic Writing Assessments: A Case Study of L2 Graduate-Level Academic English Writing
Zhengxiang Wang, Veronika Makarova, Zhi Li, Jordan Kodner, Owen Rambow, ACL 2025 (Main)

Evaluating LLMs with Multiple Problems at once
Zhengxiang Wang, Jordan Kodner, Owen Rambow, GEM2 2025

Learning Transductions and Alignments with RNN Seq2seq models
Zhengxiang Wang, ICGI 2023

Developing literature review writing skills through an online writing tutorial series: Corpus-based evidence
Zhi Li, Veronika Makarova, Zhengxiang Wang, Frontiers in Communication, 2023

Linguistic Knowledge in Data Augmentation for Natural Language Processing: An Example on Chinese Question Matching
Zhengxiang Wang, ICNLSP 2022

A macroscopic re-examination of language and gender: a corpus-based case study in the university classroom setting
Zhengxiang Wang, MA thesis, University of Saskatchewan, 2021