From db1d8fdbace536ae7167fa488d375d60d59d5aa5 Mon Sep 17 00:00:00 2001
From: Jindong Wang
Date: Thu, 16 Jan 2025 14:44:07 -0500
Subject: [PATCH] new preprints

---
 _pages/about.md        |  4 ++--
 _pages/publications.md | 18 ++++++++++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/_pages/about.md b/_pages/about.md
index dbb4d35..df27fe0 100644
--- a/_pages/about.md
+++ b/_pages/about.md
@@ -18,9 +18,9 @@ social: false # includes social icons at the bottom of the page
 Assistant Professor, William & Mary
 jwang80 [at] wm.edu, jindongwang [at] outlook.com
 Integrated Science Center 2273, Williamsburg, VA
-[Google scholar](https://scholar.google.com/citations?&user=hBZ_tKsAAAAJ&view_op=list_works&sortby=pubdate) | [DBLP](https://dblp.org/pid/19/2969-1.html) | [Github](https://github.com/jindongwang) || [Twitter/X](https://twitter.com/jd92wang) | [Zhihu](https://www.zhihu.com/people/jindongwang) | [Wechat](http://jd92.wang/assets/img/wechat_public_account.jpg) | [Bilibili](https://space.bilibili.com/477087194) || [CV](https://go.jd92.wang/cv) [CV (Chinese)](https://go.jd92.wang/cvchinese)
+[Google scholar](https://scholar.google.com/citations?&user=hBZ_tKsAAAAJ&view_op=list_works&sortby=pubdate) | [DBLP](https://dblp.org/pid/19/2969-1.html) | [Github](https://github.com/jindongwang) | [Twitter/X](https://twitter.com/jd92wang) | [LinkedIn](https://www.linkedin.com/in/jindong-wang/) | [Zhihu](https://www.zhihu.com/people/jindongwang) | [Bilibili](https://space.bilibili.com/477087194) || [CV](https://go.jd92.wang/cv) [CV (Chinese)](https://go.jd92.wang/cvchinese)
 
-Dr. Jindong Wang is a Tenure-Track Assistant Professor at William & Mary since 2025. Previously, he has been a Senior Researcher in Microsoft Research Asia for 5.5 years. His research interest includes machine learning, large language and foundation models, and AI for social science. He serves as the associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), area chair for ICML, NeurIPS, ICLR, KDD, ACMMM, and ACML, SPC of IJCAI and AAAI. He has published over 60 papers with 15000+ citations at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, IJCV etc. His research is reported by [Forbes](https://www.forbes.com/sites/lanceeliot/2023/11/11/the-answer-to-why-emotionally-worded-prompts-can-goose-generative-ai-into-better-answers-and-how-to-spur-a-decidedly-positive-rise-out-of-ai/?sh=38038fb137e5), [MIT Technology Review](https://www.mittrchina.com/news/detail/13596), and other international media. Since 2022, he has been selected by Stanford University as one of the [World's Top 2% Scientists](https://ecebm.com/2023/10/04/stanford-university-names-worlds-top-2-scientists-2023/) and one of the [Most Influential AI Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner. He received best and outstanding papers awards at several internation conferences and workshops. He published a book [Introduction to Transfer Learning](http://jd92.wang/tlbook). He gave tutorials at [IJCAI'22](https://dgresearch.github.io/), [WSDM'23](https://dgresearch.github.io/), [KDD'23](https://mltrust.github.io/), [AAAI'24](https://ood-timeseries.github.io/), and AAAI'25. He leads several impactful open-source projects, including [transferlearning](https://github.com/jindongwang/transferlearning), [PromptBench](https://github.com/microsoft/promptbench), [torchSSL](https://github.com/torchssl/torchssl), and [USB](https://github.com/microsoft/Semi-superised-learning), which received over 16K stars on Github.
+Dr. Jindong Wang has been a Tenure-Track Assistant Professor at William & Mary since 2025. Previously, he was a Senior Researcher at Microsoft Research Asia for 5.5 years. His research interests include machine learning, large language and foundation models, and AI for social science. Since 2022, he has been selected by Stanford University as one of the [World's Top 2% Scientists](https://ecebm.com/2023/10/04/stanford-university-names-worlds-top-2-scientists-2023/) and one of the [Most Influential AI Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for ICML, NeurIPS, ICLR, KDD, ACMMM, and ACML, and an SPC member for IJCAI and AAAI. He has published over 60 papers with 15,000+ citations at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by [Forbes](https://www.forbes.com/sites/lanceeliot/2023/11/11/the-answer-to-why-emotionally-worded-prompts-can-goose-generative-ai-into-better-answers-and-how-to-spur-a-decidedly-positive-rise-out-of-ai/?sh=38038fb137e5), [MIT Technology Review](https://www.mittrchina.com/news/detail/13596), and other international media. He received best and outstanding paper awards at several international conferences and workshops. He published the book [Introduction to Transfer Learning](http://jd92.wang/tlbook) and gave tutorials at [IJCAI'22](https://dgresearch.github.io/), [WSDM'23](https://dgresearch.github.io/), [KDD'23](https://mltrust.github.io/), [AAAI'24](https://ood-timeseries.github.io/), and AAAI'25. He leads several impactful open-source projects, including [transferlearning](https://github.com/jindongwang/transferlearning), [PromptBench](https://github.com/microsoft/promptbench), [torchSSL](https://github.com/torchssl/torchssl), and [USB](https://github.com/microsoft/Semi-superised-learning), which have received over 16K stars on GitHub. He obtained his Ph.D. from the University of Chinese Academy of Sciences in 2019 with an excellent Ph.D. thesis award, and a bachelor's degree from North China University of Technology in 2014.
 
 **PhD application in 25 Fall:** [[Visit this page](https://jd92wang.notion.site/Professor-Jindong-Wang-from-William-Mary-is-Recruiting-Fully-Funded-PhD-Students-Interns-for-Fall-12eb4ea70d8e803cadebd1a9b75fd739?pvs=4)] [[中文版](https://zhuanlan.zhihu.com/p/4827065042)] | [Internship or collaboration](https://forms.gle/zRcWP49qF9aR1VXW8)
diff --git a/_pages/publications.md b/_pages/publications.md
index 9463fb4..fafdd12 100644
--- a/_pages/publications.md
+++ b/_pages/publications.md
@@ -9,15 +9,29 @@ nav: true
 
 [[Google scholar](https://scholar.google.com/citations?user=hBZ_tKsAAAAJ)] | [[DBLP](https://dblp.org/pid/19/2969-1.html)] | [[View by topic](https://jd92.wang/research/)]
 
-#### Preprints
-
+#### Recent Preprints
+
+- CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries. Shudong Liu, Yiqiao Jin, Cheng Li, Derek F. Wong, Qingsong Wen, Lichao Sun, Haipeng Chen, Xing Xie, Jindong Wang. [[arxiv](https://arxiv.org/abs/2501.01282)]
+- MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders. Cheng Li, May Fung, Qingyun Wang, Chi Han, Manling Li, Jindong Wang, Heng Ji. [[arxiv](https://arxiv.org/abs/2410.06845)]
+- StringLLM: Understanding the String Processing Capability of Large Language Models. Xilong Wang, Hao Fu, Jindong Wang, Neil Zhenqiang Gong. [[arxiv](https://arxiv.org/abs/2410.01208)]
+- On the Diversity of Synthetic Data and its Impact on Training Large Language Models. Hao Chen, Abdul Waheed, Xiang Li, Yidong Wang, Jindong Wang, Bhiksha Raj, Marah I. Abdin. [[arxiv](https://arxiv.org/abs/2410.15226)]
+- SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer. Hao Chen, Ze Wang, Xiang Li, Ximeng Sun, Fangyi Chen, Jiang Liu, Jindong Wang, Bhiksha Raj, Zicheng Liu, Emad Barsoum. [[arxiv](https://arxiv.org/abs/2412.10958)]
+- Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist. Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang, Qiufeng Wang, Kaizhu Huang. [[arxiv](https://arxiv.org/abs/2407.08733)]
+- Social Science Meets LLMs: How Reliable Are Large Language Models in Social Simulations? Yue Huang, Zhengqing Yuan, Yujun Zhou, Kehan Guo, Xiangqi Wang, Haomin Zhuang, Weixiang Sun, Lichao Sun, Jindong Wang, Yanfang Ye, Xiangliang Zhang. [[arxiv](https://arxiv.org/abs/2410.23426)]
 - CycleResearcher: Improving Automated Research via Automated Review. Yixuan Weng, Minjun Zhu, Guangsheng Bao, Hongbo Zhang, Jindong Wang, Yue Zhang, Linyi Yang. [[arxiv](https://arxiv.org/abs/2411.00816)]
 - Scito2M: A 2 Million, 30-Year Cross-disciplinary Dataset for Temporal Scientometric Analysis. Yiqiao Jin, Yijia Xiao, Yiyang Wang, Jindong Wang. [[arxiv](https://arxiv.org/abs/2410.09510)]
+- Can I understand what I create? Self-Knowledge Evaluation of Large Language Models. Zhiquan Tan, Lai Wei, Jindong Wang, Xing Xie, Weiran Huang. [[arxiv](https://arxiv.org/abs/2406.06140)]
+- Learning with noisy foundation models. Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj. [[arxiv](https://arxiv.org/abs/2403.06869)]
+
+More preprints
 - Meta Semantic Template for Evaluation of Large Language Models. Yachuan Liu, Liang Chen, Jindong Wang, Qiaozhu Mei, Xing Xie. [[arxiv](https://arxiv.org/abs/2310.01448)]
 - Frustratingly Easy Model Generalization by Dummy Risk Minimization. Juncheng Wang, Jindong Wang, Xixu Hu, Shujun Wang, Xing Xie. [[arxiv](https://arxiv.org/abs/2308.02287)]
 - Equivariant Disentangled Transformation for Domain Generalization under Combination Shift. Yivan Zhang, Jindong Wang, Xing Xie, and Masashi Sugiyama. [[arxiv](https://arxiv.org/abs/2208.02011)]
 - Learning Invariant Representations across Domains and Tasks. Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, and Tie-Yan Liu. [[arxiv](https://arxiv.org/abs/2103.05114)]
 - Learning to match distributions for domain adaptation. Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, and Tie-Yan Liu. [[arxiv](https://arxiv.org/abs/2007.10791)]
+
 
 #### Books