A curated paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.
An LLM can serve as a (sometimes unreliable) knowledge provider, an experienced expert in specific areas, or a relatively cheap data generator (compared with collecting data from the real world). For example, an LLM can be a good analyzer of social commonsense and conventions.
- Cheap-fake Detection with LLM using Prompt Engineering[paper]
- Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation[paper]
- Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection[paper]
- Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model[paper]
- Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision[paper]
- FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model[paper]
- Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- Clean-label Poisoning Attack against Fake News Detection Models[paper]
- Rumor Detection on Social Media with Crowd Intelligence and ChatGPT-Assisted Networks[paper]
- LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback[paper]
- Can Large Language Models Detect Rumors on Social Media?[paper]
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection[paper]
- DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection[paper]
- Enhancing large language model capabilities for rumor detection with Knowledge-Powered Prompting[paper]
- An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model[paper]
- RumorLLM: A Rumor Large Language Model-Based Fake-News-Detection Data-Augmentation Approach[paper]
- Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom[paper]
- Message Injection Attack on Rumor Detection under the Black-Box Evasion Setting Using Large Language Model[paper]
- Towards Robust Evidence-Aware Fake News Detection via Improving Semantic Perception[paper]
- Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection[paper]
- Zero-Shot Fact Verification via Natural Logic and Large Language Models[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- FramedTruth: A Frame-Based Model Utilising Large Language Models for Misinformation Detection[paper]
- Enhancing Fake News Detection through Dataset Augmentation Using Large Language Models[Thesis]
- DAAD: Dynamic Analysis and Adaptive Discriminator for Fake News Detection[paper]
- CoVLM: Leveraging Consensus from Vision-Language Models for Semi-supervised Multimodal Fake News Detection[paper]
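The "LLM as cheap data generator" role described above can be sketched as follows. This is a minimal illustration, not any specific paper's method; `call_llm` is a hypothetical stub standing in for a real model API call.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (e.g., a hosted model endpoint).
    return "Stubbed paraphrase of: " + prompt.split("CLAIM:")[-1].strip()

def augment_fake_claims(claims, n_variants=2):
    """Return (text, label=1) pairs: each known fake claim plus LLM-written
    style variants of it, for training a downstream detector."""
    augmented = []
    for claim in claims:
        augmented.append((claim, 1))  # keep the original fake claim
        for _ in range(n_variants):
            prompt = ("Rewrite the following false claim in a different style.\n"
                      f"CLAIM: {claim}")
            augmented.append((call_llm(prompt), 1))  # variants keep the 'fake' label
    return augmented
```

A small classifier trained on the augmented pairs would then see more stylistic variety than the raw corpus provides.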
An LLM can also act as an agent with access to external tools such as search engines, deepfake detectors, etc.
- Fact-Checking Complex Claims with Program-Guided Reasoning[paper]
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models[paper]
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios[paper]
- FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking[paper]
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method[paper]
- Evidence-based Interpretable Open-domain Fact-checking with Large Language Models[paper]
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection[paper]
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation[paper]
- Can Large Language Models Detect Misinformation in Scientific News Reporting?[paper]
- The Perils and Promises of Fact-Checking with Large Language Models[paper]
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection[paper]
- Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors[paper]
- MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation[paper]
- TrumorGPT: Query Optimization and Semantic Reasoning over Networks for Automated Fact-Checking[paper]
- Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM[paper]
- Large Language Model Agent for Fake News Detection[paper]
- Argumentative Large Language Models for Explainable and Contestable Decision-Making[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- Multimodal Misinformation Detection using Large Vision-Language Models[paper]
- Detect, Investigate, Judge and Determine: A Novel LLM-based Framework for Few-shot Fake News Detection[paper]
- LLM-Driven External Knowledge Integration Network for Rumor Detection[paper]
- Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs[paper]
- Web Retrieval Agents for Evidence-Based Misinformation Detection[paper]
- Real-time Fake News from Adversarial Feedback[paper]
- Resolving Unseen Rumors with Retrieval-Augmented Large Language Models[paper]
- Do not wait: Preemptive rumor detection with cooperative LLMs and accessible social context[paper]
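The tool-augmented agent setting above can be sketched as a retrieve-then-judge loop. This is an illustrative toy, not a real system; `search` and `call_llm` are hypothetical stand-ins for a search-engine tool and a model endpoint.

```python
def search(query: str) -> list[str]:
    # Stand-in for a search-engine tool; returns retrieved evidence snippets.
    return [f"(snippet about: {query})"]

def call_llm(prompt: str) -> str:
    # Stand-in for an LLM call; replies with a verdict once evidence is present.
    return "SUPPORTED" if "evidence:" in prompt.lower() else "INSUFFICIENT"

def agent_verdict(claim: str) -> str:
    evidence = search(claim)  # the agent's tool call
    prompt = (f"Claim: {claim}\n"
              f"Evidence: {' '.join(evidence)}\n"
              "Answer SUPPORTED or REFUTED.")
    return call_llm(prompt)   # verdict grounded in retrieved evidence
```

Real agent frameworks iterate this loop, letting the LLM decide which tool to invoke next before committing to a verdict.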
An LLM can directly output the final prediction and, optionally, an explanation.
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity[paper]
- Large Language Models Can Rate News Outlet Credibility[paper]
- Fact-Checking Complex Claims with Program-Guided Reasoning[paper]
- Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4[paper]
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models[paper]
- News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking[paper]
- Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model[paper]
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model[paper]
- Can Large Language Models Understand Content and Propagation for Misinformation Detection: An Empirical Study[paper]
- Are Large Language Models Good Fact Checkers: A Preliminary Study[paper]
- A Revisit of Fake News Dataset with Augmented Fact-checking by ChatGPT[paper]
- Can Large Language Models Detect Rumors on Social Media?[paper]
- DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection[paper]
- Assessing the Reasoning Abilities of ChatGPT in the Context of Claim Verification[paper]
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation[paper]
- SoMeLVLM: A Large Vision Language Model for Social Media Processing[paper][project]
- Can Large Language Models Detect Misinformation in Scientific News Reporting?[paper]
- The Perils and Promises of Fact-Checking with Large Language Models[paper]
- Potential of Large Language Models as Tools Against Medical Disinformation[paper]
- FakeNewsGPT4: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs[paper]
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection[paper]
- Multimodal Large Language Models to Support Real-World Fact-Checking[paper]
- MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation[paper]
- An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model[paper]
- Explaining Misinformation Detection Using Large Language Models[paper]
- Rumour Evaluation with Very Large Language Models[paper]
- Argumentative Large Language Models for Explainable and Contestable Decision-Making[paper]
- Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines[paper]
- Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models[paper]
- Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction[paper]
- Reinforcement Tuning for Detecting Stances and Debunking Rumors Jointly with Large Language Models[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- Multilingual Fact-Checking using LLM[paper]
- Multimodal Misinformation Detection using Large Vision-Language Models[paper]
- Detect, Investigate, Judge and Determine: A Novel LLM-based Framework for Few-shot Fake News Detection[paper]
- Large Visual-Language Models Are Also Good Classifiers: A Study of In-Context Multimodal Fake News Detection[paper]
- Silver Lining in the Fake News Cloud: Can Large Language Models Help Detect Misinformation?[paper]
- Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs[paper]
- CoVLM: Leveraging Consensus from Vision-Language Models for Semi-supervised Multimodal Fake News Detection[paper]
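Direct prediction typically amounts to a single zero-shot prompt that requests a label plus a free-text explanation, then parses the reply. A minimal sketch, with `call_llm` as a hypothetical stub returning a reply in the requested format:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call, answering in the requested format.
    return "LABEL: fake\nEXPLANATION: The claim contradicts known reporting."

def detect(claim: str) -> tuple[str, str]:
    """Ask the LLM for a verdict and explanation, then parse both out."""
    prompt = (
        "Decide whether the claim is real or fake news.\n"
        f"Claim: {claim}\n"
        "Reply as:\nLABEL: <real|fake>\nEXPLANATION: <one sentence>"
    )
    reply = call_llm(prompt)
    fields = dict(line.split(": ", 1) for line in reply.splitlines())
    return fields["LABEL"], fields["EXPLANATION"]
```

Parsing a structured reply like this is fragile in practice; many of the papers above instead constrain the output format or extract the label with a verifier step.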
- Are Large Language Models Good Fact Checkers: A Preliminary Study[paper]
- Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines?[paper]
- Automated Claim Matching with Large Language Models: Empowering Fact-Checkers in the Fight Against Misinformation[paper]
- SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks[paper]
- JustiLM: Few-shot Justification Generation for Explainable Fact-Checking of Real-world Claims[paper]
- Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate[paper]
- [Fake News Propagation Simulation] From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News[paper]
- [Misinformation Correction] Correcting Misinformation on Social Media with A Large Language Model[paper]
- [Fake News Data Annotation] Enhancing Text Classification through LLM-Driven Active Learning and Human Annotation[paper]
- [Assisting Human Fact-Checking] On the Role of Large Language Models in Crowdsourcing Misinformation Assessment[paper]
- [Attacking Misinformation Detection] Fake News in Sheep's Clothing: Robust Fake News Detection Against LLM-Empowered Style Attacks[paper]
- [Attacking Misinformation Detection] Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models[paper]
- Preventing and Detecting Misinformation Generated by Large Language Models: A tutorial on prevention and detection techniques for LLM-generated misinformation, including an introduction to recent advances in LLM-based misinformation detection. [Webpage] [Slides]
- Large-Language-Model-Powered Agent-Based Framework for Misinformation and Disinformation Research: Opportunities and Open Challenges: A research framework that generates customized agent-based social networks for disinformation simulations, enabling understanding and evaluation of the phenomenon, with a discussion of open challenges.[paper]
- Combating Misinformation in the Age of LLMs: Opportunities and Challenges: A survey of the opportunities (can we utilize LLMs to combat misinformation?) and challenges (how can we combat LLM-generated misinformation?) of fighting misinformation in the age of LLMs. [Project Webpage][paper]