“I Won the Election!”: An Empirical Analysis of Soft Moderation Interventions on Twitter
Researchers set out to study how users interact with soft moderation on Twitter, using a mixed-methods approach combining quantitative and qualitative analysis. They noted that previous research, although it produced insightful findings such as the implied truth effect, the backfire effect, and the illusory truth effect, was conducted in hypothetical settings. Hence, in this study, the researchers targeted tweets associated with the 2020 US presidential election to answer the following research questions:
- What are the various warning labels used on Twitter during the 2020 US elections and what kind of users have their tweets flagged more frequently? Are there differences across political leanings?
- Is the engagement of content that includes warning labels significantly different from the content without warning labels?
- How do users on Twitter interact with content that includes warning labels?
Tags
CSCW (Computer-supported cooperative work)
Computing Sciences
Related
Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact
The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings
Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa
You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online
Mobilizing Users: Does Exposure to Misinformation and Its Correction Affect Users’ Responses to a Health Misinformation Post?
Trustworthy misinformation mitigation with soft information nudging
Community-Based Fact-Checking on Twitter's Birdwatch Platform
Nudge Effect of Fact-Check Alerts: Source Influence and Media Skepticism on Sharing of News Misinformation in Social Media
Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention
FeedReflect: A Tool for Nudging Users to Assess News Credibility on Twitter
Exploring lightweight interventions at posting time to reduce the sharing of misinformation on social media
Understanding and Reducing the Spread of Misinformation Online
Diffusion and persistence of false rumors in social media networks: implications of searchability on rumor self-correction on Twitter
Shifting attention to accuracy can reduce misinformation online
Privacy Nudges for Social Media: An Exploratory Facebook Study
Field experiments on social media
Political fact-checking on Twitter: When do corrections have an effect?
Social correction across party lines in a Twitter field experiment
Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment
SMS advertising: How message relevance is linked to the attitude toward the brand?