Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa
- Researchers studied how spoken soft moderation delivered through Amazon Alexa changes Twitter users’ perceptions of misinformation, motivated by the observation that people generally do not question Alexa’s answers
- The researchers built an Alexa skill that deliberately distorts information, telling users something entirely different from the intended truth in order to mislead them. The skill was originally designed to present two different perspectives on government regulations.
- The study seeks to understand whether Alexa skills can also be used to reduce misinformation and divert user attention away from it, by testing three conditions:
  - Alexa reads the warning label first, then repeats the misinformation tweet
  - Alexa repeats the misinformation tweet first, then reads the warning label
  - Alexa says nothing about the warning label and simply reads the tweet containing misinformation
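The three conditions above amount to different orderings of the warning label relative to the misinformation tweet. A minimal sketch of how a voice skill might compose its spoken response per condition (all names and the warning wording are hypothetical, not taken from the paper):

```python
# Hypothetical illustration of the study's three soft-moderation conditions.
# The warning text and function names are assumptions, not the paper's actual skill.

WARNING = "This claim about COVID-19 is disputed by public-health authorities."

def compose_response(tweet_text: str, condition: str) -> str:
    """Return the utterance the skill would speak for a given condition."""
    if condition == "warning_first":
        # Condition 1: warning label, then the misinformation tweet
        return f"{WARNING} The tweet says: {tweet_text}"
    if condition == "warning_after":
        # Condition 2: misinformation tweet, then the warning label
        return f"The tweet says: {tweet_text} {WARNING}"
    if condition == "no_warning":
        # Condition 3: tweet only, no warning label
        return f"The tweet says: {tweet_text}"
    raise ValueError(f"unknown condition: {condition}")
```

The ordering manipulation is the whole experiment: the same tweet and the same label, with only their sequence (or the label’s absence) varying between groups.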
Tags
CSCW (Computer-supported cooperative work)
Computing Sciences
Related
Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact
The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings
“I Won the Election!”: An Empirical Analysis of Soft Moderation Interventions on Twitter
You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online
Mobilizing Users: Does Exposure to Misinformation and Its Correction Affect Users’ Responses to a Health Misinformation Post?
Trustworthy misinformation mitigation with soft information nudging
Community-Based Fact-Checking on Twitter's Birdwatch Platform
Nudge Effect of Fact-Check Alerts: Source Influence and Media Skepticism on Sharing of News Misinformation in Social Media
Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention
FeedReflect: A Tool for Nudging Users to Assess News Credibility on Twitter
Exploring lightweight interventions at posting time to reduce the sharing of misinformation on social media
Understanding and Reducing the Spread of Misinformation Online
Diffusion and persistence of false rumors in social media networks: implications of searchability on rumor self-correction on Twitter
Shifting attention to accuracy can reduce misinformation online
Privacy Nudges for Social Media: An Exploratory Facebook Study
Field experiments on social media
Political fact-checking on Twitter: When do corrections have an effect?
Social correction across party lines in a Twitter field experiment
Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment
SMS advertising: How message relevance is linked to the attitude toward the brand?
Learn After
Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa
Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa - Methodology
Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa - Results
Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa - Discussion