When Elon Musk took control of X, formerly known as Twitter, one of his first moves was to overhaul the company’s content policies. On misinformation, Musk relies on X users themselves through the Community Notes program, which lets approved contributors add context to posts that contain inaccuracies or lies.
X turns content moderation into Community Notes
Misinformation on X has been prominent enough in recent months that the European Union threatened action against the company, opening an investigation that could expose X to massive fines.
Last month, X announced several changes to Community Notes in an apparent attempt to appease EU officials. X’s CEO, Linda Yaccarino, published dozens of posts in October with replies and reposts disabled; nine of them concerned updates to the Community Notes program. That same month, X executive Joe Benaroch reached out to Mashable with a press release about the Community Notes updates, the first time Mashable had heard from X since Elon Musk took over the company last year.
It has become clear from all of the above that X is relying on Community Notes to combat misinformation. But Community Notes is failing.
Misinformation gets more views than validation
On the night of October 17, X user @MichalSabra posted a video claiming to show pro-Palestinian student protesters at UPenn chanting “We want Jewish genocide.”
However, users who watched the video could hear that this was not true: the protesters were actually chanting the accusatory slogan “We accuse you of genocide.” The ADL confirmed as much in its own debunking of the claim.
When Mashable first began tracking the post, it had already garnered over half a million views within 12 hours. After nearly 20 hours, the first community note had been submitted but not yet approved, meaning it was visible only to Community Notes program members, not to the wider user base.
On October 19, two days after @MichalSabra posted the false claim, Mashable received a notification from X that a community note had been approved on the post and had been viewed 100,000 times.
As of November 28, @MichalSabra’s post had not been deleted and had reached 3.9 million views. The two community notes approved on the post have a combined total of only 185,000 views, meaning the fact-check reached just 4.7 percent of the users who saw the original claim.
Moreover, according to X, those two community notes also appear on 22 other posts containing @MichalSabra’s video, and their view counts include impressions from those posts as well, so the share of this post’s viewers who actually saw the note is likely even smaller.
When community notes disappear
On November 18, X user @elikowaz posted an image of a poster reading “I love Hamas,” claiming it had been printed and circulated on campus by the Social Justice Center at the University of British Columbia.
Within hours of the post going up, the Social Justice Center denied the allegation. Hillel BC later shared that a third party linked to the Jewish campus organization was actually responsible for the posters.
When Mashable began monitoring the post on November 20, it had gained over 600,000 views, while the approved community note containing the fact-check had received just over 25,000.
As of November 28, the @elikowaz post was still up and had reached one million views. Even after the community note was approved, the original post containing the misinformation continued to gather views far faster than the fact-check.
In just one week, the post gained about 400,000 additional views with the community note attached. If the note had been shown to everyone who viewed the post during that week, it would have accumulated about 425,000 views. Instead, it garnered only 143,000, far fewer than the post it was meant to correct.
X’s community note metrics raise questions
X has repeatedly touted the number of views community notes receive on the platform. On October 14, X announced that community notes generated “more than 85 million impressions in the past week” following the Hamas attack on October 7. Ten days later, in a press release sent to Mashable, X stated that community notes had been “viewed over 100 million times” over the past two weeks. Then on November 14, X published a web article claiming that “notes were viewed more than a hundred million times” in “the first month of the conflict.”
Unless overall traffic and posting volume on the platform dropped, X’s own figures suggest that views of community notes declined over the course of the month: 85 million in a single week in mid-October, yet only around 100 million across the entire first month.
Furthermore, given the reach of many of the misinformation posts Mashable examined, 100 million views across all community notes is a strikingly small number.
For instance, on November 9, X user @Partisangirl posted a video claiming to show Israeli helicopters firing on Israeli civilians at the Supernova music festival on October 7. The claim was debunked: the video actually showed Israeli helicopters attacking Hamas at a different location.
Yet the @Partisangirl post with the false claim had reached 30 million views by November 29. That single post, not yet three weeks old, accumulated roughly 30 percent of the views the entire Community Notes program received in a month. The note attached to it garnered only about 244,200 views, roughly 0.8 percent of the post’s reach; the misinformation was viewed 123 times more than the fact-check.
Most users do not see corrective community notes
The @Partisangirl post is an extreme case, but Mashable found a significant discrepancy between a post’s views and the views of its attached community note in most of the cases we observed.
On October 19, the misinformation-peddling user @DrLoupis shared a post attributing a false quote to Turkish President Erdoğan, claiming the country would intervene in Gaza. The post had been viewed nine million times by November 29. Two separate community notes were submitted within hours of @DrLoupis’s post and later approved, yet together they have only about 740,000 views.
In another post, published November 26, @DrLoupis presented a photo of a child in Gaza as an Israeli boy. In three days, the false post reached 6.3 million views. The approved community note was not submitted until nearly 27 hours after the post went up, and it garnered just over 111,000 views: the misinformation was seen 57 times more than the fact-check.
Finally, @dom_lucre, a major spreader of misinformation on X, posted on October 23 that U.S. forces had been attacked in Syria, attaching an image of an explosion. The image actually showed an Israeli airstrike in Gaza in 2018. The post garnered nearly 800,000 views, about 10 times more than the community note, which received only 82,400.
Verified users are major misinformation spreaders
All accounts mentioned in this report so far – @MichalSabra, @elikowaz, @Partisangirl, @DrLoupis, and @dom_lucre – are subscribers to the paid verification service X Premium, formerly known as Twitter Blue.
Mashable did not single out these users; the posts we tracked were among the most widely shared on the platform. Some were found through X’s search feature, others through @HelpfulNotes, an official X account that shares posts that receive a community note. Notably, X gives X Premium subscribers an algorithmic boost, helping their posts spread across the platform.
For example, user @jacksonhinklle has become one of the most influential figures on the platform since October 7, gaining millions of followers in just a few weeks. He is also an X Premium subscriber and a regular spreader of misinformation, and in one case his misinformation received over 10 times the views that the fact-check did.
On November 8, @jacksonhinklle posted that an Israeli sniper named Barib Yairiel had been killed by Hamas. The post garnered 6.4 million views, but no such sniper exists. Four community notes were approved on the post, the first arriving 10 hours after @jacksonhinklle published it. Together, the approved notes received about 639,300 views, less than 10 percent of the views @jacksonhinklle’s post received.
Many X Premium subscribers, including several individuals mentioned in this report, monetize their posts on the platform: Musk’s company pays them based on how many other subscribers see ads placed on their posts. On October 30, however, Musk announced a policy change under which posts receiving a community note would no longer be eligible for the ad revenue sharing program. Yet, as NewsGuard found in an analysis conducted weeks after Musk’s announcement, ads from major brands like Microsoft, Pizza Hut, and Airbnb still appeared on posts containing misinformation about Israel and Palestine.
While this investigation focused primarily on misinformation related to Israel and Gaza, Mashable monitored other posts and found the same pattern across a range of topics. For instance, the verified user @LaurenWitzkeDE posted a video of a “lab-grown chicken thigh” moving and pulsating on a table, claiming it showed “mysterious meat grown in Bill Gates’ lab.” The video did not depict real chicken; it was an art piece created by an artist on TikTok. The original post received 1.6 million views, while the community note with the fact-check garnered just over 203,000.
Mashable monitored 50 posts over the past two months, excluding some from the investigation after their community notes were removed or otherwise vanished. Of the posts we tracked, only three had community notes whose view counts reached even roughly half the views of the post containing the false claim, and two of those featured the same manipulated media.
In short, misinformation often spreads on X without any community note at all. In another common scenario, a note is approved but later removed from the post. And even among posts where a note is approved and stays attached, the false claim is typically seen about five to ten times more often than the fact-check; sometimes, as in the examples above, the gap is far larger.