Meta said on Friday that it is developing new technology and expanding its online safety team to combat predators who use Facebook and Instagram, both of which it owns, to exploit children. The announcement follows reports that its platforms allowed accounts related to sexual deviancy and surfaced inappropriate content that sexualizes children.
Main facts
In a blog post, Meta stated that it takes “recent claims about the effectiveness of our work seriously,” and has created a task force to review its current policies.
The Wall Street Journal recently reported that Meta allowed networks related to sexual deviancy on Instagram, and that the platform’s video service displayed inappropriate content involving children to accounts that predominantly follow young users.
Meta said that its Child Safety Task Force has expanded the list of child-safety-related terms, phrases, and symbols used by its systems to identify content, groups, and pages that violate its rules or are otherwise inappropriate.
The company is also using machine learning technology to identify terms related to current violations, and it is linking its systems across Facebook and Instagram to restrict inappropriate content on both platforms.
Meta stated that “potentially suspicious adults” on Instagram will be prevented from following and interacting with one another. Its technology reviews more than 60 different signals to identify these users, such as whether a teenager has blocked or reported an adult account, or whether an adult account has repeatedly searched for suspicious terms.
On Facebook, Meta said it is using its account-review technology to improve how it finds groups, pages, and profiles connected to suspicious members or content. It has also improved its reporting and enforcement systems to find and block potential predator accounts, for example by using technology that detects exploitative images of children and by prioritizing reports that contain similar content.
Key background
The European Commission on Friday requested that Meta, under the Digital Services Act, provide details on how the company is complying with “its obligations to assess risks and take effective mitigation measures” regarding the protection of minors on Instagram. In November, Meta and Snapchat were given until December 1 to provide the EU with more information on how they are protecting children from illegal and harmful content, according to a TechCrunch report.

In June, the Wall Street Journal reported that Instagram had allowed a large network of accounts related to sexual deviancy that were “openly dedicated to promoting and purchasing sex content with minors,” according to its investigation with researchers from Stanford University and the University of Massachusetts Amherst. The report stated that, in addition to not removing the content from its platform, Instagram’s algorithm was promoting it, connecting predators to one another and directing them to the content.

In a follow-up report last week, the paper said its testing found that Instagram’s video algorithm was displaying inappropriate videos involving children, as well as adult videos and advertisements for dating sites, to accounts that primarily follow children and teenagers on the app. In response, Meta said the test produced a manufactured result that does not reflect the typical experience of its billions of users.
What to watch for
Meta CEO Mark Zuckerberg will testify before the Senate Judiciary Committee on January 31 alongside other technology executives, including X CEO Linda Yaccarino and TikTok CEO Shou Zi Chew, to discuss how their platforms have “failed to protect children online,” according to Senators Dick Durbin and Lindsey Graham.
Further reading
Instagram Connects Vast Pedophile Network
Instagram’s Algorithm Delivers Toxic Video Mix to Adults Who Follow Children
Meta is Struggling to Boot Pedophiles Off Facebook and Instagram