Facebook said on Tuesday it was stepping up its fight against child abuse with new tools to spot this content and tougher rules on what crosses the line.
“Using our apps to harm children is heinous and unacceptable,” Global Chief Safety Officer Antigone Davis said in a blog post.
“We are developing targeted solutions, including new tools and policies to reduce the sharing of this type of content.”
The social media giant has updated its guidelines to make clear that it will remove Facebook or Instagram accounts dedicated to sharing images of children posted with captions, hashtags or comments containing innuendo or signs of inappropriate affection.
“We have always removed content that explicitly sexualizes children, but content that is not explicit and does not represent child nudity is more difficult to define,” said Davis.
“Under this new policy, while images alone may not break our rules, the accompanying text can help us better determine whether the content sexualizes children and whether the associated profile, page, group or account should be deleted.”
Among the new tools being tested is one that triggers pop-up messages in response to search terms associated with child exploitation, warning of the consequences of viewing this material and suggesting people get help to change their behaviour.
Facebook is also testing a security alert that informs people sharing child exploitation content of the damage it causes and the legal consequences, according to Davis.
In addition to being removed for violating Facebook’s policies, these posts are reported to the National Center for Missing and Exploited Children (NCMEC).
“We use the information from this security alert to help us identify behavioral signals from those who might be at risk for sharing this material,” Davis said.
An analysis of illegal child exploitation posts reported to NCMEC late last year found that more than 90 percent of them were the same as, or very similar to, previously reported content, according to Facebook.
Just six videos made up more than half of the content reported during that time, Davis said.
Facebook has worked with NCMEC and other groups to glean the apparent intent of people sharing such content.
It concluded that more than 75 percent of the shares examined did not appear to be malicious, but were made for reasons such as expressing outrage or poor attempts at humour, according to Davis.
Facebook’s plan to provide end-to-end encryption across all of its messaging platforms has raised concerns among law enforcement officials, who say the move could allow criminals to hide their communications.
Note: The content and images used in this article are rewritten and sourced from gadgets.ndtv.com