By David Shepardson
WASHINGTON (Reuters) – Major U.S. social media firms told a Senate panel Wednesday they are doing more to remove violent or extremist content from their online platforms in the wake of several high-profile incidents, focusing on technological tools that let them act faster.
Critics say too many violent videos or posts that back extremist groups supporting terrorism are not immediately removed from social media websites.
Senator Richard Blumenthal, a Democrat, said social media firms need to do more to prevent violent content.
Facebook’s head of global policy management, Monika Bickert, told the Senate Commerce Committee its software detection systems have “reduced the average time it takes for our AI to find a violation on Facebook Live to 12 seconds, a 90% reduction in our average detection time from a few months ago.”
In May, Facebook Inc said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.
Bickert said Facebook asked law enforcement agencies to help it access “videos that could be helpful training tools” to improve its machine learning to detect violent videos.
Earlier this month, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill after police in Texas said they were “reasonably confident” the man who shot and killed 22 people at a Walmart in El Paso, Texas, had posted on the site before the attack.
Facebook banned links to violent content that appeared on 8chan.
Twitter Inc public policy director Nick Pickles said the website suspended more than 1.5 million accounts for terrorism promotion violations between August 2015 and the end of 2018, adding that “more than 90% of these accounts are suspended through our proactive measures.”
Senator Rick Scott asked Twitter why the site allows Venezuelan President Nicolas Maduro to have an account given what he called a series of brazen human rights violations. “If we remove that person’s account it will not change facts on the ground,” said Pickles, who added that Maduro’s account has not broken Twitter’s rules.
Alphabet Inc unit Google’s global director of information policy, Derek Slater, said the answer is “a combination of technology and people. Technology can get better and better at identifying patterns. People can help deal with the right nuances.”
Of the 9 million videos YouTube removed in a three-month period this year, 87% were flagged by artificial intelligence.
(Reporting by David Shepardson; Editing by Nick Zieminski)