Wednesday, March 24, 2021
This article was first featured in Yahoo Finance Tech, a weekly newsletter highlighting our original content on the industry. Get it sent directly to your inbox every Wednesday by 4 p.m. ET.
Facebook (FB) CEO Mark Zuckerberg, Google (GOOG, GOOGL) CEO Sundar Pichai, and Twitter (TWTR) CEO Jack Dorsey will face another Congressional grilling on Thursday about lies on their platforms.
The hearing before the House Energy and Commerce Committee bills itself as addressing “social media’s role in promoting extremism and misinformation.” In reality, lawmakers will likely target Washington’s favorite bogeyman: Section 230 of the Communications Decency Act.
Section 230, considered the cornerstone of the modern internet, protects websites from liability for third-party content posted to their sites and allows them to moderate that content freely. While the law's fans say it allows the internet to function as a free marketplace of ideas, it was passed back when Mark Zuckerberg was in middle school, and it's worth re-examining in light of how much the internet has changed since 1996.
But if the hearing resembles two other Big Tech hearings in Congress from the past year, it will devolve into political theater. That's because the law is blamed for doing two contradictory things at once. Many Republicans say sites hide behind it to censor conservative content they don't agree with. Those on the opposite side of the aisle, however, say sites use the law to let disinformation and anger run rampant, ultimately benefiting their bottom lines.
If either side wants an actual discussion about the law, lawmakers will have to ask the CEOs difficult questions: how exactly they moderate content, and whether they acknowledge their platforms' roles in real-world violence. Members of Congress will also have to ask themselves just what they want to change about Section 230.
How do your algorithms work?
Congress needs to kick things off by asking the CEOs serious questions about the algorithms their sites use to recommend content to their users.
Both Facebook and Google's YouTube have been criticized for using algorithms that guide users toward more divisive and extreme content. According to The Wall Street Journal, Facebook ignored its own internal studies showing that the company's algorithms aggravated polarization on the platform.
In October, Facebook announced that it was suspending algorithmic recommendations for political groups ahead of the 2020 election. It made the change permanent following the Capitol attack.
During prior hearings, the CEOs have fallen back on the popular refrain that their algorithms function as black boxes: they feed the algorithms information about users, and the algorithms spit out content. But a closer look at how those algorithms work and what kind of content they favor would give Congress a better understanding of why disinformation, misinformation, and hate speech spread across these platforms.
How do you make decisions about what content to moderate?
The way Big Tech companies moderate their platforms can often seem arbitrary, even though they outline rules of conduct in their terms of service. Facebook, in particular, has faced criticism for allowing hate speech that seemingly violates its own terms.
India McKinney, director of federal affairs at the Electronic Frontier Foundation, wants lawmakers to probe CEOs about how they make these decisions.
“They’re not altruistic decisions…and they’re very clear about this,” McKinney said. “Their mission is to make money for their shareholders. The questions are really around transparency, and why the businesses make the decisions they make.”
Experts have been asking about the decision-making behind companies' moderation practices for years, and for good reason. Facebook, for instance, has reportedly softened its stance on moderating content from professional agitators like Alex Jones, and it's crucial to understand how such decisions get made.
What role do you believe social media plays in real-world violence?
Social media sites have been widely accused of allowing former President Trump’s supporters to plan and coordinate their Jan. 6 attack on the Capitol. Congress will need to bluntly ask the CEOs if they believe their sites play a role in real-world violence, and to what degree.
Facebook has also been linked to a slew of international incidents of violence, including attacks on Myanmar's Muslim Rohingya population, while misinformation on Facebook's WhatsApp has been blamed for gang killings in India.
More recently, hate speech and disinformation about the coronavirus have coalesced on social media sites like Facebook and Twitter, as well as fringe sites like 4chan, and, as The New York Times explains, that has led to real-world violence against Asian Americans.
What is the problem with Section 230?
Even if Congress gets answers out of the CEOs, lawmakers will still have to determine exactly what they want to change about Section 230.
“The core question that Congress needs to answer is to define for itself what the problem is, and then ask the services what they can do to fix that problem,” Eric Goldman, associate dean for research and professor at Santa Clara University School of Law, told Yahoo Finance. “Since Congress doesn’t have a good sense about what problems it wants to fix, it can never elicit information to answer its questions.”
If Congress can’t find common ground on how to fix Section 230, this hearing and others won’t lead to any real change — no matter how many probing questions lawmakers ask.
By Daniel Howley, tech editor. Follow him at @DanielHowley