
2020 hopefuls seek crackdown on white nationalism


A conference worker passes by a demo booth at Facebook’s annual F8 developer conference in San Jose, California. File/AP

Casey Tolan, Tribune News Service

The gunman who killed 20 people in El Paso, Texas, posted his white supremacist manifesto on the right-wing forum 8chan. The man who massacred 51 people at a Christchurch, New Zealand, mosque streamed video of the attack live on Facebook. And the shooter who gunned down three people at the garlic festival in Gilroy, California, appeared to upload an Instagram post referencing a white nationalist author just hours before he opened fire.

As the death toll rises from shootings carried out by people who’ve espoused white nationalist ideas, some Democratic presidential candidates are calling for social media companies to more forcefully crack down on hateful content on their platforms. Their goal is to make it harder for groups or individuals to coordinate their activities and circulate content targeting racial, ethnic and religious minorities.

But the candidates’ farthest-reaching proposals to stop extremists from weaponizing the internet have also sparked constitutional concerns over how far the government can go in policing online speech. Social media has long been a crucial tool for extremist groups to disseminate their rhetoric and recruit new supporters — both on fringe right-wing platforms like 8chan and in dark corners of more mainstream networks.

The organizers of the white nationalist rally in Charlottesville, Va., in August 2017 that left one woman dead planned their event in Facebook group chats, as well as on the chat website Discord. And the man who killed 11 people at a Pittsburgh synagogue last October had a history of sharing content and interacting with high-profile white nationalists on the right-wing social media network Gab in the year before his attack, an analysis by the Southern Poverty Law Center found.

The El Paso gunman cited the Christchurch massacre in a manifesto he posted online, while the Christchurch shooter, in turn, cited a 2011 mass murder in Norway carried out by an Islamophobic extremist — a sign of how one attack can inspire others in a deadly cycle.

“White supremacists are using social media to connect and spread their hate and evil to others,” said Farah Pandith, a former State Department official who focused on fighting violent extremism and has written a book on the subject. “The tech companies have been slow to act and limited in their scope — we have to be realistic about the importance and seriousness of this threat.”

And it is a serious threat. Domestic right-wing terrorism is responsible for more deaths on US soil since 9/11 than jihadism, according to statistics compiled by the New America think tank. Experts say many of those attacks appear to be fueled by strands of the same racist ideology: the belief that white people are being “replaced” by minorities or foreigners.

Former Texas Rep. Beto O’Rourke, an El Paso native, would go furthest of the presidential candidates in rethinking legal protections for social networks.

Currently, social media companies are insulated from lawsuits about content posted by users, under Section 230 of the Communications Decency Act — a provision that’s been called “the 26 words that created the internet.”

O’Rourke’s proposal would strip that legal immunity from large companies that don’t set policies to block content that incites violence, intimidation, harassment, threats or defamation based on traits such as race, sex or religion. And all internet companies could be held liable for knowingly promoting content that incites violence.

“This is a matter of life and death, and tech executives have a moral obligation to play an active role in banning online activities that incite violence and acts of domestic terrorism,” O’Rourke spokeswoman Aleigha Cavalier said in an email.

Most experts believe the First Amendment allows private companies to block content on their platforms. But it’s questionable whether the government can tell social media companies what speech they should block and what they should allow, said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy who wrote a book on legal protections in the digital age.

Kosseff said it’s a legal issue that hasn’t been tested in the courts, and the outcome would depend on the exact language of the law O’Rourke is proposing. “There are certain types of speech that the government can regulate,” such as imminent incitement of violence, or literal threats, he said. “But hate speech standing alone is really tricky.”

Many of the mainstream social media giants have already voluntarily set terms of service that seek to block white nationalist content. Earlier this summer, YouTube said it would remove videos that promote white nationalism and other forms of hate speech, as well as videos denying the Holocaust.

But putting those policies into effect can be tricky. Sometimes, white nationalists just create new accounts after they’re banned, forcing platforms to play a game of whack-a-mole. “This isn’t something you can throw a bunch of A.I. at to fix,” Kosseff said. “The types of threats are evolving so rapidly.”

The law protecting social media companies from lawsuits, Section 230, has already faced criticism in recent months from both sides of the aisle, including from House Speaker Nancy Pelosi of California and Texas Sen. Ted Cruz. Increased attention on how extremist thought spreads on these platforms could give critics of the law more ammunition.

To combat online extremism, South Bend, Ind., Mayor Pete Buttigieg is proposing the government work with social media companies to help them fight the spread of hateful ideology, such as providing federal funding to improve software tools to identify extremism. That work would be “within the boundaries of internet companies’ terms of service and consistent with the First Amendment,” Buttigieg’s plan states.

He also wants to invest $1 billion to increase the FBI’s domestic counterterrorism field staff and reverse Trump administration cuts to programs that counter violent extremism. “As an intelligence officer in the United States military who specialized in counterterrorism, I’ve seen firsthand what a concerted, coordinated effort to fight terror can do and what it will take to fully confront this threat,” Buttigieg wrote in his plan.

Meanwhile, both Buttigieg and California Sen. Kamala Harris have called for federal law enforcement to more actively monitor websites that are known as hotbeds of white nationalist thought. Harris wants to expand the mandate of the National Counterterrorism Center to include domestic, white-nationalist terrorism, not just foreign threats. She’d also work to pass legislation letting courts temporarily take firearms away from “a suspected terrorist or individual who may imminently perpetrate a hate crime.”

The debate over online extremism is just the latest example of Democratic candidates taking a hard line on Silicon Valley, alongside issues such as breaking up big tech companies. Sen. Elizabeth Warren, who’s championed that idea, says the growing influence of white supremacist groups on social media shows more needs to be done. Warren has argued that splitting up Facebook and other large companies would force them to be more accountable for their content, a campaign spokesperson said.

Former Vice President Joe Biden and Sen. Bernie Sanders have called for a renewed focus by law enforcement on white nationalism, but haven’t released specific plans on the issue.
