Tech Execs Say They're Acting Faster on Extremist Content - NBC 10 Philadelphia

Tech Execs Say They're Acting Faster on Extremist Content

U.S. lawmakers want to know what the companies are doing to remove hate speech from their platforms and how they are coordinating with law enforcement



    J. Scott Applewhite/AP
    From left: Monika Bickert, head of global policy management at Facebook; Nick Pickles, public policy director for Twitter; and Derek Slater, global director of information policy at Google, testify before the Senate Commerce, Science and Transportation Committee on how internet and social media companies are prepared to thwart terrorism and extremism, Sept. 18, 2019, on Capitol Hill in Washington.

    Executives of Facebook, Google and Twitter told Congress on Wednesday that they've gotten better and faster at detecting and removing violent extremist content on their social media platforms in the face of mass shootings fueled by hatred.

    Questioned at a hearing by the Senate Commerce Committee, the executives said they are spending money on technology to improve their ability to flag extremist content and taking the initiative to reach out to law enforcement authorities to try to head off potential violent incidents.

    "We will continue to invest in the people and technology to meet the challenge," said Derek Slater, Google's director of information policy.



    "We are experiencing a surge of hate. ... Social media is used to amplify that hate," said Sen. Maria Cantwell of Washington state, the panel's senior Democrat.

    The company executives testified that their technology for identifying and taking down suspect content is improving.

    Of the 9 million videos removed from Google's YouTube in the second quarter of the year, 87% were flagged by a machine using artificial intelligence, and many of them were taken down before they got a single view, Slater said.

    After the February 2018 high school shooting in Florida that killed 17 people, Google began to proactively reach out to law enforcement authorities to see how they could better coordinate, Slater said. Nikolas Cruz, the shooting suspect, had posted on a YouTube page beforehand, "I'm going to be a professional school shooter," authorities said.

    Word came this week from Facebook that it will work with law enforcement organizations to train its AI systems to recognize videos of violent events as part of a broader effort to crack down on extremism. Facebook's AI systems were unable to detect livestreamed video of the mosque shootings in New Zealand in March that killed 50 people. The self-professed white supremacist accused of the shootings had livestreamed the attack.

    The effort will use bodycam footage of firearms training provided by U.S. and U.K. government and law enforcement agencies.


    Facebook also is expanding its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate. The company has had mixed success in its efforts to limit the spread of extremist material on its service.

    Facebook appears to have made little progress, for example, on its automated systems for removing prohibited content glorifying groups like the Islamic State in the four months since The Associated Press detailed how Facebook pages auto-generated for businesses are aiding Middle East extremists and white supremacists in the U.S. The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week.

    Facebook said in response that it removes any auto-generated pages "that violate our policies. While we cannot catch every one, we remain vigilant in this effort."

    Monika Bickert, Facebook's head of global policy management, said at the Senate hearing that the company has gotten much faster at detecting terrorism, violence and hate speech. "We know that people need to be safe," she said. Bickert noted that Facebook removes any content that promotes violence or white supremacy or nationalism, as well as content indicating suicidal intent, and disables accounts when threats are detected.

    Twitter's director of public policy strategy, Nick Pickles, said the service suspended more than 1.5 million accounts for promoting terrorism between Aug. 1, 2015, and Dec. 31, 2018. More than 90% of those accounts were suspended through Twitter's proactive measures, he said, rather than in response to reports from government and law enforcement.

    Sen. Rick Scott, R-Fla., asked Pickles why Twitter hadn't suspended the account of Venezuelan socialist leader Nicolas Maduro, who has presided over a deepening economic and political crisis and has threatened opposition politicians with criminal prosecution.


    If Twitter removed Maduro's account, "it would not change facts on the ground," Pickles said.

    Scott said he disagreed because Maduro's account with some 3.7 million followers provides him with legitimacy as a world leader.