
Twitter updates violent speech policy to ban ‘wishes of harm’

Twitter is once again tightening its rules around what users are permitted to say on the platform. The company introduced an updated “violent speech” policy, which contains some notable additions compared with previous versions of the rules.

Interestingly, the new policy prohibits users from expressing “wishes of harm” and similar sentiments. “This includes (but is not limited to) hoping for others to die, suffer illnesses, tragic incidents, or experience other physically harmful consequences,” the rules state. That’s a reversal from Twitter’s previous policy, which explicitly said that “statements that express a wish or hope that someone experiences physical harm” were not against the company’s rules.

“Statements that express a wish or hope that someone experiences physical harm, making vague or indirect threats, or threatening actions that are unlikely to cause serious or lasting injury are not actionable under this policy,” Twitter’s previous policy stated, according to the Wayback Machine.

That change isn’t the only addition to the policy. Twitter’s rules now also explicitly protect “infrastructure that is essential to daily, civic, or business activities” from threats of damage. From the rules:

You may not threaten to inflict physical harm on others, which includes (but is not limited to) threatening to kill, torture, sexually assault, or otherwise hurt someone. This also includes threatening to damage civilian homes and shelters, or infrastructure that is essential to daily, civic, or business activities.

These may not seem like particularly eyebrow-raising changes, but they are notable given Elon Musk’s previous statements about how speech should be handled on Twitter. Prior to taking over the company, the Tesla CEO stated that his preference would be to allow all speech that is legal. “I think we would want to err on the side of, if in doubt, let the speech exist,” he said at the time.

It’s also not the first time Twitter’s rules have become more restrictive since Musk’s takeover. The company’s rules around doxxing changed following his dustup with the (now suspended) @elonjet account, which shared the whereabouts of Musk’s private jet.

Twitter didn’t explain its rationale for the changes, but noted in a series of tweets that it may suspend accounts breaking the rules or force them to delete the tweets in question. The company no longer has a communications team to respond to requests for comment.

This article originally appeared on Engadget at https://www.engadget.com/twitter-updates-violent-speech-policy-to-ban-wishes-of-harm-214320985.html?src=rss

An update on our commitment to fight violent extremist content online

In June, we announced four steps we’re taking to combat terrorist content on YouTube:

  1. Better detection and faster removal powered by machine learning;
  2. More expert partners to help identify violative content;
  3. Tougher standards for videos that are controversial but do not violate our policies; and
  4. Amplified voices speaking out against hate and extremism.

We shared our progress across these steps in August and wanted to update you again on where things are today.

Better detection and faster removal

We’ve always used a mix of human flagging and human review together with technology to address controversial content on YouTube. In June, we introduced machine learning to flag violent extremism content and escalate it for human review. We continue to get faster here:

  • Over 83 percent of the videos we removed for violent extremism in the last month were taken down before receiving a single human flag, up 8 percentage points since August.
  • Our teams have manually reviewed over a million videos to improve this flagging technology by providing large volumes of training examples.

Inevitably, both humans and machines make mistakes, and as we have increased the volume of videos for review by our teams, we have made some errors. We know we can get better, and we are committed to making sure our teams are taking action on the right content. We are also working on ways to educate those who share videos meant to document or expose violence on how to add the necessary context.

More experts

Outside experts are essential to advising us on our policies and flagging content for additional inputs that better train our systems. Our partner NGOs bring expert knowledge of complex issues like hate speech, radicalization, and terrorism.

We have added 35 NGOs to our Trusted Flagger program, which is 70 percent of the way towards our goal. These new partner NGOs represent 20 different countries and include NGOs like the International Center for the Study of Radicalization at King’s College London and The Wahid Institute in Indonesia, which is dedicated to promoting religious freedom and tolerance.

Tougher standards

We started applying tougher treatment to videos that aren’t illegal and don’t violate our Guidelines, but contain controversial religious or supremacist content. These videos remain on YouTube, but they sit behind a warning interstitial, aren’t recommended or monetized, and lack key features including comments, suggested videos, and likes. This is working as intended and is helping us balance upholding free expression, by providing a historical record of content in the public interest, with keeping these videos from being widely spread or recommended to others.

Amplify voices speaking out against hate and extremism

We continue to support programs that counter extremist messages. We are researching how to expand Jigsaw’s Redirect Method to new languages and search terms. We’re also investing heavily in our YouTube Creators for Change program to support Creators who are using YouTube to tackle social issues and promote awareness, tolerance, and empathy. Every month these Creators release engaging new videos and campaigns to counter hate and social divisiveness:

  • In September, three of our fellows, from Australia, the U.K., and the U.S., debuted their videos on the big screen at the Tribeca TV festival, tackling topics like racism, xenophobia, and the experiences of first-generation immigrants.
  • Local YouTube Creators in Indonesia partnered with the MAARIF Institute and YouTube Creators for Change Ambassador, Cameo Project, to visit ten different cities and train thousands of high school students on promoting tolerance and speaking out against hate speech and extremism.
  • We’re adding two new local Creators for Change chapters, in Israel and Spain, to the network of chapters around the world.

In addition to this work supporting voices that counter hate and extremism, last month Google.org announced a $5 million innovation fund to counter hate and extremism. This funding will support technology-driven solutions, as well as grassroots efforts like community youth projects that help build communities and promote resistance to radicalization.

Terrorist and violent extremist material should not be spread online. We will continue to invest heavily in fighting the spread of this content, provide updates to governments, and collaborate with other companies through the Global Internet Forum to Counter Terrorism. There remains more to do, so we look forward to continuing to share our progress with you.

The YouTube Team


YouTube Blog

A police robot disarmed a violent suspect in Los Angeles County

Last week, on September 8th, the Los Angeles County Sheriff's department successfully used a remote-controlled bomb squad robot to snatch a rifle out from under an armed and violent suspect. The standoff between the suspect and an armored SWAT team l…