Posts Tagged: misinformation

Twitter merges misinformation and spam teams following whistleblower claims

Twitter is making a major change to its organization after former security head Peiter "Mudge" Zatko accused the company of having lax security and bot problems. According to Reuters, Twitter is merging its health experience team, which is in charge of clamping down on misinformation and harmful content on the website, with its service team, which reviews reported profiles and takes down spam accounts. The combined group will be called Health Products and Services (HPS).

The group will be led by Ella Irwin, who joined the company in June and had previously worked for Amazon and Google. Reuters says Irwin sent a memo to staff members, telling them that HPS will "ruthlessly prioritize" its projects. "We need teams to focus on specific problems, working together as one team and no longer operating in silos," Irwin reportedly wrote.

In a statement sent to Reuters, a Twitter spokesperson said the reshuffling "reflects [the company's] continued commitment to prioritize, and focus [its] teams in pursuit of [its] goals." A source also told the news organization that the teams dealing with harmful and toxic content have had major staff departures recently. Merging these two teams may be the best way to ensure that all important roles are filled going forward. 

This news comes on the heels of the revelation that Zatko filed a whistleblower complaint against his former employer. In it, he said Twitter has "extreme, egregious deficiencies" when it comes to security and that it prioritizes user growth over cleaning up spam. Shortly after The Washington Post reported on Zatko's complaint, which also raises concerns about national security, lawmakers from both sides of the aisle announced that they're looking into his claims.

In an email to employees, Twitter CEO Parag Agrawal defended the company and echoed its spokesperson's statement that Zatko's complaint is a "false narrative that is riddled with inconsistencies and inaccuracies." You can read the whole memo, obtained by Bloomberg, below:

"Team,

There are news reports outlining claims about Twitter’s privacy, security, and data protection practices that were made by Mudge Zatko, a former Twitter executive who was terminated in January 2022 for ineffective leadership and poor performance. We are reviewing the redacted claims that have been published, but what we’ve seen so far is a false narrative that is riddled with inconsistencies and inaccuracies, and presented without important context.

I know this is frustrating and confusing to read, given Mudge was accountable for many aspects of this work he is now inaccurately portraying more than six months after his termination. But none of this takes away from the important work you have done and continue to do to safeguard the privacy and security of our customers and their data. This year alone, we have meaningfully accelerated our progress through increased focus and incredible leadership from Lea Kissner, Damien Kieran, and Nick Caldwell. This work continues to be an important priority for us, and if you want to read more about our approach, you can find a summary here.

Given the spotlight on Twitter at the moment, we can assume that we will continue to see more headlines in the coming days – this will only make our work harder. I know that all of you take a lot of pride in the work we do together and in the values that guide us. We will pursue all paths to defend our integrity as a company and set the record straight.

See you all at #OneTeam tomorrow,

Parag"

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics

Facebook labeled 180 million posts for election misinformation

Facebook just offered its first look at the scale of its fight against election misinformation. In the lead-up to the 2020 presidential election, Facebook slapped warning labels on more than 180 million posts that shared misinformation. And it remove…
Engadget

Twitter has labeled 300,000 tweets for election misinformation

A little more than a week after the election, Twitter is giving some additional insight into the effectiveness of its efforts to curb the spread of election misinformation. Between October 27 and November 11, the company labeled about 300,000 tweets…
Engadget

Google will ban coronavirus conspiracy ads to fight misinformation

Google is amping up its fight against coronavirus-related misinformation by banning ads that “[contradict] authoritative scientific consensus” about the pandemic. That means websites and apps can no longer make money from running advertisements promo…
Engadget RSS Feed

The North Face pulls Facebook ads over hate and misinformation policies

Criticism of Facebook’s approaches to hate speech and misinformation may hit the social network where it hurts the most: its finances. CNN reports that clothing brand The North Face has become the most recognizable company yet to join an advocacy gro…
Engadget RSS Feed

WHO joins TikTok to fight coronavirus misinformation

The World Health Organization clearly has an interest in putting a stop to coronavirus misinformation, and that's leading it to online destinations it wouldn't have considered before. The WHO has joined TikTok, and its first videos are, unsurprising…
Engadget RSS Feed

The Four Rs of Responsibility, Part 2: Raising authoritative content and reducing borderline content and harmful misinformation

YouTube is an open video platform, where anyone can upload a video and share it with the world. And with this openness comes incredible opportunities as well as challenges. That’s why we’re always working to balance creative expression with our responsibility to protect the community from harmful content.

Our community guidelines set the rules of the road on YouTube, and a combination of people and machines help us remove more violative content than ever before. That said, there will always be content on YouTube that brushes up against our policies, but doesn’t quite cross the line. So over the past couple of years, we’ve been working to raise authoritative voices on YouTube and reduce the spread of borderline content and harmful misinformation. And we are already seeing great progress. Authoritative news is thriving on our site. And since January 2019, we’ve launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation. The result is a 70% average drop in watch time of this content coming from non-subscribed recommendations in the U.S.[1]

Raising authoritative voices on YouTube

More and more people turn to YouTube to catch up on the latest news or simply learn more about the topics they’re curious about — whether it’s climate change or a natural disaster. For topics like music or entertainment, relevance, newness and popularity are most helpful to understand what people are interested in. But for subjects such as news, science and historical events, where accuracy and authoritativeness are key, the quality of information and context matter most — much more than engagement. That’s why we’ve redoubled our efforts to raise authoritative sources to the top and introduced a suite of features to tackle this challenge holistically:

  • Elevating authoritative sources in our systems: In 2017, we started to prioritize authoritative voices, including news sources like CNN, Fox News, Jovem Pan, India Today and the Guardian, for news and information queries in search results and “watch next” panels. Let’s say you’re looking to learn more about a newsworthy event. For example, try searching for “Brexit.” While there will be slight variations, on average, 93% of the videos in global top 10 results come from high-authority channels. Authoritativeness is also important for evergreen topics prone to misinformation, such as videos about vaccines. In these cases, we aim to surface videos from experts, like public health institutions, in search results. Millions of search queries are getting this treatment today and we’re continually expanding to more topics and countries.
  • Providing reliable information faster for breaking news: Reliable information becomes especially critical as news is breaking. But as events are unfolding, it can take time to produce high-quality videos containing verified facts. So we’ve started providing short previews of text-based news articles in search results on YouTube, along with a reminder that breaking and developing news can rapidly change. We’ve also introduced Top News and Breaking News sections to highlight quality journalism. In fact, this year alone, we’ve seen that consumption on authoritative news partners’ channels has grown by 60 percent.
  • Providing context to users: Sometimes a video alone does not provide enough context to viewers about what they are watching. We want to make sure that people who watch videos about topics prone to misinformation are provided additional information while viewing. To that end, we’ve designed a variety of information panels that target different types of context, such as general topics and recent news prone to misinformation, or about publishers themselves. For example, when people watch videos that encourage viewers to skip the MMR vaccine, we show information panels to provide more basic scientific context, linking to third-party sources. Or if people are viewing news videos uploaded by a public broadcaster or a government-funded news outlet, we show informational notices underneath the video about the news outlet. Collectively, we’ve delivered more than 3.5 billion impressions across all of these information panels since June 2018 and we’re expanding these panels to more and more countries.

Reducing borderline content and harmful misinformation

Content that comes close to — but doesn’t quite cross the line of — violating our Community Guidelines is a fraction of 1% of what’s watched on YouTube in the U.S. To give a quick comparison, meditation videos (a fairly narrow category) have more daily watch time than borderline and harmful misinformation combined. That said, even a fraction of a percent is too much. So this past January, we announced we’d begin reducing recommendations of borderline content or videos that could misinform users in harmful ways. This work is still ramping up and we’ve expanded to more countries outside of the U.S., including the UK, Ireland, South Africa and other English-language markets. And we have begun expanding this effort to non-English-language markets, starting with Brazil, France, Germany, Mexico and Spain.

So how does this actually work? Determining what is harmful misinformation or borderline is tricky, especially for the wide variety of videos that are on YouTube. We rely on external evaluators located around the world to provide critical input on the quality of a video. And these evaluators use public guidelines to guide their work. Each evaluated video receives up to 9 different opinions and some critical areas require certified experts. For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content. And over time, the accuracy of these systems will continue to improve.
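The post doesn't specify how the evaluators' opinions are combined, only that consensus input from up to 9 raters feeds the models. As a minimal sketch, assuming a simple majority-vote aggregation (the label names, tie handling, and routing to expert review are all hypothetical):

```python
from collections import Counter

def consensus_label(ratings):
    """Aggregate up to 9 evaluator ratings into one consensus label.

    Hypothetical scheme: the majority rating wins; a tie between the
    top ratings means no clear consensus, so the video could be routed
    to certified expert review instead.
    """
    if not ratings:
        return None
    counts = Counter(ratings).most_common()
    top_label, top_count = counts[0]
    # A tie between the two most common ratings -> no consensus.
    if len(counts) > 1 and counts[1][1] == top_count:
        return None
    return top_label

# Example: 9 evaluator opinions for one video.
print(consensus_label(["ok", "borderline", "ok", "ok", "harmful",
                       "ok", "borderline", "ok", "ok"]))  # -> ok
print(consensus_label(["ok", "harmful"]))                 # -> None (tie)
```

In a real pipeline, labels like these would become the training targets for the machine-learning systems the post describes, with the models then scoring new uploads at scale.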

Our work continues. We are exploring options to bring in external researchers to study our systems and we will continue to invest in more teams and new features. Nothing is more important to us than ensuring we are living up to our responsibility. We remain focused on maintaining that delicate balance which allows diverse voices to flourish on YouTube — including those that others will disagree with — while also protecting viewers, creators and the wider ecosystem from harmful content.

[Read more] The Four Rs of Responsibility, Part 1: Removing harmful content


[1] Based on the 28-day average from 9/17/19 – 10/14/19, compared to when we first started taking action on this type of content in January 2019.

From the timeline:


July 27, 2015; https://youtube.googleblog.com/2015/07/youtube-comments.html

Sept 14, 2016; https://www.blog.google/outreach-initiatives/google-news-initiative/digital-news-initiative-introducing/

July 20, 2017; https://youtube.googleblog.com/2017/07/bringing-new-redirect-method-features.html

Feb 2, 2018; https://youtube.googleblog.com/2018/02/greater-transparency-for-users-around.html

July 9, 2018; https://youtube.googleblog.com/2018/07/building-better-news-experience-on.html


March 7, 2019; https://india.googleblog.com/2019/04/bringing-greater-transparency-and.html

June 3, 2019; https://youtube.googleblog.com/2019/06/an-update-on-our-efforts-to-protect.html

June 5, 2019; https://youtube.googleblog.com/2019/06/our-ongoing-work-to-tackle-hate.html

July 8, 2019; https://youtube-creators.googleblog.com/2019/08/preserving-openness-through-responsibility.html


YouTube Blog