Posts Tagged: Part

Sony indefinitely delays ‘The Last of Us Part II’

Naughty Dog has just about finished work on The Last of Us Part II. It's taking care of the last few bugs, and then the long-awaited sequel should be all set. However, Sony has delayed the game for a second time. It's pushing back the release…
Engadget RSS Feed

The Four Rs of Responsibility, Part 2: Raising authoritative content and reducing borderline content and harmful misinformation

YouTube is an open video platform, where anyone can upload a video and share it with the world. And with this openness comes incredible opportunities as well as challenges. That’s why we’re always working to balance creative expression with our responsibility to protect the community from harmful content.

Our community guidelines set the rules of the road on YouTube, and a combination of people and machines help us remove more violative content than ever before. That said, there will always be content on YouTube that brushes up against our policies, but doesn’t quite cross the line. So over the past couple of years, we’ve been working to raise authoritative voices on YouTube and reduce the spread of borderline content and harmful misinformation. And we are already seeing great progress. Authoritative news is thriving on our site. And since January 2019, we’ve launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation. The result is a 70% average drop in watch time of this content coming from non-subscribed recommendations in the U.S.1

Raising authoritative voices on YouTube

More and more people turn to YouTube to catch up on the latest news or simply learn more about the topics they’re curious about — whether it’s climate change or a natural disaster. For topics like music or entertainment, relevance, newness and popularity are most helpful to understand what people are interested in. But for subjects such as news, science and historical events, where accuracy and authoritativeness are key, the quality of information and context matter most — much more than engagement. That’s why we’ve redoubled our efforts to raise authoritative sources to the top and introduced a suite of features to tackle this challenge holistically:

  • Elevating authoritative sources in our systems: In 2017, we started to prioritize authoritative voices, including news sources like CNN, Fox News, Jovem Pan, India Today and the Guardian, for news and information queries in search results and “watch next” panels. Let’s say you’re looking to learn more about a newsworthy event. For example, try searching for “Brexit.” While there will be slight variations, on average, 93% of the videos in global top 10 results come from high-authority channels. Authoritativeness is also important for evergreen topics prone to misinformation, such as videos about vaccines. In these cases, we aim to surface videos from experts, like public health institutions, in search results. Millions of search queries are getting this treatment today and we’re continually expanding to more topics and countries.
  • Providing reliable information faster for breaking news: Reliable information becomes especially critical as news is breaking. But as events are unfolding, it can take time to produce high-quality videos containing verified facts. So we’ve started providing short previews of text-based news articles in search results on YouTube, along with a reminder that breaking and developing news can rapidly change. We’ve also introduced Top News and Breaking News sections to highlight quality journalism. In fact, this year alone, we’ve seen that consumption on authoritative news partners’ channels has grown by 60 percent.
  • Providing context to users: Sometimes a video alone does not provide enough context to viewers about what they are watching. We want to make sure that people who watch videos about topics prone to misinformation are provided additional information while viewing. To that end, we’ve designed a variety of information panels that target different types of context, such as general topics and recent news prone to misinformation, or about publishers themselves. For example, when people watch videos that encourage viewers to skip the MMR vaccine, we show information panels to provide more basic scientific context, linking to third-party sources. Or if people are viewing news videos uploaded by a public broadcaster or a government-funded news outlet, we show informational notices underneath the video about the news outlet. Collectively, we’ve delivered more than 3.5 billion impressions across all of these information panels since June 2018 and we’re expanding these panels to more and more countries.

Reducing borderline content and harmful misinformation

Content that comes close to — but doesn’t quite cross the line of — violating our Community Guidelines is a fraction of 1% of what’s watched on YouTube in the U.S. To give a quick comparison, meditation videos (a fairly narrow category) have more daily watch time than borderline and harmful misinformation combined. That said, even a fraction of a percent is too much. So this past January, we announced we’d begin reducing recommendations of borderline content or videos that could misinform users in harmful ways. This work is still ramping up and we’ve expanded to more countries outside of the U.S., including the UK, Ireland, South Africa and other English-language markets. And we have begun expanding this effort to non-English-language markets, starting with Brazil, France, Germany, Mexico and Spain.

So how does this actually work? Determining what is harmful misinformation or borderline content is tricky, especially given the wide variety of videos on YouTube. We rely on external evaluators located around the world to provide critical input on the quality of a video, and these evaluators use public guidelines to guide their work. Each evaluated video receives up to 9 different opinions, and some critical areas require certified experts. For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content. And over time, the accuracy of these systems will continue to improve.
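As a rough illustration of the consensus step, here is a minimal sketch. It assumes, hypothetically, that each evaluator returns a score in [0, 1] and that a simple median threshold stands in for whatever aggregation YouTube actually uses, which is not public:

```python
from statistics import median

def consensus_label(ratings, threshold=0.5):
    """Aggregate up to 9 evaluator ratings (each in [0, 1], where higher
    means 'more borderline') into a single training label. Using the
    median keeps the result robust to a single outlier opinion."""
    if not ratings:
        raise ValueError("at least one evaluator rating is required")
    return 1 if median(ratings) >= threshold else 0

# Seven of nine hypothetical evaluators consider the video borderline:
label = consensus_label([0.9, 0.8, 0.7, 0.6, 0.9, 0.8, 0.7, 0.2, 0.1])
```

Labels produced this way could then serve as training targets for the classification models the post describes.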

Our work continues. We are exploring options to bring in external researchers to study our systems and we will continue to invest in more teams and new features. Nothing is more important to us than ensuring we are living up to our responsibility. We remain focused on maintaining that delicate balance which allows diverse voices to flourish on YouTube — including those that others will disagree with — while also protecting viewers, creators and the wider ecosystem from harmful content.

Read more: The Four Rs of Responsibility, Part 1: Removing harmful content


1Based on the 28-day average from 9/17/19 – 10/14/19, compared to when we first started taking action on this type of content in January 2019.

From the timeline:


July 27, 2015; https://youtube.googleblog.com/2015/07/youtube-comments.html

Sept 14, 2016; https://www.blog.google/outreach-initiatives/google-news-initiative/digital-news-initiative-introducing/

July 20, 2017; https://youtube.googleblog.com/2017/07/bringing-new-redirect-method-features.html

Feb 2, 2018; https://youtube.googleblog.com/2018/02/greater-transparency-for-users-around.html

July 9, 2018; https://youtube.googleblog.com/2018/07/building-better-news-experience-on.html

March 7, 2019; https://india.googleblog.com/2019/04/bringing-greater-transparency-and.html

June 3, 2019; https://youtube.googleblog.com/2019/06/an-update-on-our-efforts-to-protect.html

June 5, 2019; https://youtube.googleblog.com/2019/06/our-ongoing-work-to-tackle-hate.html

July 8, 2019; https://youtube-creators.googleblog.com/2019/08/preserving-openness-through-responsibility.html


YouTube Blog

Redbox will stop selling Disney movie codes as part of settlement

Disney's lawsuit against Redbox is over, and it's not great news for Redbox. The two sides have agreed to a settlement that will have Redbox stop the sale of movie download codes from Disney disc packs. Attorneys for Disney had accused Redbox of vi…
Engadget RSS Feed

The Four Rs of Responsibility, Part 1: Removing harmful content

Over the past several years, we’ve redoubled our efforts to live up to our responsibility while preserving the power of an open platform. Our work has been organized around four principles.

Over the next several months, we’ll provide more detail on the work supporting each of these principles. This first installment will focus on “Remove.” We’ve been removing harmful content since YouTube started, but our investment in this work has accelerated in recent years. Below is a snapshot of our most notable improvements since 2016. Because of this ongoing work, over the last 18 months we’ve reduced views on videos that are later removed for violating our policies by 80%, and we’re continuously working to reduce this number further.1

Developing policies for a global platform

Before we do the work of removing content that violates our policies, we have to make sure the line between what we remove and what we allow is drawn in the right place — with a goal of preserving free expression, while also protecting and promoting a vibrant community. To that end, we have a dedicated policy development team that systematically reviews all of our policies to ensure that they are current, keep our community safe, and do not stifle YouTube’s openness.

After reviewing a policy, we often discover that fundamental changes aren’t needed, but still uncover areas that are vague or confusing to the community. As a result, many updates are actually clarifications to our existing guidelines. For example, earlier this year we provided more detail about when we consider a “challenge” to be too dangerous for YouTube. Since 2018, we’ve made dozens of updates to our enforcement guidelines, many of them minor clarifications but some more substantive.

For particularly complex issues, we may spend several months developing a new policy. During this time we consult outside experts and YouTube creators to understand how our current policy is falling short, and consider regional differences to make sure proposed changes can be applied fairly around the world.

Our hate speech update represented one such fundamental shift in our policies. We spent months carefully developing the policy and working with our teams to create the necessary trainings and tools required to enforce it. The policy was launched in early June, and as our teams review and remove more content in line with the new policy, our machine detection will improve in tandem. Though it can take months for us to ramp up enforcement of a new policy, the profound impact of our hate speech policy update is already evident in the data released in this quarter’s Community Guidelines Enforcement Report.

The spikes in removal numbers are in part due to the removal of older comments, videos and channels that were previously permitted. In April 2019, we announced that we are also working to update our harassment policy, including creator-on-creator harassment. We’ll share our progress on this work in the coming months.

Using machines to flag bad content

Once we’ve defined a policy, we rely on a combination of people and technology to flag content for our review teams. We sometimes use hashes (or “digital fingerprints”) to catch copies of known violative content before they are ever made available to view. For some content, like child sexual abuse images (CSAI) and terrorist recruitment videos, we contribute to shared industry databases of hashes to increase the volume of content our machines can catch at upload.
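The hash-matching step can be pictured with a minimal sketch. Note the simplification: a cryptographic hash such as SHA-256 only catches byte-identical copies, whereas shared industry databases like those the post alludes to rely on more robust fingerprinting; the hash set below is a hypothetical in-memory stand-in:

```python
import hashlib

# Hypothetical stand-in for a shared database of known violative content.
# (This value is the SHA-256 digest of the bytes b"test".)
KNOWN_VIOLATIVE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute an exact-match 'digital fingerprint' of an upload."""
    return hashlib.sha256(data).hexdigest()

def is_known_violative(data: bytes) -> bool:
    """Check an upload against the database before it goes live."""
    return fingerprint(data) in KNOWN_VIOLATIVE_HASHES
```

An exact copy of known content matches immediately, while any byte-level change produces a different digest — which is why exact hashing is only the first line of defense.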

In 2017, we expanded our use of machine learning technology to help detect potentially violative content and send it for human review. Machine learning is well-suited to detect patterns, which helps us to find content similar (but not exactly the same) to other content we’ve already removed, even before it’s ever viewed. These systems are particularly effective at flagging content that often looks the same — such as spam or adult content. Machines also can help to flag hate speech and other violative content, but these categories are highly dependent on context and highlight the importance of human review to make nuanced decisions. Still, over 87% of the 9 million videos we removed in the second quarter of 2019 were first flagged by our automated systems.
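One way to picture how such a flagging pipeline might route uploads — a minimal sketch with hypothetical thresholds and queue names that are not drawn from the post:

```python
def route_upload(model_score: float, high: float = 0.95, low: float = 0.70) -> str:
    """Route an upload based on a classifier's violation score.
    Pattern-heavy categories (spam, adult content) can be acted on at
    high confidence; ambiguous, context-dependent cases (e.g. possible
    hate speech) are sent to human reviewers for a nuanced decision."""
    if model_score >= high:
        return "auto_flag"       # near-certain match to known patterns
    if model_score >= low:
        return "human_review"    # ambiguous: needs human judgment
    return "no_action"
```

The thresholds here are illustrative only; tuning them trades off reviewer workload against the risk of missed violations.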

We’re investing significantly in these automated detection systems, and our engineering teams continue to update and improve them month by month. For example, an update to our spam detection systems in the second quarter of 2019 led to a more than 50% increase in the number of channels we terminated for violating our spam policies.

Removing content before it’s widely viewed

We go to great lengths to make sure content that breaks our rules isn’t widely viewed, or even viewed at all, before it’s removed. As noted above, improvements in our automated flagging systems have helped us detect and review content even before it’s flagged by our community, and consequently more than 80% of those auto-flagged videos were removed before they received a single view in the second quarter of 2019.

We also recognize that the best way to quickly remove content is to anticipate problems before they emerge. In January of 2018 we launched our Intelligence Desk, a team that monitors the news, social media and user reports in order to detect new trends surrounding inappropriate content, and works to make sure our teams are prepared to address them before they can become a larger issue.

We’re determined to continue reducing exposure to videos that violate our policies. That’s why, across Google, we’ve tasked over 10,000 people with detecting, reviewing, and removing content that violates our guidelines.

For example, the nearly 30,000 videos we removed for hate speech over the last month generated just 3% of the views that knitting videos did over the same time period.

Last week we updated our Community Guidelines Enforcement Report, a quarterly report that provides additional insight into the amount of content we remove from YouTube, why it was removed, and how it was first detected. That report demonstrates how technology deployed over the last several years has helped us to remove harmful content from YouTube more quickly than ever before. It also highlights how human expertise is still a critical component of our enforcement efforts, as we work to develop thoughtful policies, review content with care, and responsibly deploy our machine learning technology.



1 From January 2018 – June 2019


From the timeline:

Nov 16, 2016; https://youtube.googleblog.com/2016/11/more-parental-controls-available-in.html

June 18, 2017; https://www.blog.google/around-the-globe/google-europe/four-steps-were-taking-today-fight-online-terror/

July 31, 2017; https://youtube.googleblog.com/2017/07/global-internet-forum-to-counter.html

Aug 1, 2017; https://youtube.googleblog.com/2017/08/an-update-on-our-commitment-to-fight.html

Dec 4, 2017; https://youtube.googleblog.com/2017/12/expanding-our-work-against-abuse-of-our.html

April 23, 2018; https://youtube.googleblog.com/2018/04/more-information-faster-removals-more.html

Dec 1, 2018; https://youtube.googleblog.com/2019/06/an-update-on-our-efforts-to-protect.html

Jan 15, 2019; https://support.google.com/youtube/thread/1063296?hl=en

Feb 19, 2019; https://youtube-creators.googleblog.com/2019/02/making-our-strikes-system-clear-and.html

Feb 28, 2019; https://youtube-creators.googleblog.com/2019/02/more-updates-on-our-actions-related-to.html

June 5, 2019; https://youtube.googleblog.com/2019/06/our-ongoing-work-to-tackle-hate.html

July 1, 2019; https://support.google.com/youtube/thread/8830320

Aug 21, 2019; https://support.google.com/youtube/thread/12506319?hl=en

Coming soon; https://youtube.googleblog.com/2019/06/taking-harder-look-at-harassment.html


YouTube Blog

The Morning After: Unlimited MoviePass part two

Hey, good morning! You look fabulous. Where were you when we defeated the robots? If you missed yesterday's StarCraft II stream, we'll fill you in on the details of DeepMind's latest gaming exploits. Also, MoviePass is ready to try unlimited tickets…
Engadget RSS Feed

Ben Heck’s Atari 800 handheld, part 2

Will Pie Face be defeated at last? Does the Atari 800 portable work? Do we get to see more soldering? Find out in this episode of The Ben Heck Show where Felix and Ben put the finishing touches to the custom printed circuit board and design a las…
Engadget RSS Feed

Samsung, OnePlus to be part of historic ‘Avengers: Infinity War’ promo campaign

With the third installment in the Avengers series from Marvel ready to hit the big screen later this month, advertisers are preparing to let loose their promotional campaigns tied into the latest Marvel Comics Universe theatrical release. According to sources, the advertising campaign budget for Avengers: Infinity War will be the biggest one ever for […]



TalkAndroid

Ben Heck’s smart bike, part 2

The team is on the road with the IoT on Wheels design challenge, trying their best to make their smart bike, well, smarter. Using STMicroelectronics Nucleo hardware with Bluetooth Low Energy means the device can pair with other devices, such as a…
Engadget RSS Feed

Google doubles down on the news part of its ‘News & Weather’ app

Folks looking for a quick look at the day's forecast and stories have always been able to turn to Google's News and Weather app for an overview — but apparently, it didn't offer enough. According to Google News' Anand Paka, users routinely hit the botto…
Engadget RSS Feed

Ben Heck’s Atari junk keyboard, part 2

We're not so sure about Ben and Atari making beautiful music together, though the Ben Heck Show team certainly builds good circuits. Previously, they took apart a keyboard and made a manually activated switch matrix to read the piano keys. Now…
Engadget RSS Feed

A NASA probe is headed to a nearby asteroid, and will bring part of it back to Earth

The OSIRIS-REx mission is heading to the asteroid Bennu, and in July 2020 will scoop up some asteroid dust to be returned to Earth three years later. The hope is to understand more about the formation of the solar system.


Cool Tech–Digital Trends

New Apple patent filing suggests wireless charging is still a part of the iPhone’s future

While Apple has repeatedly said that it isn’t sure wireless charging adds convenience for consumers, that’s not stopping internal development by any means. Could it finally arrive in iPhone 8?


Mobile–Digital Trends

More NFL Action Coming to Snapchat as Part of Exclusive Partnership

The NFL has extended its partnership with Snapchat to include additional Explorer footage. As part of the deal, even more official and user-created game clips will now be available on the video-sharing app.


Mobile–Digital Trends