Over the past few years, we’ve been investing in the policies, resources and products needed to live up to our responsibility and protect the YouTube community from harmful content. This work has focused on four pillars: removing violative content, raising up authoritative content, reducing the spread of borderline content and rewarding trusted creators. Thanks to these investments, videos that violate our policies are removed faster than ever and users are seeing less borderline content and harmful misinformation. As we do this, we’re partnering closely with lawmakers and civil society around the globe to limit the spread of violent extremist content online.
We review our policies on an ongoing basis to make sure we are drawing the line in the right place: In 2018 alone, we made more than 30 policy updates. One of the most complex and constantly evolving areas we deal with is hate speech. We’ve been taking a close look at our approach towards hateful content in consultation with dozens of experts in subjects like violent extremism, supremacism, civil rights, and free speech. Based on those learnings, we are making several updates:
YouTube has always had rules of the road, including a longstanding policy against hate speech. In 2017, we introduced a tougher stance towards videos with supremacist content, including limiting recommendations and features like comments and the ability to share the video. This step dramatically reduced views of these videos (by 80% on average). Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.
We recognize some of this content has value to researchers and NGOs looking to understand hate in order to combat it, and we are exploring options to make it available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events. We will begin enforcing this updated policy today; however, it will take time for our systems to fully ramp up and we’ll be gradually expanding coverage over the next several months.
In addition to removing videos that violate our policies, we also want to reduce the spread of content that comes right up to the line. In January, we piloted an update of our systems in the U.S. to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat. We’re looking to bring this updated system to more countries by the end of 2019. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the U.S. Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward. As we do this, we’ll also start raising up more authoritative content in recommendations, building on the changes we made to news last year. For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the “watch next” panel.
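The demote-borderline, boost-authoritative idea described above can be sketched as a simple re-ranking pass. To be clear, this is an illustrative sketch only: the `Video` fields, thresholds, and weights below are assumptions for explanation, not YouTube's actual system.

```python
# Hypothetical sketch of a "watch next" re-ranking step. The classifier
# scores, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    relevance: float         # base recommendation score
    borderline_score: float  # 0.0-1.0 from a hypothetical classifier
    authoritative: bool      # e.g. an established news channel

BORDERLINE_THRESHOLD = 0.8   # assumed classifier cutoff
DEMOTION_FACTOR = 0.1        # assumed demotion weight
AUTHORITATIVE_BOOST = 1.5    # assumed boost weight

def rank_watch_next(candidates, current_is_borderline):
    scored = []
    for v in candidates:
        score = v.relevance
        # Heavily demote candidates the classifier marks as borderline.
        if v.borderline_score >= BORDERLINE_THRESHOLD:
            score *= DEMOTION_FACTOR
        # When the current video comes close to the line, boost
        # authoritative sources in the "watch next" panel.
        if current_is_borderline and v.authoritative:
            score *= AUTHORITATIVE_BOOST
        scored.append((score, v))
    return [v for _, v in sorted(scored, key=lambda pair: -pair[0])]
```

In this toy version, an authoritative video with moderate relevance outranks a high-relevance borderline one whenever the current video is near the line.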
Finally, it’s critical that our monetization systems reward trusted creators who add value to YouTube. We have longstanding advertiser-friendly guidelines that prohibit ads from running on videos that include hateful content and we enforce these rigorously. And in order to protect our ecosystem of creators, advertisers and viewers, we tightened our advertising criteria in 2017. In the case of hate speech, we are strengthening enforcement of our existing YouTube Partner Program policies. Channels that repeatedly brush up against our hate speech policies will be suspended from the YouTube Partner program, meaning they can’t run ads on their channel or use other monetization features like Super Chat.
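The "repeatedly brushes up against" enforcement described above amounts to repeat-offense tracking with a suspension threshold. The sketch below is purely illustrative; the threshold value and class/method names are assumptions, not YouTube Partner Program internals.

```python
# Illustrative sketch of repeat-offense tracking for monetization.
# The threshold of 3 is an assumed value for demonstration only.
from collections import defaultdict

VIOLATION_THRESHOLD = 3  # assumed number of near-violations before suspension

class PartnerProgram:
    def __init__(self):
        self.near_violations = defaultdict(int)
        self.suspended = set()

    def record_near_violation(self, channel_id):
        self.near_violations[channel_id] += 1
        if self.near_violations[channel_id] >= VIOLATION_THRESHOLD:
            # Suspension removes ads and features like Super Chat.
            self.suspended.add(channel_id)

    def can_monetize(self, channel_id):
        return channel_id not in self.suspended
```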
The openness of YouTube’s platform has helped creativity and access to information thrive. It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence. We are committed to taking the steps needed to live up to this responsibility today, tomorrow and in the years to come.
— The YouTube Team
You might remember that a few years ago, viewers were getting frustrated with clickbaity videos with misleading titles and descriptions (“You won’t believe what happens next!”). We responded by updating our system to focus on viewer satisfaction instead of views, including measuring likes, dislikes, surveys, and time well spent, all while recommending clickbait videos less often. More recently, people told us they were getting too many similar recommendations, like seeing endless cookie videos after watching just one recipe for snickerdoodles. We now pull in recommendations from a wider set of topics—on any given day, more than 200 million videos are recommended on the homepage alone. In fact, in the last year alone, we’ve made hundreds of changes to improve the quality of recommendations for users on YouTube.
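Shifting the objective from raw views to viewer satisfaction can be illustrated with a toy scoring function. The signal names and weights below are assumptions chosen only to show the shape of the idea; the actual signals and their combination are far more involved.

```python
# A minimal sketch of a satisfaction-based objective: instead of counting
# clicks/views, combine likes, dislikes, survey responses, and time well
# spent. All weights and signal names are illustrative assumptions.
def satisfaction_score(signals):
    likes = signals.get("likes", 0)
    dislikes = signals.get("dislikes", 0)
    survey = signals.get("survey_satisfaction", 0.5)  # 0-1 from user surveys
    watch_frac = signals.get("watch_fraction", 0.0)   # fraction of video watched
    total_votes = likes + dislikes
    vote_ratio = likes / total_votes if total_votes else 0.5
    return 0.4 * survey + 0.4 * watch_frac + 0.2 * vote_ratio
```

Under a scheme like this, a clickbait video that earns a view but is quickly abandoned and disliked scores far lower than one viewers actually finish and rate well, even if both were clicked equally often.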
We’ll continue that work this year, including taking a closer look at how we can reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines. To that end, we’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.
While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community. To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube. As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.
This change relies on a combination of machine learning and real people. We work with human evaluators and experts from all over the United States to help train the machine learning systems that generate recommendations. These evaluators are trained using public guidelines and provide critical input on the quality of a video.
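The human-in-the-loop pattern above, where evaluator ratings become training labels, can be sketched with a simple aggregation step. This is a hedged illustration: the rating labels, majority-vote rule, and minimum-rater count are hypothetical, and the real evaluation guidelines are much richer.

```python
# Illustrative aggregation of human evaluator ratings into a single
# training label for a recommendation classifier. The vote rule and
# min_raters value are assumptions for demonstration.
def aggregate_ratings(ratings, min_raters=3):
    """Majority-vote evaluator ratings into one label.

    Returns None when too few evaluators rated the video to trust a label.
    """
    if len(ratings) < min_raters:
        return None
    borderline_votes = sum(1 for r in ratings if r == "borderline")
    return "borderline" if borderline_votes * 2 > len(ratings) else "ok"
```

Requiring multiple evaluators per video is one common way to smooth out individual disagreement before a label ever reaches the training set.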
This will be a gradual change and initially will only affect recommendations of a very small set of videos in the United States. Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube.
— The YouTube Team
As the CEO of YouTube, I’ve seen how our open platform has been a force for creativity, learning and access to information. I’ve seen how activists have used it to advocate for social change, mobilize protests, and document war crimes. I’ve seen how it serves as both an entertainment destination and a video library for the world. I’ve seen how it has expanded economic opportunity, allowing small businesses to market and sell their goods across borders. And I’ve seen how it has helped enlighten my children, giving them a bigger, broader understanding of our world and the billions who inhabit it.
But I’ve also seen up-close that there can be another, more troubling, side of YouTube’s openness. I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.
In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies.
Now, we are applying the lessons we’ve learned from our work fighting violent extremist content over the last year in order to tackle other problematic content. Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.
More people reviewing more content
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether. In the last few weeks we’ve used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF, and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
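The flag-then-review flow described above can be sketched as a priority queue: a classifier scores uploads, and anything above a threshold is queued for human reviewers, highest-risk first. The threshold and interface below are assumptions for illustration only.

```python
# Illustrative flag-then-review queue. A hypothetical classifier score
# determines whether a video is queued for human review; reviewers pull
# the highest-scoring (riskiest) item first. Threshold is assumed.
import heapq

class ReviewQueue:
    def __init__(self, flag_threshold=0.7):
        self.flag_threshold = flag_threshold
        self._heap = []

    def ingest(self, video_id, classifier_score):
        if classifier_score >= self.flag_threshold:
            # Negate the score so the riskiest video pops first
            # from Python's min-heap.
            heapq.heappush(self._heap, (-classifier_score, video_id))

    def next_for_review(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[1]
```

The human decision on each pulled item can then feed back as a training label, which is the loop the paragraph above describes.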
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
Tackling issues at scale
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review and we’ve seen tremendous progress.
Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech.
We understand that people want a clearer view of how we’re tackling problematic content. Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced. That’s why in 2018 we will be creating a regular report where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.
A new approach to advertising on YouTube
We’re also taking actions to protect advertisers and creators from inappropriate content. We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values. Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors.
We believe this requires a new approach to advertising on YouTube, carefully considering which channels and videos are eligible for advertising. We are planning to apply stricter criteria and conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads run only where they should. This will also help vetted creators see more stability around their revenue. It’s important we get this right for both advertisers and creators, and over the next few weeks, we’ll be speaking with both to hone this approach.
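The "stricter criteria plus manual curation" combination can be illustrated as a two-stage eligibility check: automated gates first, with a human curation pass layered on for sensitive cases. Every field name and threshold here is a hypothetical stand-in, not actual monetization policy.

```python
# Hypothetical two-stage ad-eligibility check: automated criteria first,
# then manual curation for sensitive categories. All fields and the
# subscriber bar are illustrative assumptions.
def ad_eligible(channel):
    automated_ok = (
        channel.get("subscriber_count", 0) >= 1000      # assumed bar
        and channel.get("strike_count", 0) == 0
        and not channel.get("flagged_for_hate", False)
    )
    if not automated_ok:
        return False
    # Sensitive verticals additionally require a human curation pass.
    if channel.get("sensitive_category", False):
        return channel.get("manually_vetted", False)
    return True
```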
We are taking these actions because it’s the right thing to do. Creators make incredible content that builds global fan bases. Fans come to YouTube to watch, share, and engage with this content. Advertisers, who want to reach those people, fund this creator economy. Each of these groups is essential to YouTube’s creative ecosystem—none can thrive on YouTube without the other—and all three deserve our best efforts.
As challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them. But no matter what challenges emerge, our commitment to combat them will be sustained and unwavering. We will take the steps necessary to protect our community and ensure that YouTube continues to be a place where creators, advertisers, and viewers can thrive.
Susan Wojcicki, CEO of YouTube