Posts Tagged: Trusted

Meta whistleblower tells Senate the company ‘cannot be trusted with our children’

Another Meta whistleblower has testified before Congress regarding safety issues on the company’s platforms. On the same day that Frances Haugen told Congress in 2021 how Meta could fix some of its safety problems, Arturo Béjar, a former director of engineering for Protect and Care at Facebook, sent CEO Mark Zuckerberg and other executives an email regarding the harms that young people may face while using the company’s products.

Two years later, Béjar was the sole witness in a Senate Judiciary subcommittee hearing titled “Social Media and the Teen Mental Health Crisis.” In his testimony, Béjar said he was subpoenaed earlier this year over the emails he had sent to Meta higher-ups, and that he realized that, in the years since he sent them, nothing had changed at the company.

“Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform,” Béjar told the Subcommittee on Privacy, Technology and the Law in prepared remarks. “And they have yet to establish a goal for actually reducing those harms and protecting children. It’s time that the public and parents understand the true level of harm posed by these ‘products’ and it’s time that young users have the tools to report and suppress online abuse.”

Béjar was an engineering director at Meta between 2009 and 2015, during which time he was responsible for protecting Facebook users. He supported a team that worked on “bullying tools for teens, suicide prevention, child safety and other difficult moments that people go through,” according to his LinkedIn profile.

He testified that he initially left Meta feeling “good that we had built numerous systems that made using our products easier and safer.” However, he said that, since they were 14, his daughter and her friends “repeatedly faced unwanted sexual advances, misogyny and harassment” on Instagram. According to The Wall Street Journal, which first reported on Béjar’s claims, he stated that Meta’s systems typically ignored reports they made or responded to say that the harassment they faced didn’t break the rules.

Those issues prompted him to return to Meta in 2019, where he worked with Instagram’s well-being team. “It was not a good experience. Almost all of the work that I and my colleagues had done during my earlier stint at Facebook through 2015 was gone,” Béjar said in his testimony. “The tools we had built for teenagers to get support when they were getting bullied or harassed were no longer available to them. People at the company had little or no memory of the lessons we had learned earlier.”

Béjar claimed that Instagram and internal research teams gathered data showing that younger teens dealt with “great distress and abuse.” However, “senior management was externally reporting different data that grossly understated the frequency of harm experienced by users,” he told senators.

In a 2021 email to Zuckerberg and other executives laying out some of his concerns, Béjar wrote that his then-16-year-old daughter uploaded a car-related post to Instagram only for a commenter to tell her to “get back to the kitchen.” Béjar said his daughter found this upsetting. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny,” Béjar wrote. “I don’t think policy/reporting or having more content review are the solutions.”

Béjar said that along with his daughter’s experiences with the app, he cited data from a research team indicating that 13 percent of users aged between 13 and 15 reported that they received unwanted sexual advances on Instagram within the previous seven days. While former chief operating officer Sheryl Sandberg offered sympathy toward his daughter for her negative experiences and Instagram head Adam Mosseri asked to set up a meeting, according to Béjar, Zuckerberg never responded to the email.

“That was unusual,” Béjar said in his testimony. “It might have happened, but I don’t recall Mark ever not responding to me previously in numerous communications, either by email or by asking for an in-person meeting.”

Béjar told the Associated Press that Meta has to change its approach to moderating its platforms. This, according to Béjar, would require the company to place a greater onus on tackling harassment, unwanted sexual advances and other issues that don’t necessarily break the company’s existing rules.

He noted, for instance, that teens should be able to tell Instagram that they don’t want to receive crude sexual messages, even if those don’t violate the app’s current policies. Béjar claims it would be easy for Meta to implement a feature through which teens could flag sexual advances that were made to them. “I believe that the reason that they’re not doing this is because there’s no transparency about the harms that teenagers are experiencing on Instagram,” he told the BBC.

Béjar laid out several other steps that Meta could take to reduce harm users face on its platform that “do not require significant investments by the platforms in people to review content or in technical infrastructure.” He added that he believes adopting such measures (which primarily focus on improving safety tools and getting more feedback from users who have experienced harm) would not severely impact the revenues of Meta or other companies that adopt them. “These reforms are not designed to punish companies, but to help teenagers,” he told the subcommittee. “And over time, they will create a safer environment.”

“My experience, after sending that email and seeing what happened afterwards, is that they knew, there were things they could do about it, they chose not to do them and we cannot trust them with our children,” Béjar said during the hearing. “It’s time for Congress to act. The evidence, I believe, is overwhelming.”

“Countless people inside and outside of Meta are working on how to help keep young people safe online,” Meta spokesman Andy Stone told The Washington Post on Tuesday. “Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online. All of this work continues.”

Béjar hopes his testimony will help spur Congress to “pass the legislation that they’ve been working on” regarding the online safety of younger users. Two years ago, Haugen disclosed internal Facebook research indicating that Instagram was “harmful for a sizable percentage of teens.” Growing scrutiny led Meta to halt work on a version of Instagram for kids.

Since Haugen’s testimony, Congress has made some efforts to tackle online safety issues for kids, but those have stuttered. The Kids Online Safety Act (KOSA) twice advanced from a Senate committee (in the previous Congress and earlier this year), but it hasn’t reached a floor vote and there’s no companion bill in the House. Among other things, the bill seeks to give kids under 16 the ability to switch off “addictive” features and algorithm-based recommendations, as well as more protections for their data. Similar bills have stalled in Congress.

Last month, attorneys general from 41 states and the District of Columbia sued Meta over alleged harms it caused to young users. “Meta designed and deployed harmful and psychologically manipulative product features to induce young users’ compulsive and extended Platform use, while falsely assuring the public that its features were safe and suitable for young users,” according to the lawsuit. Béjar said he consulted with the attorneys general and provided them with documents to help their case.

“I’m very hopeful that your testimony, added to the lawsuit that’s been brought by state attorneys general across the country … added to the interest that I think is evidenced by the turnout of our subcommittee today, will enable us to get the Kids Online Safety Act across the finish line,” subcommittee chair Sen. Richard Blumenthal (D-CT) told Béjar. Blumenthal, one of KOSA’s original sponsors, expressed hope that other legislation “that can finally break the straitjacket that Big Tech has imposed on us” will be enacted into law.

Over the last few years and amid the rise of TikTok, Meta has once again been focusing on bringing younger users into its ecosystem, with Zuckerberg stating in 2021 (just a couple of weeks after Haugen’s testimony) that the company would refocus its “teams to make serving young adults their North Star rather than optimizing for the larger number of older people.” Recently, the company lowered the minimum age for using its Meta Quest VR headsets to 10 through the use of parent-controlled accounts.


Facebook’s ‘trusted’ news source survey is two simple questions

When Facebook said it would rank the trustworthiness of sources in your News Feed based on community feedback, it raised questions as to what that survey would look like. Well, we know now… and it's not terribly complicated. BuzzFeed has obtained…

Google’s new Trusted Contacts app helps keep you safe, even if your phone is off

Google has just released an app called Trusted Contacts that lets you share your location with your emergency contacts in a single tap. The company hopes it will help keep users safe.


Growing our Trusted Flagger program into YouTube Heroes

YouTube has always allowed people to report content they believe violates our Community Guidelines, and we often hear questions about what happens to a video after you’ve flagged it. When a flag is received, the reported content is always reviewed by YouTube before being removed. We have internal teams from around the world who carefully evaluate reports 24 hours a day, seven days a week, 365 days a year; these teams remove content that violates our policies and take care to leave content up if it hasn’t crossed the line.

Back in 2012, we noticed that certain people were particularly active in reporting Community Guidelines violations with an extraordinarily high rate of accuracy. From this insight, the Trusted Flagger program was born to provide more robust tools for people or organizations who are particularly interested in and effective at notifying us of content that violates our Community Guidelines.

As part of this program, Trusted Flaggers receive access to a tool that allows for reporting multiple videos at the same time. Once content is flagged, our trained teams review it to determine whether the flagged videos should be removed. Our Trusted Flaggers’ results speak for themselves: their reports of content that violates our Community Guidelines are accurate over 90% of the time, three times more accurate than those of the average flagger.

Given the success of the Trusted Flagger program, we want to do more to empower the people who contribute to YouTube in other ways. That’s why we’re introducing YouTube Heroes, a program designed to recognize and support the global community of people who consistently help make YouTube a better experience for everyone. These “Heroes” do this in big and small ways by adding captions and subtitles to videos, reporting videos that violate our Community Guidelines, or sharing their knowledge with others in our help forums.

The program is now available to a select group of contributors from across the globe who have histories of high quality community contributions. People who are interested in joining the program can express interest here and we will gradually admit other top contributors into the program. 

YouTube Heroes will have access to a dedicated YouTube Heroes community site that is separate from the main YouTube site, where participants can learn from one another. Through the program, participants will be able to earn points and unlock rewards to help them reach the next level. For example, Level 2 Heroes get access to training through exclusive workshops and Hero hangouts, while Level 3 Heroes who have demonstrated their proficiency will be able to flag multiple videos at a time (something Trusted Flaggers can already do) and help moderate content strictly within the YouTube Heroes Community site.

A sneak peek at the new YouTube Heroes Community site

YouTube Heroes will also be able to track their own contributions and see their overall impact. They can easily find out when a video they reported has been removed by YouTube for violation of our policies, a subtitle they contributed has been approved by the creator, or a help forum answer they’ve posted has been marked as best answer.

A look at how YouTube Heroes can track their contributions

To kick the program off, we brought our first class of YouTube Heroes together this week at a two-day summit at YouTube HQ.

Meet our first class of YouTube Heroes!

It’s early days for the program and we will continue to roll out the details of YouTube Heroes and the dedicated YouTube Heroes Community site over the coming months. We’re excited to learn through this initial launch and to continue improving the program over time, as we’ve done with our Trusted Flagger program. We appreciate everything our community does to make YouTube vibrant and diverse, so on behalf of the team: thank you!

Jen Carter, Product Manager, YouTube Heroes, recently watched “#voteIRL – Use Your Voice. Vote in Real Life.”

