Posts Tagged: policy

Twitter updates violent speech policy to ban ‘wishes of harm’

Twitter is once again tightening its rules around what users are permitted to say on the platform. The company introduced an updated “violent speech” policy, which contains some notable additions compared with previous versions of the rules.

Interestingly, the new policy prohibits users from expressing “wishes of harm” and similar sentiments. “This includes (but is not limited to) hoping for others to die, suffer illnesses, tragic incidents, or experience other physically harmful consequences,” the rules state. That’s a reversal from Twitter’s previous policy, which explicitly said that “statements that express a wish or hope that someone experiences physical harm” were not against the company’s rules.

“Statements that express a wish or hope that someone experiences physical harm, making vague or indirect threats, or threatening actions that are unlikely to cause serious or lasting injury are not actionable under this policy,” Twitter’s previous policy stated, according to the Wayback Machine.

That change isn't the only addition to the policy. Twitter’s rules now also explicitly protect “infrastructure that is essential to daily, civic, or business activities” from threats of damage. From the rules:

You may not threaten to inflict physical harm on others, which includes (but is not limited to) threatening to kill, torture, sexually assault, or otherwise hurt someone. This also includes threatening to damage civilian homes and shelters, or infrastructure that is essential to daily, civic, or business activities.

These may not seem like particularly eyebrow-raising changes, but they are notable given Elon Musk’s previous statements about how speech should be handled on Twitter. Prior to taking over the company, the Tesla CEO stated that his preference would be to allow all speech that is legal. “I think we would want to err on the side of, if in doubt, let the speech exist,” he said at the time.

It’s also not the first time Twitter’s rules have become more restrictive since Musk’s takeover. The company’s rules around doxxing changed following his dustup with the (now suspended) @elonjet account, which shared the whereabouts of Musk’s private jet.

Twitter didn’t explain its rationale for the changes, but noted in a series of tweets that it may suspend accounts breaking the rules or force them to delete the tweets in question. The company no longer has a communications team to respond to requests for comment.


Twitter conveniently reveals a location sharing policy amid Elonjet controversy

In November, as an example of his commitment to free speech, Elon Musk promised that he wouldn't ban an account that tracked his private jet despite claiming it was a "direct personal safety risk." Today, that account was suspended. Then restored. Then suspended again. It’s not yet clear what the future holds for @ElonJet, but its fate is probably tied to a new set of rules from Twitter Safety about how it handles accounts sharing location information for other people.

According to a series of tweets outlining the new policy, sharing the live location of another person is now prohibited unless it is related to a "public engagement or event," like a concert or a political event. "When someone shares an individual's live location on Twitter, there is an increased risk of physical harm," the announcement reads. "Moving forward, we'll remove Tweets that share this information, and accounts dedicated to sharing someone else's live location will be suspended." The thread goes on to clarify that these rules only apply to the location of "someone else." You can still Tweet your own whereabouts.

Historical location information is allowed, however, so long as "a reasonable time has elapsed, so that the individual is no longer at risk for physical harm." That part of the policy could leave room for an account like @ElonJet — and while the account was briefly restored this afternoon, at the time of this writing it is once again suspended, as are the personal accounts of Jack Sweeney, the college student who runs @ElonJet. Musk has also said that "legal action" would be taken against Sweeney and "organizations who supported harm to my family" following a recent incident with a stalker and the billionaire's son.

UPDATE 12/14 5:08PM: Added a statement from Elon Musk that legal action would be taken against Sweeney.


San Francisco reverses approval of killer robot policy

In late November, San Francisco's Board of Supervisors approved a proposal that would allow the city's police force to use remote-controlled robots as a deadly force option when faced with violent or armed suspects. The supervisors voted 8-to-3 in favor of making it a new policy despite opposition by civil rights groups, but now they seem to have had a change of heart. During the second of two required votes before a policy can be sent to the mayor's office for final approval, the board voted 8-to-3 to explicitly ban the use of lethal force by police robots. As the San Francisco Chronicle notes, this about-face is pretty unusual, as the board's second votes are typically just formalities that echo the results of the first.

The San Francisco Police Department made the proposal after a law came into effect requiring California officials to define the authorized uses of their military-grade equipment. It would have allowed cops to equip robots with explosives "to contact, incapacitate, or disorient violent, armed, or dangerous suspects." Authorities could only use the robots for lethal force after exhausting all other options, and a high-ranking official would have to approve their deployment. However, critics are concerned that the machines could be abused.

Dean Preston, one of the supervisors who opposed the use of robots as a deadly force option, said the policy would "place Black and brown people in disproportionate danger of harm or death." In a statement following the board's second vote, Preston said: "There have been more killings at the hands of police than any other year on record nationwide. We should be working on ways to decrease the use of force by local law enforcement, not giving them new tools to kill people."

While the supervisors voted to ban the use of lethal force by police robots — for now, anyway — they also sent the original policy proposing the use of killer robots back for review. The board's Rules Committee could now further amend it to impose stricter rules on the use of bomb-equipped robots, or it could scrap the old proposal altogether.


Meta will close a loophole in its doxxing policy in response to the Oversight Board

Meta has agreed to change some of its rules around doxxing in response to recommendations from the Oversight Board. The company had first asked the Oversight Board to help shape its rules last June, saying the policy was “significant and difficult.” The board followed up with 17 recommendations for the company in February, which Meta has now weighed in on.

Unlike decisions around whether specific posts should be taken down or left up, Meta is free to completely disregard policy proposals from the Oversight Board, but it is required to respond to each recommendation individually.

One of the most notable changes is that Meta agreed to end an exception to its existing rules that allowed users to post private residential information if it was “publicly available” elsewhere. The Oversight Board had pointed out that there was a significant difference between obtaining data from a public records request and a viral social media post.

In its response Friday, Meta agreed to remove the exception from its policy. “As the board notes in this recommendation, removing the exception for ‘publicly available’ private residential information may limit the availability of this information on Facebook and Instagram when it is still publicly available elsewhere,” the company wrote. “However, we recognize that implementing this recommendation can strengthen privacy protections on our platforms.” Meta added that the policy change would be implemented “by the end of the year.”

While the company ended one exception, it agreed to relax its policy on another issue. Meta said users would be able to share photos of the exterior of private homes “when the property depicted is the focus of the news story, except when shared in the context of organizing protests against the resident.” Likewise, the company also agreed that it would allow users to share addresses of “high ranking” government officials if the property is a publicly owned official residence, like those used by heads of state and ambassadors.

The policy changes could have a significant impact for people facing harassment, while also allowing some information to be shared in the context of news stories or protests against elected officials.

The board had also recommended Meta revamp the way that privacy violations are reported by users and how reports are handled internally. On the reporting front, Meta said it has already started experimenting with a simpler method for reporting privacy intrusions. Previously, users had to “click through two menus” and manually search for “privacy violation,” but now the option will appear without the extra search. Meta said it will have results from the experiment “later this month,” when it will decide whether to make the change permanent.

Notably, Meta declined to make another change that could make it easier for doxxing victims to get help more quickly. The company said that it would not act on a recommendation that it “create a specific channel of communications for victims of doxing” regardless of whether they are Facebook users. Meta noted that it’s already piloting some live chat help features, but said it “cannot commit to building a doxing-specific channel.”

Meta was also non-committal on a board recommendation that doxxing should be categorized as a “severe” violation resulting in a temporary suspension. The company said it was “assessing the feasibility” of the suggestion and “exploring ways to incorporate elements of this recommendation.”

In addition to the substance of the policy changes, Meta’s response to the Oversight Board in this case is notable because it represents the first time the company had asked for a policy advisory opinion, received recommendations and issued a response. Typically, the board weighs in on specific moderation decisions, which can then impact the underlying policies. But Meta can also ask for help shaping broader rules, like it did with doxxing. The company has also asked for help in creating rules around its controversial “cross check” system.

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics

Sens. Sanders and Warren urge investigation into Amazon’s ‘no-fault’ attendance policy

A group of Democratic lawmakers led by Sens. Elizabeth Warren (D-MA) and Bernie Sanders (I-VT) want regulators to take a closer look at Amazon’s points-based attendance policy, which they believe may be punishing workers for taking legally protected time off. First reported by Vice, the letter to the Department of Labor and Equal Employment Opportunity Commission focuses on Amazon’s “no-fault” approach to absences, which adds points every time an employee misses work without giving advance notice, regardless of the reason. If workers reach a certain number of points, they are automatically reviewed for termination.

Under the company’s attendance policy, an employee whose child has suddenly fallen ill or who suffers a medical emergency would still be penalized. Employees who don’t report absences at least 16 hours before the start of a shift receive two points on their record. If they give notice less than two hours before a shift, they receive two points and an “absence submission infraction.” If workers receive three absence submission infractions and eight attendance points, Amazon will consider firing them.
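To make those thresholds concrete, here is a minimal Python sketch of the rules as described above. The function names are hypothetical, and the assumption that absences reported 16 or more hours in advance carry no penalty is an inference; the article only spells out the penalties for shorter notice.

def penalty_for_absence(hours_notice: float) -> tuple[int, int]:
    """Points and infractions added for one unexcused absence, per the rules described above."""
    if hours_notice < 2:
        return 2, 1  # two points plus an "absence submission infraction"
    if hours_notice < 16:
        return 2, 0  # two points, no infraction
    return 0, 0      # 16+ hours of notice: no penalty is described in the article (assumption)

def flagged_for_termination_review(points: int, infractions: int) -> bool:
    """The threshold at which, per the letter, Amazon considers firing a worker."""
    return points >= 8 and infractions >= 3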

Lawmakers believe that Amazon’s attendance policy could violate current laws that allow workers to take sick, family, medical and pregnancy leave without advance notice. For example, the Family and Medical Leave Act (FMLA) guarantees eligible workers unpaid leave for a variety of circumstances, including pregnancy or the need to take care of a sick family member.

“We field numerous calls from Amazon employees; while many workers know about Amazon’s punitive attendance policies, they describe never receiving information about the federal, state, and local laws that entitle them to legally protected time off—much less understanding how such laws apply in practice in their own lives,” the labor rights group A Better Balance noted in a letter to Congress.

Other companies with "no-fault" attendance policies have run into legal troubles in the past. Back in 2011, Verizon was ordered to pay $20 million after the EEOC found that the company's no-fault attendance policy made no exceptions for disabled workers.

Many warehouse workers have complained that Amazon neglected to inform them of their rights under the FMLA or disability laws. The company has had a poor track record with how it treats workers at its many warehouses and fulfillment centers. Workers at a number of warehouses are currently pushing to unionize in response to poor working conditions at the e-commerce giant.


Twitter updates its ‘Hacked Materials’ policy after NY Post controversy

In response to a New York Post article this week about Hunter Biden that used emails of dubious sourcing, Twitter blocked links to it, eventually citing the company’s existing policies around hacked materials. These policies have come under scrutiny…

Microsoft says Apple’s game streaming policy will lead to ‘a bad experience’

Earlier today, Apple revised its App Store guidelines to give companies such as Microsoft and Google a way to offer their video game streaming platforms on iOS, but did so with a major caveat. Apple said those companies could release catalog-style ap…

An update to our harassment policy

Over the last several years we have worked to improve the way we manage content on YouTube by quickly removing it when it violates our Community Guidelines, reducing the spread of borderline content, raising up authoritative voices when people are looking for breaking news and information, and rewarding trusted creators and artists that make YouTube a special place. Today we are announcing a series of policy and product changes that update how we tackle harassment on YouTube. We systematically review all our policies to make sure the line between what we remove and what we allow is drawn in the right place, and recognized earlier this year that for harassment, there is more we can do to protect our creators and community.

Harassment hurts our community by making people less inclined to share their opinions and engage with each other. We heard this time and again from creators, including those who met with us during the development of this policy update. We also met with a number of experts who shared their perspective and informed our process, from organizations that study online bullying or advocate on behalf of journalists, to free speech proponents and policy organizations from all sides of the political spectrum.

We remain committed to our openness as a platform and to ensuring that spirited debate and a vigorous exchange of ideas continue to thrive here. However, we will not tolerate harassment and we believe the steps outlined below will contribute to our mission by making YouTube a better place for anyone to share their story or opinion.

A stronger stance against threats and personal attacks

We’ve always removed videos that explicitly threaten someone, reveal confidential personal information, or encourage people to harass someone else. Moving forward, our policies will go a step further and not only prohibit explicit threats, but also veiled or implied threats. This includes content simulating violence toward an individual or language suggesting physical violence may occur. No individual should be subject to harassment that suggests violence.

Beyond threatening someone, there is also demeaning language that goes too far. To establish consistent criteria for what type of content is not allowed on YouTube, we’re building upon the framework we use for our hate speech policy. We will no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation. This applies to everyone, from private individuals, to YouTube creators, to public officials.

Consequences for a pattern of harassing behavior

Something we heard from our creators is that harassment sometimes takes the shape of a pattern of repeated behavior across multiple videos or comments, even if any individual video doesn’t cross our policy line. To address this, we’re tightening our policies for the YouTube Partner Program (YPP) to get even tougher on those who engage in harassing behavior and to ensure we reward only trusted creators. Channels that repeatedly brush up against our harassment policy will be suspended from YPP, eliminating their ability to make money on YouTube. We may also remove content from channels if they repeatedly harass someone. If this behavior continues, we’ll take more severe action including issuing strikes or terminating a channel altogether.

Addressing toxic comments

We know that the comment section is an important place for fans to engage with creators and each other. At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a chilling effect on the entire conversation.

To combat this, we remove comments that clearly violate our policies – over 16 million in the third quarter of this year, specifically due to harassment. The policy updates we’ve outlined above will also apply to comments, so we expect this number to increase in future quarters.

Beyond comments that we remove, we also empower creators to further shape the conversation on their channels and have a variety of tools that help. When we’re not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it’s posted on their channel. Results among early adopters were promising – channels that enabled the feature saw a 75% reduction in user flags on comments. Earlier this year, we began to turn this setting on by default for most creators.

We’ve continued to fine-tune our systems to make sure we catch truly toxic comments, not just anything that’s negative or critical, and feedback from creators has been positive. Last week we began turning this feature on by default for YouTube’s largest channels with the site’s most active comment sections and will roll it out to most channels by the end of the year. To be clear, creators can opt out, and if they choose to leave the feature enabled they still have ultimate control over which held comments can appear on their videos. Alternatively, creators can also ignore held comments altogether if they prefer.

All of these updates represent another step towards making sure we protect the YouTube community. We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we’ve made the wrong call on a video.

As we make these changes, it’s vitally important that YouTube remain a place where people can express a broad range of ideas, and we’ll continue to protect discussion on matters of public interest and artistic expression. We also believe these discussions can be had in ways that invite participation, and never make someone fear for their safety. We’re committed to continue revisiting our policies regularly to ensure that they are preserving the magic of YouTube, while also living up to the expectations of our community.

— Matt Halprin, Vice President, Global Head of Trust & Safety



Maintaining credibility and consistency on YouTube: Revisions to YouTube Music Charts and 24-hour record debut policy

From “American Bandstand” to “TRL,” every generation naturally finds its own barometer to measure the hottest songs and artists of the moment. For this generation, it’s YouTube. There is simply no better current measure of the world’s music listening than YouTube. Every day, we strive to showcase and celebrate the hottest artists, songs and music videos from around the world.

Today, we’re sharing some important changes made to YouTube Music Charts, the go-to destination to see what’s popular, what’s rising and trending both locally and globally on YouTube, and updates to how we determine videos that are eligible for 24-hour record debuts on YouTube.

YouTube Music Charts have become an indispensable source for the industry and the most accurate place for measuring the popularity of music listening behavior happening on the world’s largest music platform. In an effort to provide more transparency to the industry and align with the policies of official charting companies such as Billboard and Nielsen, we are no longer counting paid advertising views on YouTube in the YouTube Music Charts calculation. Artists will now be ranked based on view counts from organic plays.

Over the last few years, fans, artists, and their teams have touted the number of views a video receives on YouTube within the first 24 hours as the definitive representation of its instant cultural impact. It’s a great honor and one we take very seriously. As we look to maintain consistency and credibility across our platform, we’ve made some necessary revisions to our methodology for reporting 24-hour record debuts.

Our goal is to ensure YouTube remains a place where all artists are accurately recognized and celebrated for achieving success and milestones. Videos eligible for YouTube’s 24-hour record debuts are those with the highest views from organic sources within the first 24 hours of the video’s public release. This includes direct links to the video, search results, external sites that embed the video and YouTube features like the homepage, watch next and Trending. Video advertising is an effective way to reach specific audiences with a song debut, but paid advertising views on YouTube will no longer be considered when looking at a 24-hour record debut. The changes will not impact YouTube’s existing 24-hour record debut holders.
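As a rough illustration of that counting rule (not YouTube's actual implementation), a tally that only considers organic sources within the first 24 hours of release might look like the following Python sketch; the event fields and source labels are hypothetical stand-ins for the organic sources listed above.

from datetime import datetime, timedelta

# Illustrative stand-ins for the organic sources listed above; not YouTube's real labels.
ORGANIC_SOURCES = {"direct_link", "search", "external_embed", "homepage", "watch_next", "trending"}

def debut_view_count(view_events, release_time: datetime) -> int:
    """Count views from organic sources in the first 24 hours after public release,
    ignoring paid advertising views, per the policy described above."""
    cutoff = release_time + timedelta(hours=24)
    return sum(
        1
        for event in view_events
        if event["source"] in ORGANIC_SOURCES
        and release_time <= event["timestamp"] < cutoff
    )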

Staying true to YouTube’s overall mission of giving everyone a voice and showing them the world, we want to celebrate all artist achievements on YouTube as determined by their global fans. It’s the artists and fans who have made YouTube the best and most accurate measure of the world’s listening tastes, and we intend to keep it that way.

Additional information on how YouTube Music Charts are calculated can be found here and additional details about YouTube Views and ads can be found here.



Epic’s updated game store refund policy matches Steam

The Epic Games Store wasn't all that refund-friendly on launch. You could only ask for two refunds in an entire year (albeit after unlimited hours of play), and you had to submit details like your IP address in a support ticket to have a hope of gett…

Amazon’s discount policy is being investigated by the FTC

Amazon's purchase of Whole Foods requires a wink of blessing from the Federal Trade Commission, but that might not be a done deal. Reuters is reporting that the FTC is taking a particular interest in how Jeff Bezos' online retailer prices, and discou…

EU advocacy group expresses concern over WhatsApp’s data-sharing policy

WhatsApp’s data-collection policy, which permits the messaging service to share user info with parent company Facebook, drew a sharp rebuke from European Union regulators concerned over its lack of transparency.


Google hires a former White House employee to help with policy issues

Say you're a large company with a few issues: you need Congress to pass a few bills, lean things in your favour, or just grease a few palms. How do you do it? Easy: you hire someone who has worked at the White House before and knows all the ins and outs of the trade.

That is exactly what Google has done by hiring former White House deputy national security adviser Caroline Atkinson. She will now be working for Google to lead its public policy efforts. This means she will be handling some very large and tough legal issues Google has been facing, such as antitrust allegations, Europe's right-to-be-forgotten laws, general censorship, and more.

Atkinson is a pretty great hire on paper. Besides the White House, she has also worked as a journalist and as an official at the International Monetary Fund. She shouldn't have too much trouble handling Google's issues.

Source: New York Times


