Posts Tagged: safety

The world’s leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies, including OpenAI, Microsoft, Google and Meta, have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by Thorn, a child-safety group, and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a future with generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and to remove it from social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, as well as for search engines, social media platforms, hosting companies and developers, to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid ones containing not only instances of CSAM but also adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by worsening the “haystack problem” — a reference to the amount of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.

This article originally appeared on Engadget at https://www.engadget.com/the-worlds-leading-ai-companies-pledge-to-protect-the-safety-of-children-online-213558797.html?src=rss

X names its third head of safety in less than two years

X has named a new head of safety nearly a year after the last executive in the position resigned. The company said Tuesday that it had promoted Kylie McRoberts to Head of Safety and hired Yale Cohen as Head of Brand Safety and Advertiser Solutions.

The two will have the unenviable task of leading X’s safety efforts, including its attempts to reassure advertisers that the platform doesn’t monetize hate speech or terrorist content. The company said earlier this year it planned to hire 100 new safety employees after previously cutting much of its safety staff.

Head of safety has been a particularly fraught position since Elon Musk took over the company previously known as Twitter. Musk has clashed with his safety leads before, and McRoberts is the third person to hold the title in less than two years. Yoel Roth resigned shortly after the disastrous rollout of Twitter Blue in 2022; he was replaced by Ella Irwin, who resigned last year after Musk publicly criticized employees for enforcing policies around misgendering.

Not much is known about McRoberts, but she is apparently an existing member of X’s safety team (her X account is currently private and a LinkedIn profile appears to have been recently deleted). “During her time at X, she has led initiatives to increase transparency in our moderation practices through labels, improve security with passkeys, as well as building out our new Safety Center of Excellence in Austin,” X said in a statement.

This article originally appeared on Engadget at https://www.engadget.com/x-names-its-third-head-of-safety-in-less-than-two-years-213004771.html?src=rss

Google Rolls Safety And Convenience Updates Out To Waze

Google’s Waze navigation app is getting a bunch of major updates that will make your experience with the app a lot more enjoyable.

Internal memo says Sam Altman’s firing wasn’t due to ‘malfeasance’ or OpenAI safety practices

An internal memo sent to OpenAI staff on Saturday after former CEO Sam Altman’s abrupt firing reiterates that “a breakdown in communication” led to the decision, not “malfeasance or anything related to our financial, business, safety, or security/privacy practices,” according to Axios and The New York Times. The memo obtained by both publications was sent to employees by OpenAI’s Chief Operating Officer Brad Lightcap.

Speculation has been nonstop since Altman was ousted unexpectedly as CEO on Friday and dropped from the company’s board of directors, with little concrete information from OpenAI itself to go on. In its announcement of the decision, the board said only that he was not “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” The board named Mira Murati, OpenAI’s Chief Technology Officer, as interim CEO.

In response, OpenAI’s now-former president, Greg Brockman, announced he was stepping down too, tweeting, “Sam and I are shocked and saddened by what the board did today.” Three senior researchers later resigned as well, according to The Information. Now, in another report, sources told The Information that Altman already has a “new venture” in the works, and he plans to bring Brockman and possibly others on with him. It’s as yet unclear if this venture is separate from Altman’s other known upcoming projects, including a purported collaboration with former Apple designer Jony Ive.

Numerous reports in the aftermath have attempted to provide an explanation for Altman’s firing, with some claiming there were concerns over the rapid development of the company’s AI products and, according to journalist Kara Swisher, its “profit driven direction.” In Saturday’s memo, per Axios, Lightcap wrote that the announcement “took us all by surprise,” and “we have had multiple conversations with the board to try to better understand the reasons and process behind their decision.”

The sudden shakeup could now have ramifications for the impending sale of OpenAI’s employee shares, valued at roughly $86 billion, The Information reported. In a cryptic tweet on Saturday, Altman quipped, “if i start going off, the openai board should go after me for the full value of my shares (sic).”

This article originally appeared on Engadget at https://www.engadget.com/internal-memo-says-sam-altmans-firing-wasnt-due-to-malfeasance-or-openai-safety-practices-205156164.html?src=rss


SpaceX workers face above-average injury rates as Musk prioritizes Mars over safety, report finds

A Reuters investigation into unsafe working conditions at SpaceX has uncovered more than 600 injuries going back to 2014 that have not been publicly reported until now. Current and former employees cited in the report blame CEO Elon Musk’s aggressive deadlines and hatred of bureaucracy, alleging his goal of getting humans to Mars “as fast as possible” has led the company to cut corners and eschew proper protocols.

Injury rates at some SpaceX facilities are much higher than the industry average of 0.8 injuries or illnesses per 100 workers, Reuters found. At its Brownsville, Texas location, the 2022 injury rate was 4.8 per 100 workers. At the Hawthorne, California manufacturing facility, it was 1.8. In McGregor, Texas, where the company conducts rocket tests, the injury rate was 2.7.

Employees have suffered broken bones, lacerations, crushed fingers, burns, electric shocks and serious head wounds — including one that blinded Brownsville worker Florentino Rios in 2021 and another that has left employee Francisco Cabada in a coma since January 2022. At SpaceX’s McGregor site, one worker, Lonnie LeBlanc, was killed in 2014 when wind knocked him off the trailer of an improperly loaded truck. Yet over the years, SpaceX has only paid meager fines as a result of its safety lapses. After LeBlanc’s death, the company settled with OSHA for $7,000, according to Reuters.

Reuters spoke to over two dozen current or former employees, as well as others “with knowledge of SpaceX safety practices.” One SpaceX ex-manager told Reuters that “workers take care of their safety themselves,” and others said employees were even told not to wear bright-colored safety gear because Musk does not like it. SpaceX has also repeatedly failed to submit injury data to regulators for much of its history, according to Reuters.

This article originally appeared on Engadget at https://www.engadget.com/spacex-workers-face-above-average-injury-rates-as-musk-prioritizes-mars-over-safety-report-finds-224235095.html?src=rss


Utah sues TikTok over child safety issues and its links to China

Utah has sued TikTok over child safety issues and the company’s China-based ownership, CNBC has reported. In the complaint, attorney general Sean Reyes called the app “an addictive product” and accused it of misleading users about its relationship with China-based parent company ByteDance. The state recently enacted some of the strictest social media laws in the country, requiring parental permission for teens to use social media. 

The lawsuit compares TikTok to a slot machine that provides “dopamine manipulation” triggered by swiping up on videos. That addictive nature is particularly harmful to the “not-yet-fully-developed” brains of young users and can create a dependence on the app, the state claims. It noted that the US Surgeon General has warned about the mental health harms of social media, and cited excessive TikTok usage based on the company’s own (redacted) figures.

“What these children (and their parents) do not know is that TikTok is lying to them about the safety of its app and exploiting them into checking and watching the app compulsively, no matter the terrible effects it has on their mental health, their physical development, their family, and their social life,” the complaint states. 

The lawsuit also delves into TikTok’s links to China. “To avoid scrutiny from its users (and regulators), TikTok has also misled Utah consumers about the degree to which TikTok remains enmeshed with and under the control of ByteDance, its China-based parent company.”

TikTok previously said that it has spent more than $1.5 billion on data security, and it has rejected allegations that it’s spying for the Chinese government. The company also recently opened a Transparency and Accountability Center in an effort to fend off regulators and potential bans.

The federal government has yet to take any concrete action against social media platforms, but states have been more active. Utah recently passed a law requiring parents to give permission before teens can create accounts on TikTok, Snap and other platforms. It also mandates curfew, parental control and age verification features. The state didn’t go as far as Montana, however, which outright banned the use of TikTok. Tomorrow, a judge will hear arguments in TikTok’s lawsuit seeking to overturn that ban — a case that could open the company up to more scrutiny and set a precedent across the US.

This article originally appeared on Engadget at https://www.engadget.com/utah-sues-tiktok-over-child-safety-issues-and-its-links-to-china-085516390.html?src=rss


Uber safety driver involved in fatal self-driving car crash pleads guilty

The Uber safety driver at the wheel during the first known fatal self-driving car crash involving a pedestrian has pleaded guilty to and been sentenced for an endangerment charge. Rafaela Vasquez will serve three years of probation for her role in the 2018 Tempe, Arizona collision that killed Elaine Herzberg while she was jaywalking at night. The sentence honors the prosecutors’ demands and is stiffer than the six months the defense team requested.

The prosecution maintained that Vasquez was ultimately responsible. While an autonomous car was involved, Vasquez was supposed to concentrate on the road and take over if necessary. The modified Volvo XC90 in the crash was operating at Level 3 autonomy and could be hands-free in limited conditions, but it required the driver to take over at a moment’s notice. The car’s system detected Herzberg but didn’t respond to her presence.

The defense case hinged partly on blaming Uber. Executives at the company thought it was just a matter of time before a crash occurred, according to supposedly leaked conversations. The National Transportation Safety Board’s (NTSB) collision findings also noted that Uber had disabled the emergency braking system on the XC90, so the vehicle couldn’t come to an abrupt stop.

Tempe police maintained that Vasquez had been watching a show on Hulu and wasn’t paying attention during the crash. Defense attorneys have insisted that Vasquez was paying attention and had only been momentarily distracted.

The plea and sentencing could influence how other courts handle similar cases. There’s long been a question of liability surrounding mostly driverless cars — is the human responsible for a crash, or is the manufacturer at fault? This suggests humans will still face penalties if they can take control, even if the punishment isn’t as stiff as it would be in a conventional crash.

Fatal crashes involving autonomy aren’t new. Tesla has been at least partly blamed for collisions that occurred while Full Self-Driving was active. The pedestrian case is unique, though, and looms in the background of more recent Level 4 (fully driverless in limited situations) offerings and tests from Waymo and GM’s Cruise. While the technology has evolved since 2018, there are still calls to freeze robotaxi rollouts over fears the machines could pose safety risks.

This article originally appeared on Engadget at https://www.engadget.com/uber-safety-driver-involved-in-fatal-self-driving-car-crash-pleads-guilty-212616187.html?src=rss

Bipartisan Senate group reintroduces a revised Kids Online Safety Act

US Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) reintroduced a bill today that would put the onus on social media companies to add online safeguards for children. The Kids Online Safety Act (KOSA) was first introduced last February (sponsored by the same pair) but never made it to the Senate floor after backlash from advocacy groups. The revamped legislation “provides specific tools to stop Big Tech companies from driving toxic content at kids and to hold them accountable for putting profits over safety,” said Blumenthal. It follows a separate bill introduced last month with a similar aim.

Like the original KOSA, the updated bill would require annual independent audits by “experts and academic researchers” to force regulation-averse social media companies to address the online dangers posed to children. However, the updated legislation attempts to address the concerns that led to its previous iteration’s downfall, namely that its overly broad nature could do more harm than good by requiring surveillance and censorship of young users. The EFF described the February 2022 bill as “a heavy-handed plan to force platforms to spy on young people” that “fails to properly distinguish between harmful and non-harmful content, leaving politically motivated state attorneys general with the power to define what harms children.” One of the primary fears is that states could use the flimsy definitions to ban content for political gain.

The rewritten bill adds new protections for services like the National Suicide Hotline, LGBTQ+ youth centers and substance-abuse organizations so that they aren’t unnecessarily harmed. In addition, it would make social platforms give minors options to safeguard their information, turn off addictive features and opt out of algorithmic recommendations. (Social platforms would have to enable the strongest settings by default.) It would also give parents “new controls to help support their children and identify harmful behaviors” while offering children “a dedicated channel to report harms” on the platform. Additionally, it would specifically ban the promotion of suicide, eating disorders, substance abuse, sexual exploitation and the use of “unlawful products for minors” like gambling, drugs and alcohol. Finally, it would require social media companies to provide “academic and public interest organizations” with data to help them research social media’s effects on the safety and well-being of minors.

The American Psychological Association, Common Sense Media and other advocacy groups support the updated bill. It has 26 cosponsors from both parties, including lawmakers ranging from Dick Durbin (D-IL) and Sheldon Whitehouse (D-RI) to Chuck Grassley (R-IA) and Lindsey Graham (R-SC). Blackburn told CNBC today that Senate Majority Leader Chuck Schumer (D-NY) is “a hundred percent behind this bill and efforts to protect kids online.”

Despite the Senators’ renewed optimism about passing the bill, some organizations believe it’s still too broad to avoid a negative net impact. “The changes made to the bill do not at all address our concerns,” Evan Greer, director of digital rights advocacy group Fight For the Future, said in an emailed statement to Engadget. “If Senator Blumenthal’s office had been willing to meet with us, we could have explained why. I can see where changes were made that attempt to address the concerns, but they fail to do so. Even with the new changes, this bill will allow extreme right-wing attorneys general to dictate what content platforms can recommend to younger users.”

The ACLU also opposes the resurrected bill. “KOSA’s core approach still threatens the privacy, security and free expression of both minors and adults by deputizing platforms of all stripes to police their users and censor their content under the guise of a ‘duty of care,’” ACLU Senior Policy Counsel Cody Venzke told CNBC. “To accomplish this, the bill would legitimize platforms’ already pervasive data collection to identify which users are minors when it should be seeking to curb those data abuses. Moreover, parental guidance in minors’ online lives is critical, but KOSA would mandate surveillance tools without regard to minors’ home situations or safety. KOSA would be a step backward in making the internet a safer place for children and minors.”

Blumenthal argues that the bill was “very purposely narrowed” to prevent harm. “I think we’ve met that kind of suggestion very directly and effectively,” he said at a press conference. “Obviously, our door remains open. We’re willing to hear and talk to other kinds of suggestions that are made. And we have talked to many of the groups that had great criticism and a number have actually dropped their opposition, as I think you’ll hear in response to today’s session. So I think our bill is clarified and improved in a way that meets some of the criticism. We’re not going to solve all of the problems of the world with a single bill. But we are making a measurable, very significant start.”

This article originally appeared on Engadget at https://www.engadget.com/bipartisan-senate-group-reintroduces-a-revised-kids-online-safety-act-212117992.html?src=rss

Tinder adds an incognito mode and more safety features

On Safer Internet Day (and with Valentine's Day fast approaching), Tinder is starting to roll out some new safety features and updates to some others. Users will now be able to take advantage of an incognito mode, which Tinder says is a "step up" from hiding your profile completely. Only folks that you Like will see you in their recommendations. That should give you more granular control over your visibility.

In addition, you can block profiles that pop up in your suggestions. So, that could mitigate some awkwardness if you spot an ex or someone else from your life, such as (shudder) a family member. This follows a feature that allows users to block others based on their phone number.

There's another new safety feature called long press reporting. If you receive an offensive message or unwanted picture, you can tap and hold to swiftly report it. Tinder says that it hopes this will encourage more people to report bad behavior so it can take action against users who are breaking the rules.

Meanwhile, Tinder has made some changes to features called "Are You Sure" (which asks folks to reconsider before sending a message with potentially harmful language) and "Does This Bother You," which encourages users to report inappropriate conversations. Tinder says the features will detect more language that it deems harmful or inappropriate, including hate speech as well as sexual harassment and exploitation. The company says that, since it added "Does This Bother You," it has received 46 percent more reports of messages containing harmful language.

Along with these updates, Tinder is rolling out a series of Healthy Dating Guides in collaboration with No More, a campaign to end domestic violence and sexual assault. The guides are designed to help users spot red flags and protect themselves at every stage of the relationship. Starting on February 8th, Tinder will also start running a campaign called Green Flags, which is about highlighting safety features and the steps people can take to safely date online.


OnStar now offers its safety features through a phone app

You no longer need to be in your car to use OnStar’s safety features. GM just launched an app for Android and iOS, Guardian, that brings OnStar to phones for the first time. So long as you have an active OnStar plan, as many as eight people can use t…

Tesla defends Autopilot in first quarterly safety report

Over the past year, Tesla has received a lot of flak for being involved in crashes and accidents while Autopilot was engaged. Back in March, a Model X crashed into a median barrier, claiming the life of an Apple engineer. A few months after that, a M…

Senators investigate safety procedures for autonomous cars

Just a day after the NTSB released its preliminary findings on the Uber crash in Arizona, senators Edward J. Markey and Richard Blumenthal began an investigation into safety protocols for driverless car testing. In a letter sent to major auto manufac…

Tesla’s key safety contact leaves for Waymo

Tesla's executive team isn't done with turmoil following the loss of its Autopilot chief and its engineering lead's sabbatical. The electric car maker's "primary technical contact" with American safety regulators, Matthew Schwall, has left the compa…

Apple supplier accused of chemical safety and overtime violations

Apple is still struggling to improve working conditions at its suppliers. Both China Labor Watch and Bloomberg report that Catcher, a key supplier for iPhone and MacBook casings, makes workers endure harsh safety conditions and unfair work terms in…

Samsung’s Gear watches will help with senior care and employee safety

Samsung is taking on the world of work via three new integrations with its Gear smartwatches. SoloProtect uses the Samsung Gear S3 to keep tabs on people who work alone, like real estate agents and home healthcare workers, while Reemo integrates with…

Samsung and U.S. Consumer Product Safety Commission preparing official Galaxy Note 7 recall

Samsung has had quite a troubling few weeks, as its highly praised Galaxy Note 7 was hit with serious battery issues that caused many devices to essentially go up in flames while plugged in. There has been a voluntary device replacement program going on, but now the U.S. Consumer Product Safety Commission has […]


New smart gun technology may help with gun safety in the US

Following President Obama’s executive action announcement this past Tuesday, renewed attention is being paid to the notion of “smart guns,” with newly developed technology taking center stage.
