Posts Tagged: pledge

The world’s leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies, including OpenAI, Microsoft, Google and Meta, have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by the child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a feature of generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and to remove it from social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and recommends steps that companies building AI tools, search engines, social media platforms and hosting services, as well as developers, can take to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid ones containing not only instances of CSAM but also adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by increasing the “haystack problem,” a reference to the volume of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.


Boston Dynamics and other industry heavyweights pledge not to build war robots

The days of Spot being leveraged as a weapons platform and training alongside special forces operators are already coming to an end; Atlas as a back-flipping soldier of fortune will never come to pass. Their maker, Boston Dynamics, along with five other industry leaders, announced on Thursday that they will not pursue, or allow, the weaponization of their robots, according to a non-binding open letter they all signed.

Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics and Unitree Robotics all joined Boston Dynamics in the agreement. "We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues," the group wrote. "Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society." 

The group cites "the increasing public concern in recent months caused by a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots," such as the rifle-armed robot dog from Ghost Robotics or the Dallas PD's use of an explosive ordnance disposal (EOD) robot as an improvised explosive device, as the reason it felt the need to take this stand.

To that end, the industry group pledges to "not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so." Nor will they allow their customers to subsequently weaponize any platforms they were sold, when possible. That's a big caveat, given the long and storied history of such improvised weapons as the Toyota technical: civilian Hilux pickups converted into DIY war machines that have been a mainstay of asymmetric conflicts since the '80s.

"We also pledge to explore the development of technological features that could mitigate or reduce these risks," the group continued, but "to be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws." They also call on policymakers as well as the rest of the robotics development community to take up similar pledges. 


DeepMind, Elon Musk and more pledge not to make autonomous AI weapons

Today during the International Joint Conference on Artificial Intelligence (IJCAI), the Future of Life Institute announced that more than 2,400 individuals and 160 companies and organizations have signed a pledge, declaring that they will "neither participate in nor suppo…