The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.
"The President several months ago directed his team to pull every lever," a senior administration official told reporters on a recent press call. "That's what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits … It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law."
These actions will be introduced over the next year, with smaller safety and security changes happening in around 90 days and more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, which will meet with federal agency heads to ensure that the actions are being executed on schedule.
"In response to the President's leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public," the senior administration official said. "That is not enough."
The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply in developing AI tools to autonomously implement security fixes on critical software infrastructure.
By leveraging the Defense Production Act, this EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.
In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.
Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It's geared specifically for the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained using more than 10^26 floating-point operations (FLOPs), a scale of computing beyond that of any existing AI model. “This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
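For context on where that 10^26 threshold sits, a common back-of-the-envelope estimate for transformer training compute is FLOPs ≈ 6 × parameters × training tokens. That rule of thumb comes from the scaling-law literature, not from the order itself, and the model figures below are purely illustrative assumptions:

```python
# Rough estimate of training compute for a transformer-style model,
# using the common approximation FLOPs ~ 6 * parameters * training tokens.
# The threshold is the one cited for the EO; the model specs are made up.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6 * params * tokens

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")                          # 8.40e+23 FLOPs
print("Above threshold:", flops >= THRESHOLD_FLOPS)  # Above threshold: False
```

By this estimate, even a frontier-scale model of today lands two orders of magnitude below the reporting line, which is consistent with the officials' claim that no current system is covered.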
What's more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks," per the release. "Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI." In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.
In an effort to proactively address the decrepit state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and patch security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns of misbehaving models that SEC head Gary Gensler recently raised.
We're already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call.
The Department of Commerce is in charge of the latter effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”
Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking's wider adoption — similar to the work it did around developing the HTTPS ecosystem and in getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government's official messaging can be relied upon.
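The release doesn't spell out the mechanics, but the verify-before-trust flow officials describe can be sketched as a toy signed-manifest scheme. This is only loosely in the spirit of C2PA-style content credentials; the real spec uses X.509 certificate chains and CBOR-encoded claims, and every key and name below is a hypothetical stand-in:

```python
import hashlib
import hmac
import json

# Toy content-provenance sketch: a publisher attaches a signed manifest
# (content hash + issuer) to each release, and consumers verify both the
# signature and the hash before trusting the content. Real C2PA manifests
# use public-key certificates, not a shared secret like this demo key.

SECRET = b"demo-signing-key"  # hypothetical; illustration only

def make_manifest(content: bytes, issuer: str) -> dict:
    """Build a manifest binding the content's hash to its claimed issuer."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"issuer": issuer, "sha256": digest}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the content still matches its hash."""
    expected = hmac.new(SECRET, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return json.loads(manifest["payload"])["sha256"] == hashlib.sha256(content).hexdigest()

m = make_manifest(b"official statement", "example.gov")
print(verify(b"official statement", m))  # True
print(verify(b"tampered statement", m))  # False
```

The point of the design is the second print: any alteration of the content after signing breaks verification, which is the property that would let the public confirm official media hasn't been doctored.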
The first Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety,” the administration official said. “But there's more to do.”
The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, per the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."
Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future LLMs to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” per the White House release, developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.
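The release names no specific technique, but differential privacy is one widely studied approach to training on large datasets without leaking individual records. Below is a minimal sketch of the per-step noise mechanism from DP-SGD: clip each per-example gradient, average, then add Gaussian noise calibrated to the clip norm. All parameters are illustrative, and this is not a method the order itself prescribes:

```python
import numpy as np

# DP-SGD-style gradient step: bound each example's influence by clipping
# its gradient norm, then mask individual contributions with noise scaled
# to that bound. Parameters below are illustrative, not tuned values.

rng = np.random.default_rng(0)

def dp_average_gradient(grads: np.ndarray, clip_norm: float, noise_mult: float) -> np.ndarray:
    """grads: (batch, dim) per-example gradients -> privatized mean gradient."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds clip_norm.
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Noise with std = noise_mult * clip_norm is added to the sum,
    # then the whole thing is averaged over the batch.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grads.shape[1])
    return clipped.mean(axis=0) + noise / grads.shape[0]

batch = rng.normal(size=(32, 8))  # fake per-example gradients
g = dp_average_gradient(batch, clip_norm=1.0, noise_mult=1.1)
print(g.shape)  # (8,)
```

The clipping step is what makes the noise meaningful: because no single example can move the sum by more than the clip norm, noise of that scale provably limits what the final model can reveal about any one record.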
In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.
The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”
The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There's a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.
To that end, the administration is launching a new federal jobs portal, AI.gov, on Monday, which will offer information and guidance on available fellowship programs for folks looking for work with the federal government. “We're trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.
The White House reportedly did not preview this particular swath of radical policy changes with the industry, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak, on Tuesday.
At a Washington Post event on Thursday, Senate Majority Leader Charles Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which to date, has been slow in coming.
“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”
This article originally appeared on Engadget at https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss
Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
A federal judge has ruled that Google violated a court order requiring it to preserve employee chat messages relevant to Epic's antitrust case, according to Bloomberg and CNBC. San Francisco US District Judge James Donato said the tech giant "adopted a 'don't ask, don't tell' policy for keeping messages, at the expense of its preservation duties," and must be sanctioned for its actions. Donato has yet to decide on what sanctions and penalties Google should face, but he ordered the company to cover Epic's attorney's fees related to this particular issue.
Donato said in his decision that Google gave almost 360 employees the complete freedom to decide whether or not to preserve chat histories. In a separate filing by the Department of Justice over the same complaint, the agency explained that the tech giant's internal chatroom, which is used to discuss "substantive and sensitive business," is set to delete chat messages within 24 hours by default. The agency expected Google to change its chat history setting in 2019 when it "reasonably anticipated [the] litigation," but it still allegedly left the decision to individual employees.
Epic Games, to support its case, recently submitted exhibits showing how Google employees tend to switch off chat history. In one example from 2021, Google CEO Sundar Pichai allegedly wrote: "…also can we change the setting of this group to history off." He attempted to delete that message a few seconds later, according to the filing. Google employees also reportedly switch off chat histories when discussing sensitive topics such as revenue sharing and mobile app distribution agreements, as well as a project that involved changing commission rates for Google Play.
In a statement, a Google spokesperson said the company has worked with Epic and investigators over the years and has handed over millions of documents: "Our teams have conscientiously worked, for years, to respond to Epic and the state AGs’ discovery requests and we have produced over three million documents, including thousands of chats. We'll continue to show the court how choice, security, and openness are built into Android and Google Play," they said.
The judge will hold further proceedings to finalize the sanctions Google must face. Donato said he'd like to see the evidence available "at the end of fact discovery," so that Epic would be better positioned to "tell the Court what might have been lost in the Chat communications."
This article originally appeared on Engadget at https://www.engadget.com/google-found-to-have-violated-order-to-save-chat-evidence-in-epic-antitrust-case-052711779.html?src=rss