Kamala Harris Meets With AI CEOs To Discuss AI Risks

Vice President Kamala Harris met on Thursday with the CEOs of Google, Microsoft, and two other companies developing artificial intelligence, as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves people's lives without jeopardizing their rights and safety.

The Democratic administration also pledged $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the coming months on how federal agencies may use AI tools. Leading AI developers have also agreed to submit their systems to public scrutiny in August at the DEF CON hacker convention in Las Vegas.

Harris and administration officials met on Thursday with the CEOs of Google, Microsoft, and two influential startups they back, Anthropic, funded by Google, and OpenAI, backed by Microsoft, to discuss the risks they see in current AI development. The officials' message to the companies was that they have a role to play in reducing those risks and that they can work with the government.

Authorities in the United Kingdom also said Thursday that they are examining the risks posed by artificial intelligence. Britain's competition watchdog announced a review of the AI sector, focusing on the technology behind chatbots such as OpenAI's ChatGPT.

President Joe Biden said this month that while AI can help combat disease and climate change, it can also threaten national security and destabilize the economy. Biden also dropped by the meeting on Thursday. According to a White House official, he has been "extensively briefed" on ChatGPT, has seen how it works, and has even experimented with the program.

The release of ChatGPT late last year has intensified debate over AI and the government's role in overseeing the technology. The ability of new "generative AI" tools to produce human-like prose and convincing fake images has heightened ethical and societal concerns about automated systems.

Some of the companies, notably OpenAI, have been coy about the data used to train their AI systems. This has made it more difficult to explain why a chatbot gives biased or deceptive responses to requests, or to address concerns about whether it is drawing on copyrighted material.

Companies concerned about being held accountable for something in their training data may lack incentives to track it thoroughly, according to Margaret Mitchell, chief ethics scientist of AI company Hugging Face.

"I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent, privacy, and licencing," Mitchell said in an interview on Tuesday. "From what I know of tech culture, that just isn't done."

In theory, a disclosure requirement could compel AI vendors to open their systems to more third-party scrutiny. But because new AI systems are built on top of earlier models, providing that transparency retroactively will be difficult for companies.

"I think it will really be up to the governments to decide whether this means you have to trash all of your work or not," Mitchell added. "Of course, I imagine that, at least in the United States, the decisions will favour corporations and be supportive of the fact that it has already been done." It would have far-reaching consequences if all of these companies were forced to throw out all of their work and start over."

While the White House announced a collaborative approach with the industry on Thursday, companies that build or use AI also face growing scrutiny from US agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

Companies may also face tougher rules in the European Union, where negotiators are finalizing AI regulations first proposed two years ago. The rules could put the 27-nation bloc at the forefront of the global push to set technology standards.

When the EU first proposed AI laws in 2021, the emphasis was on limiting high-risk applications that endanger people's safety or rights, such as live facial scanning or government social scoring systems that judge people based on their actions. Chatbots received little attention.

However, given how quickly AI technology has advanced, negotiators in Brussels have been scrambling to update their proposals to cover general-purpose AI systems. According to a recent partial draft of the legislation seen by The Associated Press, provisions added to the bill would require so-called foundation AI models to disclose the copyrighted material used to train their systems.

Foundation models, a subcategory of general-purpose AI that includes systems like ChatGPT, are trained on massive data sets.
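For readers unfamiliar with the mechanics, the sketch below shows the general-purpose idea in miniature, assuming the Hugging Face transformers library. GPT-2, a small, openly released model, stands in here for the far larger proprietary systems discussed above.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library is
# installed (pip install transformers). GPT-2 is a small open stand-in for
# the much larger foundation models discussed in the article.
from transformers import pipeline

# Load a pretrained, general-purpose text generator (weights download on first use).
generator = pipeline("text-generation", model="gpt2")

# The same pretrained model handles open-ended prompts without task-specific
# training, which is what "general-purpose" means in practice.
result = generator("Artificial intelligence policy should", max_new_tokens=40)
print(result[0]["generated_text"])
```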

The plan is set to be voted on by a European Parliament committee next week, but it might be years before it becomes law.

In other news, Italy temporarily banned ChatGPT over violations of Europe's strict privacy rules, while the European Data Protection Board set up an AI task force as a possible first step toward common AI privacy rules.

In the United States, opening AI systems to public scrutiny at the DEF CON hacker convention could be a novel way to probe for risks, though the one-off event may not be as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University's Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI, and Anthropic, the White House says that Hugging Face, chipmaker Nvidia, and Stability AI, known for its image generator Stable Diffusion, have agreed to participate.

"This would be a way for very skilled and creative people to do it all in one big burst," Frase explained.
