Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.
Meeting in Hiroshima, Japan, the leaders said nations must come together on a common vision and goal of trustworthy AI, even if approaches to achieving it vary. But any solution for digital technologies such as AI should be “in line with our shared democratic values,” they said in a statement.
The G7, which includes the US, Japan, Germany, Britain, France, Italy, Canada, and the EU, stressed that efforts to create trustworthy AI need to include “governance, safeguard of intellectual property rights including copyrights, promotion of transparency, [and] response to foreign information manipulation, including disinformation.”
“We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors,” the G7 leaders said. More specifically, they called for the creation of a G7 working group by the end of the year to tackle possible generative AI solutions.
The G7 summit followed a “digital ministers” meeting last month, where members called for “risk-based” AI rules.
AI threats abound
AI poses a number of threats to humanity, so it’s important to ensure it continues to serve humans and not the other way around, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.
Everyday threats include a lack of transparency in generative AI models, which makes them unpredictable; even vendors “don’t understand everything about how they work internally,” Litan said in a blog post last week. And, because there are no verifiable data governance or protection assurances, generative AI can steal content at will and reproduce it, violating intellectual property and copyright laws.
Additionally, chatbots and other AI-based tools can produce inaccurate or fabricated “hallucinations” because their output is only as good as the data input, and that ingestion process is often tied to the internet. The result: disinformation, “malinformation” and misinformation, Litan noted.
“Regulators should set timeframes by which AI model vendors must use standards to authenticate provenance of content, software, and other digital assets used in their systems. See standards from C2PA, Scitt.io, IETF for examples,” Litan said.
“We just need to act, and act soon,” she said.
Even AI experts such as Max Tegmark, MIT physicist, cosmologist and machine learning researcher, and Geoffrey Hinton, the so-called “godfather of AI,” are stumped to find a workable solution to the existential threat to humanity, Litan said.
At an AI conference at MIT earlier this month, Hinton warned that because AI can be self-learning, it will become exponentially smarter over time and will begin thinking for itself. Once that happens, there’s little to stop what Hinton believes is inevitable — the extinction of humans.
“These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote [about] how to manipulate people,” Hinton told a packed house during a Q&A exchange. “And if they’re much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on. You’ll be like a two-year-old who’s being asked, ‘Do you want the peas or the cauliflower,’ and doesn’t realize you don’t have to have either. And you’ll be that easy to manipulate.”
Europe moves to slow AI
The G7 statement came after the European Union agreed on the creation of the AI Act, which would govern the design and deployment of generative tools such as ChatGPT, DALL-E, and Midjourney to align them with EU law and fundamental rights, including a requirement that AI makers disclose any copyrighted material used to develop their systems.
“We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin,” European Commission President Ursula von der Leyen said Friday.
Earlier this month, the White House also unveiled AI rules to address safety and privacy. The latest effort by the Biden Administration built on previous attempts to promote some form of responsible innovation, but to date Congress has not advanced any laws that would regulate AI. Last October, the administration unveiled a blueprint for an “AI Bill of Rights” as well as an AI Risk Management Framework; more recently, it pushed for a roadmap for standing up a National AI Research Resource.
The measures, however, don’t have any legal teeth “and they’re not what we need now,” according to Litan.
The United States has been something of a follower in developing AI rules. China has led the world in rolling out several initiatives for AI governance, though most of those initiatives relate to citizen privacy and not necessarily safety.
“We need clear guidelines on development of safe, fair and responsible AI from the US regulators,” Litan said in an earlier interview. “We need meaningful regulations such as we see being developed in the EU with the AI Act. While they are not getting it all perfect at once, at least they are moving forward and are willing to iterate. US regulators need to step up their game and pace.”
In March, Apple co-founder and former chief engineer Steve Wozniak, SpaceX CEO Elon Musk, hundreds of AI experts and thousands of others put their names on an open letter calling for a six-month pause in developing more powerful AI systems, citing potential risks to society. A month later, EU lawmakers urged world leaders to find ways to control AI technologies, saying they are developing faster than expected.
OpenAI’s Sam Altman on AI: ‘I’m nervous’
Last week, the US Senate held two separate hearings during which members and experts who testified said they see AI as a clear and present danger to security, privacy and copyrights. Generative AI technology, such as ChatGPT, can and does use data and information from any number of sometimes unchecked sources.
Sam Altman, CEO of ChatGPT-creator OpenAI, was joined by IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus in testifying before the Senate on the threats and opportunities chatbots present. “It’s one of my areas of greatest concern,” Altman said. “The more general ability of these models to manipulate, persuade, to provide one-on-one interactive disinformation — given we’re going to face an election next year and these models are getting better, I think this is a significant area of concern.”
Regulation, Altman said, would be “wise” because people need to know if they’re talking to an AI system or looking at content — images, videos or documents — generated by a chatbot. “I think we’ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we’re talking about. So, I’m nervous about it.”
Altman suggested the US government craft a three-point AI oversight plan:
- Form a government agency charged with licensing large AI models and revoking the licenses of those that don’t meet government standards.
- Create large language model (LLM) safety standards that include the ability to evaluate whether they’re dangerous or not. LLMs would have to pass safety tests such as not being able to “self-replicate,” go rogue, and start acting on their own.
- Create an independent AI-audit framework overseen by independent experts.
The Senate also heard testimony that the use of “watermarks” could help users identify where content generated by chatbots comes from. Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, said requiring AI creators to insert metadata breadcrumbs in content would allow users to better understand the content’s provenance.
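The metadata-breadcrumb idea Parker describes can be sketched in a few lines: attach a provenance record (generator name, timestamp, content hash) alongside a piece of content, then recheck the hash later to see whether the content still matches its claimed origin. This is a simplified, hypothetical scheme for illustration only; the `make_provenance` and `verify_provenance` functions and the "example-chatbot-1" generator name are invented here, and real standards such as C2PA define far richer manifest formats.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record for AI-generated content."""
    return {
        "generator": generator,  # which model or tool produced the content
        "created": datetime.now(timezone.utc).isoformat(),
        # the hash binds this record to one exact piece of content
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

text = b"An AI-generated paragraph."
record = make_provenance(text, generator="example-chatbot-1")
print(json.dumps(record, indent=2))
print(verify_provenance(text, record))            # unmodified content verifies
print(verify_provenance(b"edited text", record))  # altered content fails
```

Note that a bare hash only detects modification of the content; a real provenance standard additionally signs the record cryptographically so the claim itself cannot be forged or stripped without detection.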
The Senate plans a future hearing on the topic of watermarking AI content.