Lawmakers from the EU and US have said they are drawing up a code of conduct for AI, as industry leaders and politicians across the world continue to debate the threats that the technology poses.
Although both jurisdictions are currently working on formal legislation intended to regulate AI, it could take years for those rules to be finalized and implemented. By contrast, the proposed code of conduct is expected within weeks and would bridge the gap until legislation is passed.
To date, the European Commission has published the first draft of its AI Act, which would ban uses of AI that threaten safety or human rights and apply progressively lighter requirements to lower-risk applications. Interacting with a chatbot in a customer service setting, for example, would be considered low risk.
While lawmakers have agreed in principle on the language of the act, it won’t be voted on by the European Parliament until June.
The US government is currently undertaking a consultation into what its AI regulatory framework should look like, with President Joe Biden and Vice President Kamala Harris having recently met with executives from leading AI companies to discuss the potential dangers of the technology.
Last month, two Senate committees also met with industry experts, including OpenAI CEO Sam Altman, IBM executive Christina Montgomery, and New York University professor emeritus Gary Marcus.
Artificial intelligence must be accountable
On May 30, hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that AI evolution could lead to an extinction event, saying that controlling the technology should be a top global priority.
“We need accountable artificial intelligence. Generative AI is a complete game changer,” said European Commission Vice President Margrethe Vestager at a meeting of the EU-US Trade and Technology Council (TTC) in Sweden a day after the open letter was published.
Vestager, who is also responsible for the European Union’s competition and digital strategy, added that lawmakers should treat the new code of conduct being hammered out with the US as a matter of “absolute urgency,” and encouraged other global partners to come on board so that as many jurisdictions as possible are covered.
To inform the code of conduct, Vestager said officials will seek feedback from industry players and invite parties to sign up.
“Very, very soon, a final proposal for industry to commit to voluntarily,” she said.
Established in 2021, the TTC coordinates technology and trade policy between the US and the EU. The council is composed of 10 working groups, each focusing on specific policy areas including technology standards, data governance and technology platforms, and misuse of technology threatening security and human rights.
Last month, OpenAI CEO Sam Altman angered some European lawmakers by telling reporters he had “many concerns” about the EU’s AI Act, accusing the bloc of “over-regulating” and implying that the company might have to cease operations in Europe if the rules passed in their current form. He eventually walked back his comments, tweeting that OpenAI has “no plans to leave.”
Although Altman was not present at Tuesday’s TTC meeting, Vestager met with him virtually after the session to discuss the voluntary code of conduct.