The UK government has announced it will provide £100 million (US$124 million) in initial funding for a Foundation Model Taskforce, set up to support the development of secure and reliable AI models that can be used in industries such as healthcare and education.
Foundation models are AI systems that have been trained on massive unlabeled data sets. They underpin large language models such as OpenAI’s GPT-4 and Google’s PaLM, which power generative AI applications like ChatGPT, and can be used for a wide range of tasks, from translating text to analyzing medical images.
In a statement announcing the funding, the Department for Science, Innovation and Technology (DSIT) said that AI technology is predicted to raise global GDP by 7% over a decade, making its adoption a “vital opportunity” to grow the UK economy. The news follows announcements Chancellor Jeremy Hunt made in his budget last month, which included a new AI research award offering £1 million per year to the company that has achieved the “most groundbreaking British AI research.”
This was in addition to an AI sandbox, or test environment, to help innovators get cutting-edge products to market, and a promise to work with the Intellectual Property Office to provide clarity on IP rules so generative AI companies can access the material they need.
DSIT said the aim of the task force is to promote the safe and reliable use of artificial intelligence while ensuring the UK remains globally competitive. The group, which will be made up of government and industry experts, will also work with different industries, regulators and civil society groups to develop safe and reliable foundation models, at both a scientific and commercial level.
“Developed responsibly, cutting-edge AI can have a transformative impact in nearly every industry. It can revolutionize the way we develop new medical treatments, tackle climate change and improve our public services, all while growing and future-proofing our economy,” said Michelle Donelan, science, innovation and technology secretary, in comments published alongside the announcement. It is vital that the public and businesses have the trust they need to confidently adopt AI technology and fully realize its benefits, she said.
Regulating artificial intelligence
Last month, the UK government published a white paper outlining its plans to regulate general purpose AI.
The paper set out guidelines for what it calls “responsible use,” outlining five principles it wants companies to follow: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
However, in order to “avoid heavy-handed legislation which could stifle innovation,” the government has opted to give responsibility for AI governance to sectoral regulators who will have to rely on existing powers in the absence of any new laws.
Elsewhere, the European Data Protection Board (EDPB) announced plans to launch a dedicated task force to investigate ChatGPT after a number of European privacy watchdogs raised concerns about whether the technology is compliant with the EU’s General Data Protection Regulation (GDPR).
In a statement posted on its website, the EDPB said the task force was intended to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”