
Experts warn that the EU’s AI law could have a chilling effect on open source efforts

Nonpartisan think tank Brookings this week published a piece condemning the bloc’s regulation of open-source AI, arguing that it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

The author asserts that if a company were to deploy an open-source AI system that led to some disastrous outcome, it is not inconceivable that the company could attempt to deflect responsibility by suing the open-source developers on whose work it built its product.

“This could further concentrate power over the future of AI in big tech companies and block research that is critical to the public’s understanding of AI,” wrote analyst Alex Engler, who published the piece for Brookings. “In the end, the [EU’s] attempt to regulate open source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving general-purpose AI.”

In 2021, the European Commission — the EU’s politically independent executive arm — released the text of the AI Act, which aims to promote the deployment of “trustworthy AI” in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to amend the rules in a way that balances innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The Act contains carve-outs for some categories of open source AI, such as those used exclusively for research and with controls to prevent misuse. But as Engler notes, preventing these projects from making their way into commercial systems, where they can be abused by malicious actors, would be difficult if not impossible.

In a recent example, Stable Diffusion, an open-source AI system that generates images from text prompts, was released with a license that prohibits certain types of content. But it quickly found an audience in communities that use such AI tools to create obscene deepfakes of celebrities.

Oren Etzioni, founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said he believes the burdens introduced by the rules would have a chilling effect on areas such as the development of open text-generating systems, which he argues are enabling developers to “catch up” to big tech companies like Google and Meta.

“The road to regulation hell is paved with the EU’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those who develop commercial software. Free software should always be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, which would have a chilling effect on academic progress and on the reproducibility of scientific results.”

Instead of regulating AI technologies broadly, EU regulators should focus on specific applications of AI, Etzioni argued. “There is too much uncertainty and rapid change in AI for a slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots or toys should be the subject of regulation.”

Not every practitioner believes the AI Act needs further revision. Mike Cook, an AI researcher who is part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little” more heavily than necessary. He notes that setting some sort of standard can be a way to show leadership globally — hopefully encouraging others to follow.

“A lot of the fear-mongering about ‘stifling innovation’ comes from people who want to do away with all regulation and have free rein, and that’s not a view I generally put much stock in,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbor is going to regulate less than you and somehow profit from it.”

Wisely or not, as my colleague Natasha Lomas has previously noted, the EU’s risk-based approach lists several prohibited uses of AI (e.g., China-style state social credit scoring) and places restrictions on AI systems considered “high-risk,” such as those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni would have it), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

An analysis by Lilian Edwards, professor of law at Newcastle University and part-time legal adviser at the Ada Lovelace Institute, questions whether the providers of systems such as open source large language models (e.g., GPT-3) could be held liable at all under the AI Act. The language in the law places the responsibility for managing an AI system’s uses and impacts on downstream deployers, she said — not necessarily the initial developer.

“[T]he route by which downstream deployers use [AI] and adapt it may be as significant as how it was originally built,” she writes. “The AI Act takes some notice of this, but not nearly enough, and thus fails to adequately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say they welcome regulations to protect consumer safety, but that the AI Act as proposed is too vague. For example, they say it’s unclear whether the law would apply to the “pre-trained” machine learning models at the heart of AI-powered software, or only to the software itself.

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream, you risk hindering incremental innovation, product differentiation and dynamic competition, which are central in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets, and thus clearly identify and protect core sources of innovation in these markets.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, such as “responsible” AI licenses and model cards that include information like an AI system’s intended use and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become common practice for major AI releases, such as Meta’s OPT-175B language model.

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” said Delangue, Ferrandis and Solaiman. “The intersection between the two should be a core focus for ongoing regulatory efforts, as it is right now for the AI community.”

Whether that comes to pass remains to be seen. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it will likely be years before AI regulation in the bloc takes shape.
