Europe presents the Code of Conduct on Generative AI

The European Commission today published the final version of the General-Purpose AI Code of Practice. This voluntary tool was developed by 13 independent experts and shaped by contributions from over 1,000 stakeholders: model developers, SMEs, academics, security experts, rights holders, and civil society organizations.

The Code was created to support the industry in aligning with the rules set forth in the new AI Act, the world's first legislation designed to regulate the development and use of artificial intelligence based on risk levels. Specifically, this Code focuses on general-purpose AI models, i.e., foundation models such as GPT, Claude, or LLaMA, that can be used for a wide variety of tasks and applications.

The regulation will enter into force on August 2, 2025, but will only be fully applicable from 2026 for new models and from 2027 for existing ones. The Code of Conduct offers advance guidance to facilitate compliance with the new rules, while reducing the administrative burden for those who decide to adopt it.

Three chapters

The document is divided into three main chapters, each with a specific focus:

Transparency. It includes a Model Documentation Form, a standardized and user-friendly form for collecting all essential information about the model (training data, capabilities, limitations, intended uses). It is designed to make the behavior of AI models more understandable and verifiable, facilitating their integration into digital products and services.

Copyright. Provides practical guidelines for complying with European copyright law, a particularly sensitive issue in the field of generative AI. The Commission urges providers to adopt active policies to identify and respect protected content used during training.

Security and risk management. This only applies to the most advanced models, those that could pose systemic risks. These include the potential use for chemical or biological weapons, or loss of control by developers. In line with the AI Act, the Code proposes cutting-edge practices for identifying and mitigating such risks.

Towards adoption

The Code will now need to be formally approved by the Member States and the Commission. Once approved, providers who decide to sign up will be able to more easily demonstrate compliance with the regulation, benefiting from greater legal certainty and simplified procedures.

In parallel, the Commission will soon publish official guidelines clarifying who is subject to the AI Act's rules, particularly in the general-purpose area. This will help companies determine whether, and how, they need to comply. In the meantime, the EU has published a questions-and-answers page on the Code.

A step towards "technological sovereignty"

Henna Virkkunen, Executive Vice President for Technological Sovereignty, Security and Democracy, commented: "The publication of the Code represents a fundamental step towards making advanced AI models not only innovative, but also safe and transparent. Co-designed with stakeholders, it responds to their needs. I invite all providers to sign it to embark on a clear and collaborative path towards compliance with the AI Act."

The Commission's move is part of a broader international debate on how to regulate artificial intelligence. While the United States is working with fragmented regulatory frameworks and China is adopting a centralized and statist approach, the European Union is proving to be a pioneer of a "risk-based" regulatory approach, aimed at protecting fundamental rights without hindering innovation.

La Repubblica