Europe forces ChatGPT to be transparent and respect copyright
The European Union's AI law continues to gain traction. After its approval last year and the first bans coming into effect in February, the regulations now cover general-purpose artificial intelligence models, including tools such as ChatGPT, Gemini, and Grok. Starting this Saturday, August 2, the companies that develop them will have to comply with new transparency and security criteria to avoid potential fines, which can reach 7% of the offending firm's global annual turnover or €35 million.
From now on, any developer of an artificial intelligence tool capable of generating text, images, video, or voice must provide clear and up-to-date technical information about its technology to the companies that use its services and to European authorities, specifically the European AI Office. This information must explain how it was developed and tested.
Developers of the most advanced models will also have to conduct ongoing security assessments, report serious incidents, and do everything possible to keep their products free of vulnerabilities that could put user data at risk.
Likewise, the companies that create these tools are required to comply with European copyright regulations; to do so, they will have to publish summaries of the data used to train their models. They must also implement systems that allow copyright holders to opt out, barring the use of their content in the development of the technology.
These obligations must be met by any company that brings generative AI technology to market, such as Google, Meta, OpenAI, or xAI. This will directly affect, for example, GPT-5, the supercharged version of ChatGPT, which is expected to be launched before the end of August.
To make it easier for companies to comply with the regulations, the European Union published a code of good practices in mid-July, to which companies can voluntarily adhere. Microsoft, OpenAI, Google, Anthropic, and xAI have already confirmed they will sign, although in most cases reluctantly.
"We remain concerned that the AI Act and the Code could slow the development and deployment of AI in Europe. In particular, deviations from European copyright law, measures that slow down approvals, or requirements that expose trade secrets could slow the development and deployment of European models, thus harming European competitiveness," Google said in a statement this week.
Meta has already announced that it has no intention of signing the code, although it will still have to comply with the regulations. The company led by Mark Zuckerberg, which is burning billions of euros to create "superintelligence"—a machine smarter than the smartest human—believes the regulations will stifle innovation. "It will slow the development and implementation of frontier AI models in Europe and hinder European companies looking to build businesses on them," said Joel Kaplan, Meta's head of global affairs.
ABC.es