As European Union legislators clocked 20+ hours of negotiation time in a marathon attempt to reach agreement on how to regulate artificial intelligence, a preliminary accord was reached on one sticky element — rules for foundation models/general purpose AIs (GPAIs) — according to a leaked proposal TechCrunch reviewed.
In recent weeks there has been a concerted push, led by the French AI startup Mistral, for a lighter-touch regulatory framework for foundation models/GPAIs. But EU lawmakers appear to be resisting a full-throttle push to let the market sort things out, as the proposal retains elements of the tiered approach to regulating these advanced AIs that was proposed in parliament earlier this year.
As such, there is a partial exemption from certain obligations for GPAI systems provided under free and open source licenses (which is stipulated to mean that their weights, information about model architecture, and information on model usage are made publicly available) — with some exceptions, including for “high risk” models.
Reuters also reported partial exceptions for open source advanced AIs.
According to our source, the open source exemption is further limited by commercial deployment — so if/when such an open source model is made available on the market or put into service, the carve-out will no longer apply. “So there is a chance that the law will apply to Mistral, depending on how ‘available on the market’ or ‘putting into service’ is interpreted,” our source suggested.
The preliminary agreement we have seen retains the classification of GPAIs with so-called “systemic risk” — with the criteria for a model getting this designation being that it has “high impact capabilities”, including when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25.
At that level, very few current models appear to meet the systemic risk threshold — suggesting that few cutting-edge GPAIs would need to fulfill the up-front obligations to assess and mitigate systemic risks. So Mistral’s lobbying appears to have softened the regulatory blow.
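To put the 10^25 FLOP threshold in context, a commonly used back-of-the-envelope estimate of training compute is roughly 6 × parameter count × training tokens. That heuristic (and the model sizes below) are illustrative assumptions, not anything defined in the draft regulation — a sketch only:

```python
# Illustrative sketch: estimating whether a model's training compute would
# exceed the draft AI Act's 10^25 FLOP "systemic risk" threshold.
# The 6*N*D heuristic (FLOPs ~= 6 x parameters x training tokens) is a
# widely used approximation, NOT part of the regulation's text.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold from the leaked proposal

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs via the 6*N*D heuristic."""
    return 6 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would this (hypothetical) training run cross the 10^25 FLOP line?"""
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 7-billion-parameter model trained on 2 trillion tokens
flops = estimated_training_flops(7e9, 2e12)
print(f"{flops:.2e}")                 # ~8.40e+22, well below 1e25
print(exceeds_threshold(7e9, 2e12))   # False
```

Under this heuristic, a training run would need on the order of hundreds of times more compute than the hypothetical example above to cross the threshold, which is consistent with the report that very few current models would qualify.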
Under the preliminary agreement, other obligations for providers of GPAIs with systemic risk include conducting evaluations with standardized protocols and state of the art tools; documenting and reporting serious incidents “without undue delay”; conducting and documenting adversarial testing; ensuring an adequate level of cybersecurity; and reporting the actual or estimated energy consumption of the model.
In other areas there are some general obligations for providers of GPAIs, including testing and evaluating the model and drawing up and keeping technical documentation, which must be provided to regulatory authorities and oversight bodies on request.
They also need to provide downstream deployers of their models (aka AI app makers) with an overview of the model’s capabilities and limitations to support their ability to comply with the AI Act.
The text of the proposal also requires foundation model makers to put in place a policy to respect EU copyright law, including regarding limitations copyright holders have placed on text and data mining. They must additionally provide, and make public, a “sufficiently detailed” summary of the training data used to build the model — with a template for the disclosure to be provided by the AI Office, a regulatory body the regulation proposes to set up.
We understand that this copyright disclosure summary will still apply to open source models — standing as one of the exceptions to their carve-out from the rules.
The text we have seen contains a reference to codes of practice, which the proposal says GPAIs – and GPAIs with systemic risk – can rely on to demonstrate compliance, until a “harmonized standard” is published.
It envisages the AI Office being involved in the development of such codes. Meanwhile, the Commission is expected to issue standardization requests starting six months after the regulation enters into force on the GPAIs — such as requests for deliverables on reporting and documentation on ways to improve the energy and resource use of AI systems — with regular reporting on progress in developing these standardized elements (two years after the date of application, and then every four years thereafter).
The current trilogue on the AI Act really started yesterday afternoon but the European Commission looks determined that this will be the last knocking of heads between the European Council, the Parliament and its own staff in this contested file. (Otherwise, as we previously reported, there is a risk that the regulation will be put back on the shelf as the EU elections and new Commission appointments come next year.)
At the time of writing, talks to resolve many other disputed elements of the file remained ongoing, with plenty more sensitive issues on the table (such as biometric surveillance for law enforcement purposes). So whether the file will make it over the line remains unclear.
Without agreement on all components there can be no deal to secure the law, so the AI Act’s fate remains up in the air. But for those wanting to understand where EU lawmakers may land when it comes to responsibilities for advanced AI models, such as the large language model underpinning the viral AI chatbot ChatGPT, the preliminary deal offers some steerage on where policymakers look to be heading.
In the last few minutes the EU’s internal market commissioner, Thierry Breton, tweeted to confirm that the talks had finally broken up — but only until tomorrow. The epic trilogue is set to resume at 9am in Brussels, so the Commission is still looking to get the risk-based AI rulebook it proposed back in April 2021 over the line this week. Of course that will depend on finding compromises acceptable to the other co-legislators, the Council and the Parliament. And with such high stakes, and such a highly sensitive file, success is uncertain.