pirati.bsky.social

Commission discloses disagreements between general-purpose AI providers and other stakeholders

The Commission disclosed disagreements between general-purpose model providers and other stakeholders in the first Code of Practice plenary on general-purpose artificial intelligence (GPAI). (1/2)

pirati.bsky.social

MEPs question appointment of leader for general-purpose AI code of practice

Three influential MEPs are questioning how the European Commission is appointing key positions in drafting guidelines for general-purpose AI (GPAI), on the same day that the EU executive is announcing who will (1/2)

avaaz.bsky.social

How many of these 10 AI risks did you know? Understand the threats and join us in advocating for general-purpose artificial intelligence (GPAI) that upholds ethical standards. Together, let's raise the voices and issues that are in danger of being ignored in AI debates: avaaz.org/gpai


Preparing a response to the @EU_Commission's public consultation on Open Source AI, and every time I try to read this question I feel like I'm having an aneurysm

Page 1: GPAI Models: transparency & copyright

Q1: In the current state of the art, for which elements of information and documentation by general-purpose AI model providers to providers of AI systems do practices exist that, in your view, achieve the above-mentioned purpose?
rand.org

The first primer focuses on what the EU AI Act calls "general-purpose AI" (GPAI) models, some of which may carry systemic risk. The authors explore options the U.S. could consider regarding GPAI models in response to the EU's new regulations. www.rand.org/pubs/researc...

How the United States and EU Could Cooperate on AI Governance

As AI applications proliferate worldwide, complex governance debates are taking place. Recent European legislation proposes regulations that will apply to U.S. companies seeking to operate powerful AI...

tiffanycli.bsky.social

Interesting paper. As more AI regulations hinge on “risk,” we need to figure out how to quantify AI risk and thresholds for compliance. This paper explains compute thresholds and argues training compute is an imperfect proxy for risk: arxiv.org/pdf/2405.10799

Training Compute Thresholds:
Features and Functions in AI Regulation
Lennart Heim*
Centre for the Governance of AI
Oxford, United Kingdom
Leonie Koessler
Centre for the Governance of AI
Oxford, United Kingdom &
European New School of Digital Studies
Frankfurt (Oder), Germany
Abstract
Regulators in the US and EU are using thresholds based on training compute, the number of computational operations used in training, to identify general-purpose artificial intelligence (GPAI) models that may pose risks of large-scale societal harm. We argue that training compute currently is the most suitable metric to identify GPAI models that deserve regulatory oversight and further scrutiny. Training compute correlates with model capabilities and risks, is quantifiable, can be measured early in the AI lifecycle, and can be verified by external actors, among other advantageous features. These features make compute thresholds considerably more suitable than other proposed metrics to serve as an initial …
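The threshold mechanism the abstract describes can be sketched numerically. A minimal sketch, assuming the common 6·N·D FLOP approximation for dense transformer training (a widely used heuristic, not taken from this paper) and the EU AI Act's 10^25 FLOP presumption of systemic risk for GPAI models; the model figures below are hypothetical:

```python
# EU AI Act (Article 51): a GPAI model trained with more than 1e25 FLOPs
# is presumed to pose systemic risk.
EU_AI_ACT_THRESHOLD_FLOPS = 1e25


def estimate_training_compute(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope training compute for a dense transformer.

    Uses the common C ~= 6 * N * D heuristic, where N is parameter count
    and D is the number of training tokens.
    """
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute exceeds the EU AI Act threshold."""
    return estimate_training_compute(n_params, n_tokens) > EU_AI_ACT_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flops = estimate_training_compute(70e9, 2e12)
print(f"{flops:.1e}")                      # 8.4e+23
print(presumed_systemic_risk(70e9, 2e12))  # False: below the 1e25 line
```

This also illustrates the paper's point that compute is measurable early: parameter count and planned token budget are known before training finishes, so a provider can anticipate whether the threshold will be crossed.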