
The EU AI Act - dissecting the detail



By Neil Raden

December 18, 2023


(Image: a chalkboard with the words 'Know the Rules')

The European Union's push to finalize the AI Act raises concerns about its impact on open-source AI development. The urgency to complete the Act before the end of the year suggests it may go forward without substantial revision of its open-source provisions. The Act is so comprehensive that a full treatment would exceed the scope of a single article, so this one will be short and bulleted, with more to come.

The European Commission just published an FAQ, Artificial Intelligence – Questions and Answers, addressing key questions about the AI Act. Thanks to Merve Hickok and the Center for AI and Digital Policy (CAIDP) for bringing this to my attention. It is a fairly comprehensive document, which I will analyze in an upcoming article.

Definition and scope of GPAI: General-purpose AI (GPAI) is characterized by performing "generally applicable functions" across a variety of contexts, typically via deep learning models trained on large datasets. The Council's proposed requirements for GPAI developers include risk management, data governance, transparency, accuracy, and cybersecurity.
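To make that structure concrete, here is a toy Python sketch of the Act's risk-based tiering and the Council's proposed obligation areas. To be clear, this is my own illustration, not legal guidance: the tier names track the Act's broad categories, but the example applications and the lookup logic are invented for exposition, not drawn from the legal text.

```python
# Hypothetical illustration only -- not legal guidance. A toy encoding of
# the AI Act's risk-based structure and the Council's proposed GPAI
# obligations listed above. Example applications are my own.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"     # e.g. social scoring
    HIGH = "high-risk"              # e.g. hiring or credit decisions
    LIMITED = "limited-risk"        # e.g. chatbots (transparency duties)
    MINIMAL = "minimal-risk"        # e.g. spam filters

# The five obligation areas the Council has proposed for GPAI developers.
GPAI_OBLIGATIONS = (
    "risk management",
    "data governance",
    "transparency",
    "accuracy",
    "cybersecurity",
)

def obligations_for(tier: RiskTier) -> tuple[str, ...]:
    """Toy lookup: prohibited uses may not ship; high-risk uses carry
    the full obligation set; lower tiers mainly owe transparency."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("prohibited application may not be deployed")
    if tier is RiskTier.HIGH:
        return GPAI_OBLIGATIONS
    return ("transparency",) if tier is RiskTier.LIMITED else ()

print(obligations_for(RiskTier.HIGH))
# -> ('risk management', 'data governance', 'transparency',
#     'accuracy', 'cybersecurity')
```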

Concerns and challenges with GPAI models

  1. The Council's interest in regulation stems from the rapid advancement and diverse applications of GPAI models, including concerns about disinformation and deepfakes. The complexity and opacity of these models, and the limited access provided through APIs, make it hard to meet the AIA's requirements.
  2. Regulation of open-source GPAI: The Council of the EU proposes regulating open-source GPAI directly, which could do real harm. Such regulation could create legal liabilities for open-source models, stifling their development, concentrating AI power in large tech companies, and limiting public research into and understanding of AI. A critical flaw in the Act is that it imposes stringent requirements on open-source AI developers, an approach that is unreasonable and damaging to open-source innovation.
  3. Role of open-source GPAI in responsible development: The importance of open-source GPAI in democratizing AI development and enabling critical research cannot be overstated. Openly released models such as Google's BERT and Meta's Llama 2, and community initiatives like EleutherAI and the BigScience BLOOM project hosted on Hugging Face, offer freely accessible code and designs. This openness contrasts with closed, API-only models like Google's LaMDA and OpenAI's GPT-3. Open-source AI facilitates innovation, transparency, and wide use (see the sketch after this list).
  4. Regulatory burdens from the AI Act: The Act's one-size-fits-all approach demands extensive control over the development process, posing impractical barriers for open-source AI. Obligations such as risk mitigation, data governance, and extensive ongoing documentation are hard to meet in decentralized, collaborative open-source projects.
  5. Potential negative consequences of regulation: The regulation would impose complex requirements on open-source AI contributors without significantly improving how GPAI is used. A far better approach would exempt open-source AI from regulation until it is deployed in a high-risk application.
  6. Impacts on research and public knowledge: Open-source GPAI models are crucial for understanding AI's functions and limitations, addressing bias, and advancing scientific knowledge. They also allow for external auditing of enterprise AI systems. Open-source models are vital for transparency and robust system evaluation. 
  7. Risks of disincentivizing open source GPAI: Regulating open-source GPAI might lead to over-reliance on corporate models and reduce AI development and application transparency. This could also increase large tech companies' influence over GPAI. 
  8. Compliance Challenges: The Act would require extensive documentation and potentially expensive third-party audits for open-source models. However, open-source systems already enable independent scrutiny, making some requirements redundant and costly.
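To ground the openness point in item 3, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name is just an example of an openly published model; the point is that the weights arrive locally, where anyone can run and audit them, which is exactly what closed, API-only models like GPT-3 or LaMDA foreclose.

```python
# A minimal sketch of what "open" means in practice: the weights and
# tokenizer of an openly released model can be downloaded, run, and
# inspected locally. Requires: pip install transformers torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # example open checkpoint (Google's BERT)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Because the full model is local, it can be independently scrutinized,
# e.g. by counting parameters or probing individual layers:
num_params = sum(p.numel() for p in model.parameters())
print(f"{model_name}: {num_params:,} parameters")
```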

Recommendation against regulating open-source AI models

Instead of regulating open-source AI models themselves, the focus should be on managing risky and harmful AI applications. The current draft of the AIA risks stifling open-source contributions because its exemptions are too narrow; they need to be broadened or made more practical.

Stanford research on compliance: Research shows that open-source AI models generally comply better with the AI Act's draft requirements on disclosing training data and compute than closed-source models do. This supports a policy shift toward regulating high-risk AI applications rather than the models themselves.

In its current form, the AI Act could hinder open-source AI development, hurting innovation and competition across the AI industry. The Act needs amendments that recognize open-source AI's distinct nature and contributions, both to maintain the sector's progress and openness and to ensure that regulation supports, rather than suppresses, innovation and competition.

My take

Preserving the distinct advantages of open-source AI in the regulatory landscape should foster continued growth and dynamism in the AI field. 

