
ChatGPT Creator Sam Altman: If Compliance Becomes Impossible, We’ll Leave EU

source link: https://www.theinsaneapp.com/2023/05/openai-may-leave-eu-over-chatgpt-regulation.html

OpenAI CEO Sam Altman has warned that the company might withdraw its services from the European market because of the AI regulation being developed by the EU.

OpenAI May Leave EU Because Of AI Act

Speaking after a press conference in London, Altman expressed his concerns about the EU AI Act, which lawmakers are currently finalizing. The Act now includes additional obligations for creators of "foundation models," the large-scale AI systems that power services such as OpenAI's ChatGPT and DALL-E.

"The details really matter," Altman said, as reported by the Financial Times. He added that OpenAI would strive to comply with the regulation, but that if compliance proved impossible, the company would cease operating in Europe.

According to Time, Altman expressed concerns about the potential classification of systems like ChatGPT as "high risk" under the EU legislation. This classification would impose various safety and transparency obligations on OpenAI. Altman acknowledged that the company would work to meet those requirements, but that some might not be technically feasible.

Apart from the technical challenges, the disclosure requirements outlined in the EU AI Act also pose potential risks to OpenAI’s business.

Under the current draft, creators of foundation models would be obligated to disclose information about their system's design, including details such as the computing power required, training duration, and other relevant aspects of the model's size and capabilities. They would also be required to provide summaries of the copyrighted data used for training.

As OpenAI’s tools have gained greater commercial value, the company has ceased sharing certain types of information that were previously disclosed. In March, Ilya Sutskever, co-founder of OpenAI, acknowledged in an interview that the company had made a mistake by disclosing extensive details in the past.

Sutskever emphasized the need to keep certain information, such as training methods and data sources, confidential to prevent rivals from replicating their work.

Furthermore, the requirement for OpenAI to disclose its use of copyrighted data not only poses a potential business threat but also exposes the company to the risk of lawsuits. Generative AI systems like ChatGPT and DALL-E rely on vast amounts of data collected from the internet, a significant portion of which is protected by copyright.

When companies disclose these data sources, it leaves them vulnerable to legal challenges. For instance, OpenAI’s competitor Stability AI is currently facing a lawsuit from Getty Images, a stock image provider, for using copyrighted data to train its AI image generator.

Altman's recent statements offer additional insight into the company's perspective on regulation. He has told US policymakers that regulations should primarily apply to future AI systems with enhanced capabilities, whereas the EU AI Act focuses on what AI software can do today. The distinction underscores the kind of regulatory approach OpenAI would prefer.
