Why EU AI law could hurt innovation



Will cracking down on open source AI development actually hurt the single market?

The European Union’s proposed AI law plans to restrict open source AI. But this will come at the cost of progress and innovation, says Nitish Motha of Genie AI

The European Union’s proposed Artificial Intelligence Act (AIA) – which is still under discussion – touches on the regulation of open source AI. But imposing strict limits on the sharing and distribution of open source general-purpose artificial intelligence (GPAI) would be a completely regressive move. It would be like taking the world back 30 years.

Open source culture is the only reason humanity has been able to develop technology at such extraordinary speed. AI researchers have only recently embraced sharing their code for greater transparency and verification; placing restrictions on this movement would undo the cultural progress the scientific community has made.

Regulations are good and should be welcomed, but not at the expense of creativity and scientific progress.

It takes a lot of energy and effort to bring about a cultural shift in society – so it would be sad and frustrating to see this reversed. The entire AI law needs to be considered very carefully, and the proposed changes to it have sent ripples across the AI and open source technology communities, with many warning of a “chilling effect” on open source development.

Countering the Act’s own objectives

Two goals of the proposed regulatory framework stand out in particular:

  • ensuring legal certainty to facilitate investment and innovation in artificial intelligence; and
  • facilitating the development of a single market for lawful, safe and trustworthy AI applications, and preventing market fragmentation

The introduction of regulations on GPAI appears to contradict these objectives. GPAI thrives on innovation and knowledge sharing without fear of repercussions and legal costs. So, instead of creating a safe market and preventing fragmentation, what could actually happen is a set of strict legal regulations that deter open source development and hand big tech companies an even greater monopoly on AI development.

This is likely to create a market that is less open and, therefore, a market in which it is harder to gauge whether AI applications really are “lawful, safe and trustworthy”. All of this is counterproductive for GPAI. Instead, the fragmentation that such provisions could generate would place more power in the hands of the monopolists – a matter of growing and troubling concern.

But… do we need regulations?

It is also important to acknowledge those who may see the backlash against the changes as an attempt by companies to rid themselves of regulation. Certainly, regulations are needed to prevent serious misconduct. Without them, wouldn’t AI fall into the wrong hands?

It’s a valid concern and, yes, of course we need regulations (as discussed below). But regulation should be created on the basis of the application, not as a broad brush stroke across all models. Each model should be evaluated on whether its use is potentially harmful, and regulated accordingly, rather than targeting open source at its source and thus limiting creativity.

This is complex, multifaceted work. Even those who agree on the whole still disagree in some areas. But the main sticking point is that the public nature of GPAI allows anyone to access it. This open, collaborative approach is the fundamental reason progress is made, transparency is created and technology is developed for the benefit of society, collectively and individually, rather than purely for business gains.

Freedom to share

Open source licenses such as MIT are designed for exchanging knowledge and ideas, not for selling finished and tested products – so the two should not be treated in the same way. It is true that the right regulatory balance is needed, in particular to improve the reliability and transparency of how these AI models are built, what types of data were used to train them and whether there are any known limitations – but this cannot come at the cost of the freedom to share knowledge.
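To make that concrete, here is a minimal sketch in Python of the kind of transparency record a model release could carry, covering how the model is built, what data trained it and its known limitations. The field names and values are entirely hypothetical illustrations – the AIA does not mandate any such schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical transparency record shipped alongside an open source model.

    The fields are illustrative only; they mirror the disclosures discussed
    above: how the model is built, what data trained it, known limitations.
    """
    name: str
    license: str                   # e.g. "MIT"
    architecture: str              # how the model is built
    training_data: list[str]       # datasets used to train the model
    known_limitations: list[str]   # documented caveats and failure modes
    intended_use: str              # the application the release targets


card = ModelCard(
    name="example-gpai-model",
    license="MIT",
    architecture="decoder-only transformer, 1.3B parameters",
    training_data=["filtered public web text", "permissively licensed code"],
    known_limitations=["can reproduce biases present in web text",
                       "evaluated on English-language tasks only"],
    intended_use="research and experimentation, not production deployment",
)
print(card)
```

Documentation of this kind travels with the model, so accountability can sit with the downstream application that commercialises it, rather than with the act of sharing itself.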

Currently, the AI law appears to be targeting creators for openly sharing knowledge and ideas. Legislation should instead be tailored towards the people who use open source software, requiring them to be careful, and to do their own research and experiments, before releasing products to a wide audience. This would expose bad actors who want to use creators’ work in commercial projects without investing in any additional research or quality control of their own.

The end developer should in fact be the one responsible and accountable for scrutinizing and performing comprehensive quality checks before serving users – these are the people who will ultimately benefit commercially from open source projects. But, in its current form, the framework clearly does not take this approach. The core ethos of open source is to share knowledge and experience without commercial gain.

Openly regulate to openly innovate

Adding strict legal responsibilities for open source GPAI developers and researchers will only limit technical growth and innovation. It will discourage developers from sharing their ideas and learnings, and prevent new startups or ambitious individuals from accessing the latest technology. It will deprive them of the chance to learn from, and be inspired by, what others have built.

This is not how technology and engineering work in the modern world. Sharing, and building on the work of others, is at the heart of how technology products and services are developed – and this must be maintained. Regulations are good and should be welcomed, but not at the expense of creativity and scientific progress; rather, they should be applied at the application level to ensure responsible outcomes. Whatever changes are made to the AIA, one thing is clear – the open source culture must be cherished.

Nitish Motha is co-founder and CTO of Genie AI

