AI Act and Compliance Requirements for AI System Providers

With the adoption of Regulation (EU) 2024/1689, known as the AI Act, the European Union has introduced a unified regulatory framework aimed at governing the development and use of artificial intelligence systems within the European market. This measure is part of a broader regulatory project aimed at ensuring that new technologies are consistent with the protection of fundamental rights, user safety, and trust in the digital market.

The new framework does not apply only to companies established within the European Union; it also extends to non-EU economic operators that intend to provide artificial intelligence (AI) systems where the output produced by such systems is used within the EU. In this context, understanding how the AI Act operates and the main compliance obligations it introduces becomes an essential step for technology companies seeking to operate or expand in the European market. Approximately one and a half years after its adoption, it is now possible to analyse the topic and refer to a practical case relating to its application.

Compliance requirements for AI system providers

The AI Act introduces a risk-based regulatory regime, classifying AI systems according to their potential impact on individuals’ rights and on society. Certain applications are deemed incompatible with European values and are therefore prohibited, while others may be used provided that they comply with specific requirements relating to transparency, safety, and human oversight.

Accordingly, for AI system providers, the first operational step is to correctly identify the risk category into which the developed or marketed system falls. This classification makes it possible to determine which obligations apply and what level of documentation and control must be implemented prior to placing the system on the market.

Furthermore, close attention is devoted to the quality of the data used to train AI systems. Incomplete or biased datasets may generate discriminatory or inaccurate outcomes, with significant legal and reputational consequences. For this reason, the Regulation requires the adoption of control measures aimed at ensuring the quality and reliability of the system throughout its entire lifecycle.

Another key element concerns cooperation between providers and users of AI systems. Providers are required to make available all information necessary to enable users to comply with their own regulatory obligations, including the assessment of potential impacts on fundamental rights. This cooperation is particularly sensitive where the provider operates from outside the EU and must adapt to regulatory standards that differ from those of its domestic legal system.

In addition, compliance with the AI Act must be coordinated with other regulatory areas, such as personal data protection, cybersecurity, and intellectual property protection, all of which are highly regulated within the European market. For many companies, this entails reviewing internal processes and contractual models, as well as implementing appropriate systems of technological governance.

For providers established outside the European Union, access to the EU market may also require the appointment of an authorised representative within the EU and engagement with European and national supervisory authorities. Failure to comply with regulatory requirements may result not only in financial penalties, but also in restrictions or bans on the marketing of AI systems within the European territory. This occurred with DeepSeek, which was blocked in Italy by the Italian Data Protection Authority on the grounds that the company was transferring European users’ personal data without ensuring compliance with EU rules on transparency and privacy.

Conclusions

The AI Act represents a turning point in the regulation of artificial intelligence and requires providers to adopt a more structured approach to managing the technological and legal risks associated with their systems. Compliance cannot be treated as a merely formal exercise; rather, it requires a prior assessment of the system’s characteristics and an adjustment of the company’s organisational and technical processes.

Timely understanding of the Regulation’s scope of application and the implementation of appropriate compliance measures enable companies to reduce the risk of sanctions as well as delays in entering the European market. In an environment increasingly focused on technological accountability and user protection, a sound approach to compliance can become not only a legal obligation, but also a source of trust and a competitive advantage for companies operating in the artificial intelligence sector.

Veronica Gianola

Partner
Veronica Gianola, an accomplished Italian lawyer, is a member of the Milan Bar Association.
Jun Jie Yang

Associate
Jun Jie Yang has developed strong expertise in the areas of TMT, Data Protection, and commercial contracts.
