“The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today’s agreement will foster responsible innovation in Europe. (…) Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric Artificial Intelligence.” With these words, Ursula von der Leyen, President of the European Commission, commented on the agreement reached by the European institutions on the AI Act on December 9th, 2023, after a negotiation that lasted nearly 36 hours.

To date, the text of the AI Act – which consists of 85 articles and 9 annexes – reflects the political agreement reached by the European Commission, the Council and the European Parliament. The text has not yet entered into force and will be subject to formal approval by the Parliament and the Council in the coming months; its provisions are expected to take effect within the next two years.

As is by now well known, artificial intelligence is the technology that makes it possible to simulate human intelligence processes through algorithms; it is a rapidly developing technology that has profoundly affected the way we live and work, especially in recent years.

The AI Act was born out of the need to strike a compromise between technological development and the protection of fundamental rights, balancing innovation and prevention. Although a series of transparency, security and accountability requirements has been laid down to protect those who will personally use these technologies, the approach adopted remains favourable to technological innovation: with the new AI Act, an attempt has been made not to sacrifice the competitiveness of European companies on the global market.

To this end, the European institutions have adopted a risk-based approach built on four macro-categories – ‘Minimal Risk’, ‘High Risk’, ‘Specific Risk for Transparency’ and ‘Unacceptable Risk’ – with different obligations placed on providers and users depending on the level of risk posed by each AI technology.

Under the new legislation, producers of AI technologies will be required to prove that their products do not pose a risk to people and have been developed in compliance with all the legal criteria; before the final technology can be placed on the market, producers will have to meet strict transparency requirements.

More specifically, the ‘High Risk’ category covers critical systems, such as those used in the provision of medical devices; such systems will be allowed onto the European market only if they meet certain strict requirements, with the possibility of obtaining a European certification.

Technologies falling under ‘Specific Risk for Transparency’, on the other hand, concern systems such as chatbots, with which people frequently interact; in these cases, anyone interacting with such a system must be informed in advance that they are dealing with an artificial intelligence. Furthermore, all content generated by AI will have to be labelled as such.

As far as systems falling under ‘Minimal Risk’ are concerned, these mainly refer to recommendation systems for purchases; in these cases, manufacturers are allowed to adopt codes of conduct on a voluntary basis.

The macro-category ‘Unacceptable Risk’, finally, differs from the previous ones in that it encompasses all those systems considered a threat to fundamental rights and, as such, prohibited.

Still on the topic of legal compliance, it will also be interesting to see how intellectual property legislation will apply to the new AI technologies under development.

Another fundamental passage of the AI Act is certainly the one that clearly sets out what such systems may not be used for, and therefore which uses of AI technology are prohibited. In this regard, the biometric categorisation of people on the basis of sensitive data – such as, by way of example, ethnicity, religion, state of health or political orientation – is prohibited. Also prohibited are systems based on social scoring, systems capable of manipulating human emotions, and AI systems designed to exploit personal vulnerabilities, such as social or economic situation, age or disability.

During the negotiations, the most divisive issue was the use of AI systems by the police; under the agreement reached, the police will be able to use biometric recognition systems only in three strictly defined cases, namely (i) in the event of a terrorist threat, (ii) for targeted searches of crime victims and, finally, (iii) for the identification of persons suspected of having committed serious crimes.

The above gives a very general picture of the main issues on which political understanding has been reached within the EU.

With the AI Act, Europe becomes the first continent to have adopted harmonised legislation in this sector, opening the field to interesting scenarios from a market perspective. It is to be hoped that this regulation will encourage foreign investors to bring their capital to Europe and invest in a sector that, as of today, represents our near future and will probably revolutionise our way of life over the next few years.

For anyone deciding to invest in AI technologies, now is the right time to do so.

As mentioned above, the provisions set out in the AI Act – once it is passed – will come into force gradually over the next two years.

This ensures a wide time window for non-EU investors to set up a company in Europe and invest – through a foreign direct investment – in a fast-growing market that is now, finally, regulated.

If you would like to be kept up to date on developments in this legislation, please send an e-mail to

D’Andrea & Partners Legal Counsel and PHC Advisory Tax & Accounting (a DP Group company) offer assistance and consultancy services in the legal and tax fields. For any enquiries, please contact us at:

The above contents are provided for information purposes only. The publication of this article does not create an attorney-client relationship between DP Group and the reader and does not constitute legal advice. Legal advice must be tailored to the specific circumstances of each case.