How can I check whether I am affected?
The AI Regulation (also known as the AI Act or AIA) takes a risk-based approach: it sets out different levels of requirements for different types of AI systems and models, provides exceptions for certain systems and models, and imposes graduated requirements on certain economic operators. Use our free quick check to find out how your AI technology is affected and what obligations you have.
What is the AIA and what are its objectives?
The AIA is a product-related regulation that, for the first time, aims to establish a legal framework for the safe and transparent use of AI in the EU market. Its legislative objective is to protect against the harmful effects of AI in the Union while supporting innovation. For this reason, the AIA pursues a risk-based approach: broadly, the higher the risk an AI system or model poses to the EU’s fundamental values (such as freedom, democracy, security and fundamental rights), the higher the level of regulation. Violations of the regulatory requirements may result in regulatory action and heavy fines.
How does the AIA fit into the EU regulatory framework?
The AIA is specific product safety legislation that has a cross-sectoral (horizontal) effect and therefore covers a wide range of industries. In many areas, it supplements existing EU law and national law (e.g. in the areas of data protection or product liability) without replacing it. In addition to the AIA, other legal acts and regulations may therefore apply. This must be taken into account in practice.
What is an AI system within the meaning of the AIA?
According to Article 3(1) AIA, an AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Given the open-ended nature of the criteria employed, assessing whether the technology in question is covered by the AIA according to this definition already presents an initial hurdle in practice. For guidance, the EU Commission has published guidelines on the definition of AI systems, which companies can use as a practical aid for the assessment. Since the AIA is based on a broad understanding of the term ‘AI system’, in cases of doubt it is advisable to assume that the system in question qualifies as an AI system.
What is a general-purpose AI model (GPAI) within the meaning of the AIA?
The AI model as such is not defined in the AIA. However, it can be understood as a specific algorithm developed for a specific task, such as language processing or image recognition. The AI model forms a central but isolated component within a more comprehensive AI system that contains additional functions and structures for practical application. According to Article 3(63) AIA, an AI model is considered a general-purpose AI model (GPAI) if it displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications. Exceptions apply to individual areas. The essential requirements for general-purpose AI models can be found in Chapter V of the AIA.
Are there exceptions for certain AI systems or general-purpose AI models?
Yes, the AIA provides for individual exceptions for certain AI systems and general-purpose AI models in specific situations. Exceptions apply, among other things, to research, testing and development activities relating to AI systems or models before they are placed on the market or put into service, although testing under real-world conditions is not covered by this exclusion. AI systems and models developed and used exclusively for scientific research and development purposes are also not subject to the AIA. Certain open-source AI technologies are likewise exempt from the AIA.
How can I determine which risk class my AI system falls under?
The AI Regulation takes a risk-based approach and sets out different levels of requirements for different types of AI systems and models. The AI Regulation distinguishes between the following risk classes:
– Prohibited AI practices;
– High-risk AI systems;
– AI systems with limited risk;
– AI systems with minimal to no risk.
AI practices that pose an unacceptable risk to EU values, including the fundamental rights and freedoms of natural persons, are prohibited. This prohibition covers everything from manufacture and marketing through to use. Strict requirements apply to high-risk AI systems, including in relation to risk management, data quality, transparency and human oversight. Certain AI systems with limited risk are subject to transparency obligations. For AI systems with minimal to no risk, only the general obligation of AI literacy applies. General-purpose AI models are subject to separate requirements, which again distinguish between ‘simple’ general-purpose AI models and general-purpose AI models with systemic risk.
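The risk classes and their headline obligations described above can be sketched as a simple lookup. This is an illustrative orientation aid only: the class names and one-line summaries are our paraphrase, not terms defined in the AIA, and it is no substitute for a legal assessment.

```python
# Illustrative sketch only: a rough mapping of the AIA risk classes to
# their headline obligations, as summarised above. Class names and
# summaries are paraphrases, not terms defined in the AIA.

RISK_CLASS_OBLIGATIONS = {
    "prohibited": "Banned outright: may not be made, marketed or used in the EU.",
    "high_risk": ("Strict requirements, e.g. risk management, data quality, "
                  "transparency and human oversight."),
    "limited_risk": "Transparency obligations apply.",
    "minimal_risk": "Only the general AI literacy obligation applies.",
}

def headline_obligations(risk_class: str) -> str:
    """Return the headline obligations for a risk class (sketch)."""
    if risk_class not in RISK_CLASS_OBLIGATIONS:
        raise ValueError(f"unknown risk class: {risk_class!r}")
    return RISK_CLASS_OBLIGATIONS[risk_class]

print(headline_obligations("high_risk"))
```

Note that general-purpose AI models sit outside this lookup: as the text above explains, they follow a separate track with their own distinction between ‘simple’ models and models with systemic risk.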

Are there separate risk classes for general-purpose AI models?
Yes, the AIA distinguishes between general-purpose AI models and general-purpose AI models with systemic risk. Depending on the model’s classification, obligations of differing strictness apply to its providers and authorised representatives. The decisive factor in determining when a general-purpose AI model is considered to pose a systemic risk is whether the model has so-called ‘high impact capabilities’.
What are the deadlines?
Even though the AIA came into force on 1 August 2024, the individual requirements apply gradually. The following deadlines are particularly relevant in practice:
– 2 February 2025: the prohibitions on certain AI practices and the AI literacy obligations apply;
– 2 August 2025: the rules on general-purpose AI models, governance and penalties apply;
– 2 August 2026: the AIA applies in general, including most requirements for high-risk AI systems;
– 2 August 2027: the requirements for high-risk AI systems that are safety components of products regulated under EU harmonisation legislation apply.

Are there any exceptions or transitional arrangements for existing systems?
Yes, the AIA provides for individual exceptions and transitional arrangements for AI systems and general-purpose AI models that were already placed on the market or put into service before the relevant requirements became applicable (so-called existing systems). For example, the AIA applies to high-risk AI systems that were already placed on the market or put into service before 2 August 2026 only if they are subsequently significantly modified. For general-purpose AI models placed on the market before 2 August 2025, a transitional period applies until 2 August 2027. In addition, the AIA provides for further transitional arrangements for certain types of AI systems or models, such as components of certain large-scale IT systems. No transitional arrangements or exemptions apply to the provisions on prohibited AI practices; these have been generally binding since 2 February 2025.
What do ‘placing on the market’ and ‘putting into service’ mean?
‘Placing on the market’ is not a new legal term. It has long been used in product safety law and is also used in other legal acts such as the MDR or the CRA. There is already established ECJ case law to determine more precisely when a product has been placed on the market. This can be applied to the AIA. According to this case law, a product is placed on the market when it has left the manufacturing process established by the manufacturer and has entered a marketing process in which it is offered to the public in a condition ready for consumption or use. According to Article 3(11) AIA, ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
What is the geographical scope of the AIA?
The geographical scope of the AIA is based on the so-called market location principle. According to this principle, the Regulation applies in principle if an AI system or model or its output is placed on the market or used in the Union. A company’s headquarters or a branch office in the EU is therefore not a mandatory requirement.
What is the personal scope of application of the AIA?
The AIA covers the entire AI value chain. The field of operators addressed is broad and includes not only providers and deployers, but also authorised representatives, product manufacturers, importers and distributors of AI systems, as well as providers of general-purpose AI models and their authorised representatives. In certain situations, actors may play more than one role at the same time and must therefore fulfil all relevant obligations cumulatively. Companies must carefully examine their role in order to determine the obligations that apply to them. Use our free quick check for an initial assessment.
What are the obligations?
The obligations that economic operators must fulfil depend both on the risk classification of the AI system and on the role of the operator. As a general rule, the higher the risk, the greater the regulatory requirements. In addition, the AIA contains specific requirements for general-purpose AI models (GPAI) that may apply in addition to the requirements for AI systems.

What does having AI literacy entail?
Regardless of the risk class, all companies that use or operate AI systems are obliged to promote AI literacy in accordance with Article 4 AIA. Measures must be taken to ensure that employees develop a basic understanding of the functioning, risks, limitations and legal and ethical implications of AI systems. This also applies to competencies in areas such as data protection and cybersecurity. The development of internal training and further education programmes and suitable governance structures is therefore essential in order to meet this obligation. In line with the risk-based approach of the AIA, the specific measures for AI literacy must be adapted to the respective context and the existing knowledge of the users.
What are the consequences of non-compliance?
Market surveillance authorities have extensive powers of investigation, remedy and sanction. Violations may result in product warnings and heavy fines, among other things. In relation to AI systems, fines of up to 35 million euros or 7% of the previous year’s global turnover may be imposed, depending on the nature and severity of the violation. Violations of the rules on general-purpose AI models can be punished with fines of up to 15 million euros or 3% of the previous year’s global turnover. In addition to official measures, affected parties or competitors may bring private law actions for damages, with the associated risk of reputational harm.
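As a rough orientation, the headline fine caps above can be expressed as simple arithmetic: for undertakings, the upper limit is the higher of the fixed amount and the turnover-based percentage. The function below is an illustrative sketch of that calculation, not a penalty calculator, and the actual fine depends on the nature and severity of the violation.

```python
# Illustrative arithmetic only, not legal advice. For undertakings, the
# higher of the fixed amount and the turnover-based percentage applies
# as the upper fine limit; the figures restate the caps mentioned above.

def fine_cap_eur(prior_year_global_turnover_eur: int, *, gpai: bool = False) -> int:
    """Upper fine limit in euros (sketch).

    AI systems: up to 35 million euros or 7% of global annual turnover.
    GPAI models: up to 15 million euros or 3% of global annual turnover.
    """
    fixed, pct = (15_000_000, 3) if gpai else (35_000_000, 7)
    return max(fixed, prior_year_global_turnover_eur * pct // 100)

# With 1 billion euros of turnover, 7% (70 million) exceeds the
# 35 million floor, so the cap is 70 million euros.
print(fine_cap_eur(1_000_000_000))  # 70000000
```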
What should companies bear in mind when using AI?
Before AI systems are used in a company, they must be assessed in accordance with the AIA. This involves determining
– whether the system is an AI system within the meaning of the regulation,
– which risk class it belongs to in the specific usage scenario,
– what role the company plays (e.g. as a provider or deployer) and
– what obligations this entails for the company.
Use our free quick check for an initial assessment.
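For documentation purposes, the four assessment steps above could be captured as a structured checklist entry per system. The sketch below is purely illustrative: all field names and the example system are hypothetical, not drawn from the AIA.

```python
# A minimal sketch of recording the four assessment steps as a
# structured checklist entry. All field names and the example system
# are hypothetical and purely illustrative.
from dataclasses import dataclass, field

@dataclass
class AIAssessment:
    system_name: str
    is_ai_system: bool            # an AI system within the meaning of the AIA?
    risk_class: str               # risk class in the specific usage scenario
    company_roles: list = field(default_factory=list)  # e.g. provider, deployer
    obligations: list = field(default_factory=list)    # resulting duties

record = AIAssessment(
    system_name="CV screening tool",
    is_ai_system=True,
    risk_class="high_risk",
    company_roles=["deployer"],
    obligations=["human oversight", "logging", "use per instructions"],
)
print(record.risk_class)  # high_risk
```

Keeping such records per system also feeds directly into the gap analysis recommended below.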
What should companies do now?
Given the wide range of requirements, companies should check whether their products or systems are affected by the AIA and clarify what role they play in the value chain. This can be done quickly and easily with our quick check. A gap analysis should then be carried out to check which requirements relating to the product or system have already been met and what still needs to be implemented. It is also advisable to review contracts with suppliers and adapt them to the new requirements.