AI Act 2024 overview for mechanical engineers and product manufacturers

The new EU AI Act explained in plain language. Learn what the AI regulation means for your products and how to implement the requirements successfully.

Introduction and background

The European Union has set a milestone in the regulation of artificial intelligence with the AI Act (Artificial Intelligence Act). As the world’s first comprehensive AI regulation, the AI Act marks a decisive step in Europe’s digital transformation. In a global competition increasingly shaped by AI technologies, the EU positions itself between the more restrained regulatory approach of the United States and state-directed AI development in China.

The development of the AI Act reflects the EU’s effort to promote innovation while creating a clear framework for the safe and trustworthy use of AI technologies. After intensive negotiations, the AI Act was adopted by the European Parliament on 13 March 2024. The regulation foresees a staged rollout that gives companies time to adapt: the bans on prohibited practices take effect six months after entry into force, the rules for general-purpose AI models apply after twelve months, and most remaining provisions, including the bulk of the high-risk requirements, apply after 24 months, with an extended transition for high-risk AI embedded in products such as machinery.

The goals of the AI Act are far-reaching: in addition to ensuring safety and compliance with fundamental rights, the regulation aims to provide legal certainty for investments. It also intends to improve governance of AI systems and support effective enforcement of existing laws. A central concern is the development of a single market for lawful, safe, and trustworthy AI applications in the EU.

Scope and definitions

Understanding the exact scope of the AI Act is crucial for practical application. The regulation follows a twofold approach: it defines where — in which geographic and legal space — the rules apply, and it specifies what counts as an AI system under the regulation. This clear delimitation is particularly important for mechanical engineering, where AI technologies are often embedded in complex systems and the boundary between classic automation and AI can be fluid.

Geographical scope

The reach of the AI Act is deliberately broad and follows an approach similar to that of the General Data Protection Regulation (GDPR). The regulation applies not only within EU borders but also has a significant extraterritorial dimension. It covers all providers who place AI systems on the market or put them into service in the EU, regardless of where they are headquartered. Users of AI systems located in the EU are also subject to the regulation. The scope is particularly extensive for providers and users whose AI systems produce outputs that are used within the EU.

This broad interpretation has major consequences for international companies: even if they have no establishment in the EU, they must comply with the AI Act as soon as their AI systems or their outputs reach the EU market. This represents a complex compliance challenge for globally active companies.

Material scope

The AI Act’s definition of AI systems is technology-neutral and future-oriented. An AI system is understood as a machine-based system that operates with a degree of autonomy and infers from the inputs it receives how to generate outputs such as predictions, recommendations, or decisions. In practice this covers machine learning, logic- and knowledge-based approaches, and statistical methods such as Bayesian estimation and optimisation procedures.

For mechanical engineering, this covers many use cases: predictive maintenance systems often use machine learning to predict maintenance needs. In quality control, AI-supported image processing systems are used. Autonomous robotic systems and AI-driven process optimization are further examples that fall within the scope.
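
To make the material scope tangible, here is a minimal Python sketch of what such a predictive maintenance system can look like, using scikit-learn. The sensor features, the synthetic data, and the failure rule are invented purely for illustration; a model of this kind, deployed productively, would count as an AI system under the regulation.

    # Predictive maintenance sketch: a machine-learning model that estimates
    # failure risk from sensor readings. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    # Features: vibration [mm/s], bearing temperature [°C], operating hours
    X = rng.normal(loc=[2.0, 60.0, 5000.0], scale=[0.5, 8.0, 2000.0], size=(500, 3))
    # Label: failure within the next maintenance interval (invented rule)
    y = ((X[:, 0] > 2.5) & (X[:, 1] > 65.0)).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    current_state = np.array([[2.8, 71.0, 6200.0]])
    print("failure probability:", model.predict_proba(current_state)[0, 1])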

The regulation also provides important exemptions: systems used purely for research and development before being placed on the market are excluded, as are legacy systems placed on the market before the regulation’s entry into force, provided they are not substantially modified afterwards. Free and open-source AI components also largely fall outside the regulation unless they are deployed as part of a prohibited or high-risk application.

Risk-based regulatory approach

The AI Act follows a risk-based approach that categorises AI systems by their potential for harm. This method enables proportionate regulation, imposing stricter requirements for higher-risk applications.
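
The tiering can be summarised in a compact taxonomy. The following Python sketch is our own illustrative shorthand; the fourth, minimal-risk tier is implicit in the regulation and carries no specific AI Act obligations.

    # The AI Act's risk pyramid as a simple taxonomy (illustrative shorthand).
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited practice"
        HIGH = "high-risk system with extensive requirements"
        LIMITED = "transparency obligations only"
        MINIMAL = "no specific AI Act obligations"

    print(RiskTier.HIGH.value)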

Prohibited practices

At the top of the risk pyramid are AI applications deemed to pose an unacceptable risk and therefore fundamentally prohibited. These include systems for manipulation via subliminal techniques or those that intentionally exploit vulnerabilities of specific groups. Social scoring by public authorities and real-time remote biometric identification in public spaces are also generally prohibited, although the latter may be permitted for law enforcement under strict conditions.

These prohibitions are strongly aligned with the EU Charter of Fundamental Rights and aim to protect fundamental rights and freedoms. For mechanical engineering these bans are generally less relevant, but they underscore the EU’s human-centric regulatory approach.

High-risk AI systems

The area of high-risk AI systems, particularly relevant for mechanical engineering, comprises two main categories: AI systems that serve as safety components of products subject to an EU conformity assessment, and systems that pose significant risks to health, safety, or fundamental rights.

In the mechanical engineering context this especially concerns safety-relevant control systems, quality-critical process monitoring, autonomous robotic systems, and human-machine collaboration systems. Classification must be considered in close connection with the Machinery Regulation, which sets specific safety requirements for machines.
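
A simplified, non-authoritative way to express these two classification routes in code is sketched below; the helper function is our own invention, and the real legal assessment naturally has more nuances.

    # Simplified high-risk check mirroring the two routes described above:
    # (a) the AI system is a safety component of a product that requires
    #     third-party conformity assessment under EU harmonisation law
    #     (e.g. the Machinery Regulation), or
    # (b) the use case is listed in Annex III of the AI Act.
    def is_high_risk(is_safety_component: bool,
                     product_needs_third_party_assessment: bool,
                     listed_in_annex_iii: bool) -> bool:
        route_a = is_safety_component and product_needs_third_party_assessment
        return route_a or listed_in_annex_iii

    # Example: an AI-based safety controller in a machine subject to
    # third-party assessment.
    print(is_high_risk(True, True, False))  # -> True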

Limited-risk systems

The third category in the AI Act’s risk-based approach covers limited-risk systems. These are mainly subject to transparency requirements so that users can interact with them in an informed manner. The key point is that the use of AI must be recognisable: when people interact with an AI system, they must be informed of this. This concerns, for example, chatbots in customer support or AI-assisted advisory systems.

Of particular relevance are labelling obligations for deepfakes and biometric categorisation systems. These requirements reflect the aim of promoting transparency and trustworthiness in AI usage without stifling innovation through excessive regulation.

Requirements for high-risk AI systems

The detailed requirements for high-risk AI systems are the core of the AI Act. They build on established quality management and product safety standards but go beyond them in many areas to address AI-specific challenges.

Technical documentation

The documentation requirements of the AI Act build on proven concepts of technical documentation, such as those known from the Machinery Directive. They are, however, extended by AI-specific aspects. Technical documentation must provide comprehensive insight into the system, from the basic architecture to development methods and validation procedures.

Particular importance is attached to the risk management system, which can be aligned with ISO/IEC 42001 and the AI-specific risk management guidance of ISO/IEC 23894. These standards offer a structured framework for identifying, assessing and treating risks. Risk management must be understood as a continuous process accompanying the entire lifecycle of the system.
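
As a rough illustration of what a living risk register can look like, here is a minimal Python sketch; the field names and the 1-to-5 scoring scale are our own assumptions, not something prescribed by the standards.

    # Minimal living risk register in the spirit of ISO/IEC 42001 / 23894.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Risk:
        hazard: str        # e.g. "false negative in defect detection"
        severity: int      # 1 (negligible) .. 5 (critical)
        likelihood: int    # 1 (rare) .. 5 (frequent)
        mitigation: str
        next_review: date  # risk management is a continuous process

        @property
        def level(self) -> int:
            return self.severity * self.likelihood

    register = [
        Risk("missed bearing failure", 4, 2,
             "human review of low-confidence cases", date(2025, 6, 1)),
    ]
    for r in sorted(register, key=lambda r: r.level, reverse=True):
        print(f"{r.hazard}: level {r.level}, review by {r.next_review}")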

Data governance

The quality of training data is critical to the performance and reliability of AI systems. The AI Act therefore imposes special requirements on data management. Training data must be relevant and representative for the intended use case, while also ensuring accuracy and completeness.

The situation becomes particularly complex when personal data are processed. Here the requirements of the AI Act and the GDPR interact: in addition to the technical quality of the data, data protection aspects such as lawfulness of processing and privacy by design must be considered. Documentation of data processing must meet the requirements of both regulations.
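
To illustrate the data-quality side, here is a small pandas sketch that checks the completeness and representativeness of a training set; the column names and thresholds are assumptions made for the example.

    # Illustrative data-governance checks: completeness and representativeness.
    import pandas as pd

    def validate_training_data(df: pd.DataFrame,
                               machine_type_col: str = "machine_type") -> list[str]:
        findings = []
        # Completeness: flag columns with excessive missing values
        for col, share in df.isna().mean().items():
            if share > 0.05:
                findings.append(f"column '{col}': {share:.0%} missing values")
        # Representativeness: every machine type in scope should be covered
        for machine_type, share in df[machine_type_col].value_counts(normalize=True).items():
            if share < 0.10:
                findings.append(f"'{machine_type}' underrepresented ({share:.0%})")
        return findings

    df = pd.DataFrame({
        "machine_type": ["press"] * 90 + ["lathe"] * 8 + ["mill"] * 2,
        "vibration": [2.0] * 99 + [None],
    })
    print(validate_training_data(df))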

Transparency and user information

Transparency is a key principle of the AI Act and is reflected in extensive information duties. Providers of high-risk AI systems must supply detailed user information that goes well beyond classic operating manuals. They must clearly define the specific intended use and outline the system’s limitations.

Communicating the system’s performance level and accuracy is particularly important. Users must be able to understand the reliability they can expect and which factors influence system performance. Maintenance requirements and calibration rules must also be clearly documented.
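
One way to keep this information consistent and machine-readable across product variants is a structured record, sketched here in Python; the structure and field names are our own illustration, not a format mandated by the AI Act.

    # Sketch of a machine-readable "instructions for use" record.
    from dataclasses import dataclass, field

    @dataclass
    class UserInformation:
        intended_purpose: str
        accuracy: str                  # communicated performance level
        known_limitations: list[str] = field(default_factory=list)
        maintenance: str = ""
        calibration: str = ""

    info = UserInformation(
        intended_purpose="surface defect detection on aluminium castings",
        accuracy="98.5% recall on the validation set in the technical file",
        known_limitations=["not validated for reflective coatings",
                           "performance degrades below 200 lux"],
        maintenance="revalidate after every model update",
        calibration="camera calibration check every 1,000 operating hours",
    )
    print(info.intended_purpose)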

Human oversight

The concept of human oversight is fundamental for the safe operation of high-risk AI systems. The AI Act requires effective human supervision by individuals capable of understanding the systems and intervening when necessary. This requires careful organisation of oversight with clear responsibilities and competencies.

Practical implementation is guided by ISO/IEC 42001 and related standards. The qualification of supervisors is crucial: they must understand both the technical aspects of the system and its impacts in the application context. Appropriate monitoring tools and technical intervention options must be provided.
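
A common implementation pattern is confidence-based escalation: the system decides automatically only when it is sufficiently sure and otherwise routes the case to a qualified human. In the following sketch, the threshold and the review queue are illustrative assumptions.

    # Oversight sketch: low-confidence decisions go to a human reviewer.
    CONFIDENCE_THRESHOLD = 0.90
    review_queue: list[dict] = []

    def decide(item_id: str, ai_label: str, confidence: float) -> str:
        if confidence >= CONFIDENCE_THRESHOLD:
            return ai_label  # automatic decision
        # Below threshold: escalate so a human can understand and intervene
        review_queue.append({"item": item_id, "suggestion": ai_label,
                             "confidence": confidence})
        return "PENDING_HUMAN_REVIEW"

    print(decide("part-001", "OK", 0.97))      # -> OK
    print(decide("part-002", "DEFECT", 0.62))  # -> PENDING_HUMAN_REVIEW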

Conformity assessment and standards

Conformity assessment under the AI Act builds on established concepts of the EU’s New Legislative Framework but expands them to include AI-specific aspects.

Conformity assessment procedures

The AI Act foresees two fundamental routes for conformity assessment. Internal control (self-assessment) is the route for most high-risk AI systems; for AI components in products that already undergo a conformity assessment under other EU regulations, such as machines falling under the Machinery Regulation, the AI requirements are integrated into the existing sectoral procedure.

Certain standalone high-risk AI systems, notably biometric systems and cases where harmonised standards are not applied in full, require assessment by a notified body. This body reviews not only the technical documentation but also the manufacturer’s quality management system. Requirements are aligned with ISO 9001 and the emerging ISO/IEC 42000 series for AI management systems.

Harmonised standards and norms

Technical implementation of the AI Act will be largely supported by harmonised standards. Existing standards such as ISO/IEC 42001 for AI management systems or ISO/IEC 23894 for AI risk management provide important foundations. ISO/IEC 42001 is expected to play a key role as it covers central aspects like quality management and risk control.

The European Commission has announced an extensive standardisation mandate that will prompt the development of further harmonised standards. These will focus on areas such as data quality, robustness and cybersecurity. For manufacturers, applying harmonised standards is practically important because it establishes a presumption of conformity.

Obligations of economic operators

The AI Act defines differentiated obligations for various actors in the AI value chain. These duties complement existing obligations from other regulations.

Manufacturer obligations

Manufacturer obligations are particularly comprehensive and build on the established concept of product responsibility. Central is the implementation of a quality management system according to ISO 9001, supplemented by the AI-specific requirements of ISO/IEC 42001. Technical documentation must not only demonstrate conformity with the AI Act but also consider interfaces with other relevant regulations such as the Machinery Regulation and the Product Liability Directive.

Post-market monitoring is of special importance: manufacturers must monitor the performance of their AI systems in practical use and be able to react quickly to problems. This requires appropriate monitoring systems and defined processes for corrective actions.
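
As a sketch of what such monitoring can look like, the following compares live input statistics against reference values from development and raises an alert on drift; the window size and the three-sigma criterion are our own illustrative choices.

    # Post-market monitoring sketch: alert when live inputs drift away from
    # the reference distribution observed during development.
    from collections import deque
    from statistics import mean

    REF_MEAN, REF_STD = 2.0, 0.5   # vibration statistics from validation data
    window: deque = deque(maxlen=50)

    def observe(vibration: float) -> None:
        window.append(vibration)
        if len(window) == window.maxlen:
            drift = abs(mean(window) - REF_MEAN)
            # Three-sigma criterion on the standard error of the window mean
            if drift > 3 * REF_STD / (window.maxlen ** 0.5):
                print(f"ALERT: input drift detected (delta = {drift:.2f}) "
                      "-> trigger corrective-action process")

    for v in [2.6] * 50:           # sustained shift in sensor readings
        observe(v)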

Importers and distributors

Importers and distributors bear important responsibilities in the supply chain. Before placing an AI system on the market or making it available, they must verify its conformity. This includes checking the CE marking and the completeness of the documentation.

Traceability in the supply chain plays a central role: all economic operators must document from whom they obtained AI systems and to whom they supplied them. Incidents must be reported and documented, with reporting obligations modelled on existing systems such as the RAPEX product safety database.
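
A minimal sketch of such traceability records, keyed by serial number, is shown below; the field names are our own illustration.

    # Supply-chain traceability sketch: record where each AI system came
    # from and to whom it was supplied.
    from dataclasses import dataclass

    @dataclass
    class TraceRecord:
        serial_number: str
        received_from: str
        supplied_to: str | None = None  # None while still in stock

    ledger: dict[str, TraceRecord] = {}

    def receive(serial: str, supplier: str) -> None:
        ledger[serial] = TraceRecord(serial, received_from=supplier)

    def supply(serial: str, customer: str) -> None:
        ledger[serial].supplied_to = customer

    receive("AI-CTRL-0001", "Provider GmbH")
    supply("AI-CTRL-0001", "Machine Builder AG")
    print(ledger["AI-CTRL-0001"])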

Market surveillance and sanctions

Enforcement of the AI Act will be carried out by a network of national authorities in cooperation with the European Artificial Intelligence Board. This new body will play a coordinating role and ensure uniform application of the rules.

The sanctions framework is deliberately strict: violations can be punished with fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. The level of penalties is determined by the severity of the infringement and the size of the company. National authorities may impose additional sanctions.

Conclusion and outlook

The AI Act represents a milestone in AI regulation and will significantly shape the development and deployment of this technology in Europe. For companies this initially means a considerable implementation effort. In the long run, however, the regulation also offers opportunities: it creates legal certainty, fosters trust in AI systems and can thereby contribute to broader acceptance of the technology.

Decisive for success will be addressing implementation early and systematically. The forthcoming harmonised standards will provide important guidance. Companies should use the transition period to adapt their systems and processes and to train their staff accordingly.