You are reading Part 1 of our series of articles on the EU AI Act
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI law, and it is taking effect in stages: the first prohibitions applied from February 2025, followed by the rules for general-purpose AI models in August 2025, while the requirements for high-risk AI systems take effect in August 2026. This affects far more than just tech companies. The AI Act covers all companies that develop, use, or import AI systems, regardless of industry or company size.
Does your company use AI in quality assurance, customer interaction, diagnostics, or production? If so, a key question arises: When do which rules apply to you?
The AI Act takes a risk-based approach and distinguishes between four risk levels. The highest level covers prohibited AI practices, such as social scoring and biometric mass surveillance, which have been banned since February 2025. Below that sits the high-risk category, which forms the core of the regulation: strict obligations apply here to risk management, data quality, transparency, and human oversight. Systems with limited risk, such as chatbots, are subject to labeling requirements. AI with minimal risk, such as spam filters, remains largely unregulated.
The key point is that high-risk AI is prevalent in almost every industry.
However, many companies are unaware of which category their AI applications fall into and, consequently, what obligations they have.
The implementation timeline is ambitious. High-risk obligations take effect in August 2026, and the EU has established clear enforcement mechanisms, including national market surveillance authorities and a European AI Office. Violations are punishable by fines of up to €35 million or seven percent of a company's global annual turnover, whichever is higher.
However, financial penalties are only one aspect of the risk. More serious consequences include the loss of market confidence and competitiveness. Those who start too late have less leeway for corrections and risk delays in launching new products to market.
The obligations apply equally to all industries: risk management, data governance, transparency, human oversight, technical documentation, and a functioning quality management system are mandatory for high-risk AI. The AI Act is not a paper tiger. It will be enforced.
The path to AI Act compliance begins with a few basic, industry-agnostic steps.
Note that classification is not always clear-cut, especially at the boundaries between risk levels, where there is room for interpretation. An early technical assessment prevents unnecessary detours later on.
In the second part of this series, which will be available soon, we will discuss the industry-specific features of the AI Act for the medical technology, automotive, finance, and public administration sectors. If you would like more information, please reach out to us directly to discuss your needs. sepp.med supports companies across all industries with complex compliance requirements in regulated software development.
Tip: If you would like to explore this topic further, we recommend the Afterwork Exchange hosted by sepp.med on March 19, 2026. Florian Prester, CEO of sepp.med, will give a presentation on “The EU AI Act and its impact on regulated industries: Med-Tech, Mobility, and Public.”
How does AI evolve from a buzzword to a tangible business impact? At the Afterwork Exchange, “Business Impact: AI,” you can look forward to brief, practical insights and an overview of the EU AI Act. Afterwards, there will be an opportunity to network at a get-together and dinner.
When: March 19, 2026, 5:00 p.m.
Where: sepp.med in Röttenbach
Does the AI Act also apply to companies that only use AI rather than develop it?
Yes, the AI Act applies to providers and deployers alike. Even internal AI applications can fall into the high-risk category, such as those used for human resources management or quality control.

How much time do companies have to prepare?
The high-risk obligations take effect in August 2026. Depending on the industry and product type, different transition periods apply. Taking an early inventory is the most important first step.

Are there exemptions for small and medium-sized enterprises?
The AI Act provides concessions for SMEs, including simplified documentation and preferential access to regulatory sandboxes. However, the core obligations for high-risk AI apply without restriction.

What happens if a company does nothing?
There is a risk of fines and market restrictions. More critical, though, is the potential loss of trust among customers and partners, especially in regulated industries.