Artificial intelligence is permeating regulated industries: as a feature in medical devices, as decision support in finance, as a development tool in your teams. But AI systems behave fundamentally differently from traditional software. They do not deliver deterministic results, their decision-making logic often remains opaque, and their performance can deteriorate over time. The quality assurance of AI systems therefore requires new methods.
With the EU AI Act (effective August 2026 for high-risk systems), updated FDA guidance, and new ISO standards such as ISO/IEC 42001, quality assurance for AI systems is becoming mandatory. sepp.med supports you in securing your AI components and AI tools so that they withstand audits and comply with regulatory requirements.
We do DIGITALIZATION – but SECURE!
AI-based components of your product are subject to product approval and must be validated like any other safety-related function.
In regulated environments, AI tools for code generation, documentation, or testing are considered computerized systems and must be validated.
Whether it's a product component or a work tool, quality assurance for AI requires new methods, adapted metrics, and regulatory expertise.
✅ Our solution: We establish statistical acceptance bands and metamorphic testing: instead of exact matches, we check behavioral consistency and quality ranges. This makes non-deterministic behavior validatable.
✅ Our solution: We implement continuous model monitoring with drift detection. Kolmogorov-Smirnov tests, the population stability index, and automated alerts detect performance degradation before it becomes a problem.
✅ Our solution: We integrate explainability methods such as SHAP, LIME, or Grad-CAM. This allows you to document which features influence decisions. Audit-proof and traceable.
✅ Our solution: In regulated environments, yes. We create validation protocols for AI development tools in accordance with GAMP 5 / CSV requirements, including risk assessment and usage guidelines.
✅ Our solution: We conduct a gap analysis against the EU AI Act, ISO/IEC 42001, and industry-specific standards. You receive a roadmap with prioritized measures and clear deadlines.
✅ Regulatory compliance: Evidence and documentation comply with the EU AI Act, ISO/IEC 42001, industry-specific standards (MDR/IVDR, ISO 26262, MaRisk), and proven frameworks such as GAMP 5.
✅ Audit-ready AI documentation: Complete traceability from data origin through model architecture to validation evidence. Technical documentation that satisfies notified bodies and regulatory authorities.
✅ Measurable model quality: Performance baselines, fairness metrics, and robustness evidence make the quality of your AI systems quantifiable and comparable.
✅ Early warning system for quality loss: Continuous monitoring detects model drift, concept drift, and feature attribution shifts before they affect production operations.
✅ Human-in-the-loop processes: Defined oversight mechanisms satisfy the EU AI Act's requirements for human oversight and the ability to intervene (Article 14).
Systematic gap analysis against the EU AI Act, ISO/IEC 42001, and industry-specific requirements. Risk classification of your AI systems and tools. Result: prioritized roadmap with quick wins and strategic measures.
Design and implementation of validation studies for AI components: bias and fairness testing, robustness testing, performance baseline establishment. For medical devices: multi-reader multi-case studies (MRMC) according to IMDRF specifications.
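To illustrate what a fairness gate can look like in practice, here is a minimal sketch of a demographic parity check; the predictions, group labels, and the 0.1 acceptance threshold are purely illustrative placeholders, not values from a real project.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative binary predictions and a binary protected attribute.
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

dpd = demographic_parity_difference(y_pred, group)
# The acceptance threshold is project-specific and fixed during risk analysis.
assert dpd <= 0.1, f"Fairness gate failed: demographic parity difference {dpd:.2f}"
```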
Integration of explainability methods (SHAP, LIME, Grad-CAM) into your AI pipeline. Audit-compliant documentation of decision-making factors. Evidence management for transparency requirements under the EU AI Act.
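As a simplified illustration, the following sketch shows how per-feature attributions could be extracted with SHAP and reduced to a documentable importance ranking; the scikit-learn dataset and model are stand-ins, not a client pipeline.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in data and model; in practice this is the model under assessment.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles: each row
# attributes a single prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature: a simple, documentable ranking
# of which features drive the model's decisions overall.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```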
Risk-based validation of GitHub Copilot, Claude Code, and other GenAI tools for use in regulated development environments. Usage guidelines, validation protocols, and training concepts.
Setup of drift detection pipelines for your AI models in production: automated alerts, retraining triggers, and documented escalation processes. Post-market surveillance for AI-based medical devices.
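A minimal sketch of what such a drift check can look like for a single numeric input feature; the thresholds (PSI > 0.2, p < 0.01) are common rules of thumb and would be fixed per project in the risk analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins at a small epsilon to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # feature distribution at training time
live = rng.normal(0.3, 1.0, 5_000)       # shifted distribution in production

ks_result = ks_2samp(reference, live)
psi_value = psi(reference, live)

# Rules of thumb: PSI > 0.2 signals significant drift; a small KS p-value
# rejects the hypothesis that both samples share one distribution.
if psi_value > 0.2 or ks_result.pvalue < 0.01:
    print(f"DRIFT ALERT: PSI={psi_value:.3f}, KS p={ks_result.pvalue:.2e}")
```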
Support with conformity assessments, technical documentation, and notified body interactions. For FDA submissions: predetermined change control plans (PCCPs) for self-learning systems.
How does AI go from being a buzzword to having a real business impact? At the Afterwork Exchange “Business Impact: AI,” you can look forward to short practical insights and a look at the EU AI Act. Afterwards, you will have the opportunity to network at the get-together and dinner.
When: March 19, 2026, 5:00 p.m.
Where: sepp.med in Röttenbach
In regulated environments (medical technology, pharmaceuticals, automotive with ASPICE): Yes, if the generated code is incorporated into the product. AI tools are then considered computerized systems and are subject to CSV/GAMP 5 requirements. The validation effort is risk-based: a Copilot-generated unit test requires different evidence than AI-generated product code in a Class III medical device.
The EU AI Act is a horizontal law that regulates AI systems regardless of industry. For medical devices and vehicles, it applies in addition to the MDR/IVDR and vehicle type-approval law, respectively. The MDCG 2025-6 guidance clarifies the interfaces. In practice, this means double compliance requirements, but also synergies in documentation.
With statistical acceptance bands and metamorphic testing. Instead of exact expected values, you define quality ranges (e.g., accuracy between 85% and 92%). Metamorphic testing checks whether logical relationships are maintained: a rotated image, for example, should still be classified correctly. Both approaches are established practice for testing non-deterministic systems.
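A minimal sketch of both checks, assuming a model object with a `predict` method; the 95% consistency and 85–92% accuracy thresholds are illustrative acceptance criteria, not fixed standards.

```python
import numpy as np

def test_rotation_consistency(model, images: np.ndarray) -> None:
    """Metamorphic relation: for orientation-independent tasks, a 90-degree
    rotation should not change the predicted class."""
    rotated = np.rot90(images, k=1, axes=(1, 2))  # rotate each H x W image
    consistency = np.mean(model.predict(images) == model.predict(rotated))
    assert consistency >= 0.95, f"Rotation consistency {consistency:.2%} below band"

def test_accuracy_band(model, images: np.ndarray, labels: np.ndarray) -> None:
    """Acceptance band instead of an exact expected value; the upper bound
    also flags implausible jumps that can indicate data leakage."""
    accuracy = np.mean(model.predict(images) == labels)
    assert 0.85 <= accuracy <= 0.92, f"Accuracy {accuracy:.2%} outside the 85-92% band"
```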
Model drift refers to the gradual deterioration in model performance when real data deviates from the training data. Most ML models experience such performance losses over time. The EU AI Act requires continuous monitoring and mitigation of post-market risks. Without drift monitoring, you risk compliance violations and, more seriously, quality problems in production operations.
Generative AI (LLMs), due to hallucinations and prompt sensitivity; reinforcement learning, due to extremely large state spaces; and computer vision, due to limited adversarial robustness (small pixel changes can cause misclassifications). However, established testing methods exist for each type of AI. The challenge lies in applying them correctly and documenting them for regulators.