Imagine an autonomous vehicle braking and explaining when asked: “I detected the ball on the side of the road and assumed with high likelihood that it was a child.” This exact scenario was the focus of a panel discussion at NVIDIA’s GTC conference in March 2026. AI systems should not only act but also provide comprehensible justifications for their actions. Marco Pavone, NVIDIA’s director of autonomous vehicle research, describes reasoning as the process of “breaking down a complex problem into smaller, more manageable parts and planning an action step by step.”
For regulated industries, this is a promising approach. At the same time, however, it creates a new challenge for quality assurance. The reasoning itself becomes the test subject.
Article 13 of the EU AI Act requires that high-risk AI systems be “sufficiently transparent.” Reasoning traces seem to elegantly meet this requirement. But how reliable are a model’s justifications?
The problem is architectural in nature: parallel computation in Transformer networks cannot be translated into a sequential narrative without loss. NVIDIA addresses this problem through a post-training step that firmly links reasoning and action. This is an important advancement, but it does not fully solve the problem.
For companies in highly regulated industries such as medical technology, automotive, and finance, this presents an opportunity. Those who document reasoning traces and systematically validate them create a new dimension of quality. The principle is familiar from the deterministic world: every statement requires independent verification. Applied to AI reasoning, this means specifically:

- Consistency checks: present the same case in varied phrasings and verify that the decision and its justification remain stable.
- Cross-validation with quantitative XAI methods such as SHAP: do the factors a trace cites match the measured feature attributions?
- Continuous monitoring: track the quality of reasoning traces during ongoing operations, not only at release.
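As a minimal sketch of the consistency-check idea, assuming the system under test returns a discrete decision per input, the same case can be phrased in several ways and the decisions compared. `model_decide` is a hypothetical stand-in for the real model call, not any actual API:

```python
# Sketch of a reasoning-consistency check: the same underlying case is
# phrased in several ways; a validated system should reach the same
# decision each time. All names here are illustrative assumptions.
from collections import Counter

def model_decide(prompt: str) -> str:
    # Placeholder for a real model/API call; the assumption is only that
    # the system returns a discrete decision such as "brake" / "continue".
    return "brake" if "child" in prompt or "ball" in prompt else "continue"

def consistency_check(variants: list[str], threshold: float = 1.0) -> dict:
    """Run all paraphrases and report whether the decisions agree."""
    decisions = [model_decide(v) for v in variants]
    majority, count = Counter(decisions).most_common(1)[0]
    agreement = count / len(decisions)
    return {
        "majority_decision": majority,
        "agreement": agreement,
        "consistent": agreement >= threshold,
    }

variants = [
    "A ball rolls onto the road from the right.",
    "A child's ball appears at the roadside.",
    "An object, likely a ball, enters the lane.",
]
report = consistency_check(variants)
```

In practice the threshold and the set of paraphrases would be fixed in the test specification, so that a drop in agreement is a reproducible, auditable finding rather than an anecdote.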
This three-pronged approach complements existing QA methods. At GTC 2026, NVIDIA’s Yejin Choi made the crucial point: Models that merely imitate fail in rare or new situations. Reasoning capability mitigates this risk. Validating this capability ensures its benefits.
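One such cross-check, comparing the factors a reasoning trace cites with quantitative feature attributions such as SHAP values, can be sketched as follows. All names, the trace text, and the attribution values are illustrative assumptions, not output of any real system:

```python
# Sketch of an alignment check between a natural-language reasoning trace
# and quantitative feature attributions (e.g. SHAP values). The attribution
# dict is assumed to come from a separate XAI pipeline.

def top_features(attributions: dict[str, float], k: int = 3) -> set[str]:
    """Return the k features with the largest absolute attribution."""
    ranked = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)
    return set(ranked[:k])

def trace_alignment(trace: str, attributions: dict[str, float], k: int = 3) -> float:
    """Fraction of the top-k attributed features the trace actually mentions."""
    mentioned = {f for f in top_features(attributions, k) if f in trace.lower()}
    return len(mentioned) / k

trace = ("I detected the ball near the road edge and, given its trajectory "
         "and low distance, assumed a child might follow.")
shap_values = {  # illustrative attributions from a hypothetical model
    "ball": 0.42, "distance": 0.31, "trajectory": 0.18,
    "weather": 0.03, "speed_limit": 0.01,
}
score = trace_alignment(trace, shap_values)
```

A low alignment score does not prove the trace is wrong, but it flags cases where the narrated justification and the measured attribution diverge and a human review is warranted.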
No single procedure meets all the requirements of the EU AI Act, MDR, ISO 26262, or DORA. A robust approach combines reasoning traces for understandable communication with quantitative methods for verifiable results. The ability to independently verify AI results and explanations is becoming a key competency in quality assurance within regulated industries.
Companies that develop this competency early on will be better prepared when auditors first ask about the validation of reasoning traces. sepp.med can help you integrate these new requirements into your existing QA strategy.
What do reasoning traces contribute to quality assurance?
They make AI decisions comprehensible to auditors and subject matter experts in natural language. It is important to validate them through independent verification methods rather than using them as the sole evidence.
How can reasoning traces be validated?
Through consistency checks with varied inputs, through comparison with quantitative XAI methods such as SHAP, and through systematic monitoring of reasoning quality during ongoing operations.
Does the EU AI Act mandate reasoning traces?
No. Article 13 only requires “sufficient transparency” without specifying a particular technique. This allows flexibility, but it also demands a robust justification of the chosen methodology.
Do reasoning traces replace quantitative XAI methods?
No, as they address a different level: comprehensible process narration rather than quantifiable feature attribution. The two approaches complement each other and should be used together.