Engineers Most Skeptical of AI Are Best Positioned to Use It Responsibly at Scale
03/16/26

This article was written by Gokul Krithivasan, Co-Founder and Managing Partner at SecuRESafe (SRES), who has spent over a decade leading safety engineering programs across automotive, robotics, and advanced mobility systems.

It explores why engineers working closest to safety and compliance standards — often the most skeptical of general-purpose AI tools — may actually be best positioned to use them responsibly to accelerate engineering work in safety-critical industries such as automotive and robotics.


The Gap Between AI Adoption and Safety Engineering

According to Anthropic’s Economic Index, compliance and safety engineers rank among the lowest adopters of AI tools across all engineering disciplines. In automotive and robotics, where standards like ISO 26262, IEC 61508, ISO 21448, and ISO 8800 define the state of the art, this gap is striking.

My argument in this post is that the engineers sitting closest to that gap, the ones most skeptical of what generic AI can actually do in safety-critical work, have the most to gain from using it responsibly to deliver real velocity for their organizations.


Why General-Purpose AI Chatbots and Agents Struggle with Standards-Based Technical Work

We have been experimenting and building with Large Language Models (LLMs) since November 2022, when ChatGPT launched on GPT-3.5. These models have improved dramatically through Supervised Fine-Tuning (SFT), Reinforcement Learning from Human or AI Feedback (RLHF/RLAIF), and distillation. But they still struggle with the analytical and architectural work that systems safety engineers perform day to day.

When you ask your favorite general-purpose AI model to help you develop a specific work product for ISO 26262, you’ll get back artifacts that look plausible and read confidently, but are subtly wrong in ways that matter enormously.

Functional safety and autonomy safety standards are structurally distinct from most technical content that these models are trained on. They define objectives, inputs, and required work products but deliberately leave the method flexible and open.

ISO 26262 doesn’t prescribe your Verification & Validation approach step by step as work instructions; it requires you to demonstrate that your approach meets the intent of the standard, and it outlines test methods you can leverage with varying degrees of recommendation based on the Automotive Safety Integrity Level (ASIL). For random hardware failures, the standard gives you a set of thresholds for hardware architectural metrics based on the ASIL: you can point to a completed Failure Modes, Effects, and Diagnostic Analysis (FMEDA) and say “we are done.”
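
To make that contrast concrete, here is a minimal sketch (in Python) of what such a clear completion criterion looks like when expressed in code. The SPFM and LFM targets below are the commonly cited values from ISO 26262-5; verify them against your licensed copy of the standard, and treat the schema itself as illustrative.

```python
# Minimal sketch: checking FMEDA-derived hardware architectural metrics
# against ASIL-dependent targets. Threshold values are the commonly cited
# figures from ISO 26262-5 (Tables 4 and 5); confirm against the standard.
from dataclasses import dataclass

# Single-Point Fault Metric (SPFM) and Latent Fault Metric (LFM) targets, percent.
TARGETS = {
    "ASIL B": {"spfm": 90.0, "lfm": 60.0},
    "ASIL C": {"spfm": 97.0, "lfm": 80.0},
    "ASIL D": {"spfm": 99.0, "lfm": 90.0},
}

@dataclass
class FmedaResult:
    spfm: float  # achieved Single-Point Fault Metric, percent
    lfm: float   # achieved Latent Fault Metric, percent

def metrics_met(result: FmedaResult, asil: str) -> bool:
    """True when both architectural metrics meet the ASIL target."""
    target = TARGETS[asil]
    return result.spfm >= target["spfm"] and result.lfm >= target["lfm"]

print(metrics_met(FmedaResult(spfm=99.2, lfm=91.5), "ASIL D"))  # True: "we are done"
```

The point is not the arithmetic; it is that the exit condition is computable at all. The next two standards offer nothing of the sort.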

For all their complexity, these standards have a recognizable shape that an experienced expert can navigate and validate at each stage. With domain-specific fine-tuning and RLHF, you could argue these models can substantially improve. Our customers are already making good progress down this path.


Why SOTIF and ISO 8800 Are Harder

ISO 21448 and ISO 8800 are considerably harder. SOTIF and AI safety are built around continuous, iterative lifecycles with no hard stopping points specified in the text or easily calculable by experts. You don’t complete a safety analysis and move on: you identify unsafe scenarios, work to address them, discover edge cases through testing, reduce those, and the cycle repeats through the full operational life of the vehicle fleet.
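
To see the structural difference, here is a deliberately oversimplified sketch of that loop. Every function and scenario name is a hypothetical stand-in; the point is purely the shape: there is no computable exit condition.

```python
# Toy sketch of the SOTIF (ISO 21448) iteration. All names are hypothetical
# stand-ins; note the absence of any checklist-style exit condition.

def run_sotif_cycle(open_scenarios: set[str]) -> set[str]:
    """One pass: mitigate known unsafe scenarios, then discover new ones."""
    mitigated = set(open_scenarios)                # design and verify mitigations
    # Testing and field data surface previously unknown-unsafe scenarios.
    discovered = {"low-sun glare at crosswalk"} - mitigated
    return discovered                              # residual open scenarios

open_scenarios = {"occluded pedestrian", "faded lane markings"}
for cycle in range(3):                             # in reality: the fleet's whole life
    open_scenarios = run_sotif_cycle(open_scenarios)
    print(f"cycle {cycle}: {len(open_scenarios)} open scenario(s)")
    # No automatic exit: whether the evidence is "enough" is an expert call.
```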

[Figure: a systems safety engineer evaluating AI-generated outputs under ISO 26262 and ISO 21448 (SOTIF), where ASIL-rated risk assessments require human validation.]

ISO 8800 goes even further: AI systems learn, drift, and encounter distributional edge cases in the field that no pre-deployment analysis or test can fully anticipate. The safety lifecycle doesn’t end at launch, and neither does the need for expert judgment. There are no checklists to run here, no single gate to clear. What’s required is continuous expert engineering judgment about where you are in the product development lifecycle.
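
As one hedged illustration of what in-service monitoring can look like, the sketch below flags distributional drift on a single input feature using a Population Stability Index. The 0.2 trigger is a widely used rule of thumb, not a value taken from ISO 8800.

```python
# Illustrative drift monitor: compare the in-service input distribution
# against the pre-deployment distribution with a Population Stability
# Index (PSI). The 0.2 threshold is a rule of thumb, not a standard value.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of a scalar feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    o, _ = np.histogram(observed, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)   # avoid log(0) on empty bins
    o = np.clip(o / o.sum(), 1e-6, None)
    return float(np.sum((o - e) * np.log(o / e)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)       # distribution seen pre-deployment
field = rng.normal(0.6, 1.2, 10_000)       # drifted in-service distribution
if psi(train, field) > 0.2:                # rule-of-thumb trigger
    print("Distributional drift detected: trigger a safety re-evaluation")
```

A drift flag like this can tell you that something changed; only expert judgment can tell you what it means for the safety case.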

Even the best general-purpose AI model doesn’t fully know whether its approach represents acceptable tailoring or introduces an audit or assessment risk. It doesn’t distinguish between a normative “shall” and an informative “should” the way a seasoned systems safety engineer does. For highly iterative standards like ISO 21448 and ISO 8800 specifically, it has no framework for knowing when “enough” evidence actually exists. That judgment gap is widest precisely where the stakes are highest.


What Makes AI Agents Effective in Safety-Critical Engineering

Well-architected AI agents for safety and systems work keep competent experts meaningfully in the loop at every stage where judgment matters. That requires separating three distinct aspects that we have often seen engineering teams collapse together.

Standards Knowledge Must Be Structured

Standards knowledge needs to be structured, not just retrieved. The relationships among objectives, prerequisites, requirements, and acceptable work product artifacts form a semantic structure. An agent reasoning over a safety case standard like ISO/TS 5083 needs to understand what must be demonstrated and in what order. Building that structure correctly requires experts who have actually applied these standards across different environments and projects.

It’s not just about using a good Retrieval-Augmented Generation (RAG) pipeline; it’s about thinking deeply about the structure and interconnection of the underlying inputs, requirements, and outputs.
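
As a minimal sketch of the difference, assume a hypothetical clause schema: clauses as typed nodes carrying normative strength, prerequisites, and required work products, which an agent can traverse rather than pattern-match. The clause texts below are illustrative placeholders, not quotations from any standard.

```python
# Hypothetical schema for structured standards knowledge: an agent can walk
# prerequisites and distinguish normative from informative material instead
# of retrieving loose passages. Clause contents are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Clause:
    clause_id: str
    objective: str
    normative: bool                                           # "shall" vs. "should"
    prerequisites: list[str] = field(default_factory=list)    # other clause ids
    work_products: list[str] = field(default_factory=list)

CLAUSES = {
    "3-5": Clause("3-5", "Define the item and its boundary", normative=True,
                  work_products=["Item definition"]),
    "3-6": Clause("3-6", "Perform the HARA and determine ASILs", normative=True,
                  prerequisites=["3-5"],
                  work_products=["HARA report", "Safety goals"]),
}

def obligations_for(clause_id: str) -> list[str]:
    """Walk prerequisites so the agent knows what must already be demonstrated."""
    todo, seen = [clause_id], []
    while todo:
        clause = CLAUSES[todo.pop()]
        seen.append(clause.clause_id)
        todo.extend(p for p in clause.prerequisites if p not in seen)
    return seen

print(obligations_for("3-6"))   # ['3-6', '3-5']
```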

Organizational Tailoring Is a Critical Input

Every organization makes interpretive decisions about how it meets the objectives of a standard. These decisions are captured in safety and cybersecurity plan tailoring, previous assessments, and prior audit outcomes. An AI agent without that context gives generic advice, which is exactly what systems safety engineers already distrust.

Translating an organization’s existing approach into something an agent can reason against is where external independent safety experts add real value.
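
One way that translation could look, with hypothetical field names and an illustrative entry: each interpretive decision is recorded alongside its rationale and assessment history, so an agent can ground its output in the organization’s actual approach rather than generic guidance.

```python
# Hypothetical, machine-readable record of tailoring decisions. Field names
# and the example entry are illustrative, not taken from any real project.
TAILORING = {
    "iso26262": {
        "software unit verification": {
            "decision": "Branch coverage used below ASIL D; MC/DC reserved for ASIL D",
            "rationale": "Legacy toolchain; compensating reviews per safety plan v4",
            "assessment_history": ["Accepted by external assessor, 2024 audit"],
        },
    },
}

entry = TAILORING["iso26262"]["software unit verification"]
print(entry["decision"])   # context an agent can reason against, not re-derive
```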

Expert Feedback Improves Agents Over Time

Expert feedback is what makes agents improve over time, especially for iterative standards. For ISO 26262 or IEC 61508, an experienced engineer recognizes what a completed phase looks like. For ISO 21448 and ISO 8800, the question is always:

“Is the current evidence sufficient given what we know now about the ODD and the organizational context?”

That judgment comes from accumulated cycles: what scenario coverage was accepted before, what field incident or Safety Performance Indicator (SPI) violation triggered a re-evaluation, what an assessor actually challenged. When a subject matter expert reviews AI agent-generated work and gives structured feedback, that signal compounds into something genuinely difficult to replicate.
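
A sketch of what that structured feedback might look like as data, with a hypothetical schema: each expert verdict is tied to an artifact, a reason, and an optional trigger, so later sufficiency judgments can cite precedent rather than start from scratch.

```python
# Hypothetical schema for structured expert feedback on agent-generated work.
from dataclasses import dataclass

@dataclass
class ExpertFeedback:
    artifact_id: str            # e.g., a generated SOTIF scenario analysis
    verdict: str                # "accepted" | "revised" | "rejected"
    reason: str                 # what the expert actually challenged
    trigger: str | None = None  # e.g., field incident or SPI violation

HISTORY = [
    ExpertFeedback("sotif-scn-041", "revised",
                   "Night-time VRU coverage insufficient for the stated ODD"),
    ExpertFeedback("sotif-scn-041", "accepted",
                   "Coverage argument matches what the assessor accepted previously"),
]

precedents = [f for f in HISTORY if f.verdict == "accepted"]
print(f"{len(precedents)} accepted precedent(s) available for sufficiency checks")
```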

This Is Also an Architecture Problem

A single general-purpose agent handed a safety case is not the same as a purpose-built system where specialized sub-agents handle SOTIF scenario analysis, AI safety requirements generation, continuous compliance checks, and work product drafting, each informed by your organization’s specific context and each validated by a subject matter expert.

The right multi-agent architecture will look different for a Tier 1 supplier working under ISO 26262 than for a humanoid robotics company navigating ISO 25785 for the first time. Getting that architecture right from the start is what produces responsible velocity rather than rework. The real strength lies in pairing sub-agents that have highly specialized skills with an orchestration layer that is first-principles based and standards-agnostic, so the system scales.
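
As an architectural sketch only, with illustrative names throughout: a thin, standards-agnostic orchestrator routes tasks to specialized sub-agents and refuses to accept anything that has not passed an expert review gate.

```python
# Illustrative multi-agent split: specialized sub-agents, a thin orchestrator
# with no standard-specific logic, and a mandatory human judgment gate.
from typing import Callable

SubAgent = Callable[[str], str]

def sotif_scenario_agent(task: str) -> str:
    return f"[SOTIF scenario analysis draft for: {task}]"

def ai_safety_requirements_agent(task: str) -> str:
    return f"[AI safety requirements draft for: {task}]"

SUB_AGENTS: dict[str, SubAgent] = {
    "sotif_scenarios": sotif_scenario_agent,
    "ai_safety_requirements": ai_safety_requirements_agent,
}

def orchestrate(task_type: str, task: str,
                expert_review: Callable[[str], bool]) -> str:
    """First-principles routing: standard-specific skill lives in sub-agents."""
    draft = SUB_AGENTS[task_type](task)
    if not expert_review(draft):          # the human judgment gate is not optional
        raise ValueError("Draft rejected; route back to the sub-agent with feedback")
    return draft

print(orchestrate("sotif_scenarios", "urban ODD, unprotected left turns",
                  expert_review=lambda draft: True))
```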


The Resistance Problem

We need to address the elephant in the room: some safety and systems engineers will push back on this, hard. AI adoption can feel like a threat to the expertise that defines a well-earned professional identity. If an agent can apply ISO 26262 and will only keep getting better, what exactly is the safety engineer for?

Historically, this is how every major efficiency transition has played out, and the outcome is predictable. The engineers who engaged with new technology became more capable and more valuable. The ones who held back found themselves in a shrinking lane.

We have strong reasons to believe that leadership is not only aware of the efficiency gains on the table but is actively planning medium-term engineering headcount around them. Leaders will pursue those gains with or without grassroots buy-in. The real question is whether safety and systems engineers shape how AI gets responsibly deployed in their industry, or whether that gets decided by people who understand AI but not the safety and cybersecurity implications.

The engineers most skeptical of general-purpose AI are often the ones who understand this domain deeply enough to build AI agents that are more nuanced, hallucinate far less often, and perform better. Your skepticism is an asset: it’s what produces responsible velocity rather than the fast mistakes many companies are already making.


What This Means in Practice

Responsible velocity in safety engineering means moving faster without compromising the judgment that makes the work defensible to an external independent assessor or auditor. That requires the right AI agents with carefully specified skills, the right architecture for your specific standards and organizational context, and competent subject matter experts in the loop from the start.

At SRES, we work directly with automotive and robotics engineering organizations on exactly this. We bring deep expertise in safety and cybersecurity standards compliance and serve as the “experts in the loop” to tailor their multi-agent systems to their organization’s standards portfolio and compliance history. If your organization is figuring out where responsible AI agents fit in your engineering process, learn how SRES can support validating and operationalizing AI-enabled engineering workflows.


Interested in going deeper?

Register for our upcoming Responsible AI and AI safety training programs:

  • ISO 8800, AI-Safety Professional (AISP) Training – SGS-TÜV Saar
  • ISO/IEC 42001:2023 and the EU AI Act, Responsible AI Training
  • ISO/IEC TS 22440, Functional Safety and AI Systems

Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our approach? Explore why teams choose SRES training and how we help automotive organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.

