
From Evidence to Argument: Using GSN to Structure AV Safety Cases
An introduction to Goal Structuring Notation (GSN) and how it helps autonomous vehicle teams build clear, defensible safety cases.
SecuRESafe (SRES) enables organizations to confidently adopt AI-powered engineering tools in safety-critical development by validating their use, reducing systemic risk, and aligning AI-driven workflows with current and emerging safety and cybersecurity standards.
AI-powered tools are transformative, and they are key to accelerating engineering output and product development. They help engineers manage increasing system complexity, automate repetitive tasks, and make better-informed technical decisions, especially in domains where correctness, traceability, and safety assurance are non-negotiable.
While your mandate is to accelerate product development with agentic AI tools, our mandate is to validate that work and protect you from its potential pitfalls. SRES provides the technical safety expertise that takes your AI tool usage from experimental R&D to the production of mature engineering assets with the integrity appropriate for safety-critical applications.
Safety engineering is not meant to be a gatekeeper. It is the discipline that maintains the safety of the public in an increasingly complex world. As AI reshapes how engineering is done, safety engineering must evolve in parallel—becoming faster, more integrative, and better equipped to assess new kinds of tooling and workflows.
While LLM-based AI tools bring tremendous opportunities, they also introduce new risks that compound existing challenges. When parallel development streams (system, hardware, software) all adopt AI tools simultaneously, the compliance friction multiplies. This calls for a specialized and pragmatic safety approach. At SRES, we specialize in facilitating this transformation, ensuring that your engineering rigor keeps pace with the demands of AI-driven development.
Your organization should not have to choose between leveraging agentic tools for engineering productivity and achieving compliance with safety and cybersecurity standards and regulations. SRES resolves this dilemma: our experts validate your automation strategies against existing and emerging standards, allowing you to innovate with confidence while maintaining the high level of safety assurance that the industry and the general public demand.
SRES services apply principles and methodologies compatible with ISO 26262 Part 8, Clause 11 (“Confidence in the use of software tools”) and ISO/IEC TS 22440 (“Artificial intelligence — Functional safety and AI systems”) to ensure forward-looking compliance for your AI-powered engineering workflows.
AI-powered tools are new, but safety engineering is not. SRES experts have practiced systems and safety engineering for decades, navigating complex failure modes, rare edge cases, and critical system vulnerabilities long before the rise of LLMs. Whether you are an integrator or a supplier, your executives want accelerated progress and your safety team wants reduced risk. SRES experts deliver both.
Let us know about your project and our team will be in touch.


Observations from CES 2026 on humanoid robots, current safety strategies, and the gaps that emerge as robots move into shared human environments.

This article, written by an SRES functional safety expert, examines why software failures are treated as systematic rather than random under standards such as ISO 26262.
We’re here to provide the guidance and support you need to navigate the complexities of engineering with responsibility and safety at the forefront.

info@sres.ai
How Can We Help You?
