Linkedin-inYoutube
logotype
  • Consulting
    • Automotive
      • Functional Safety
      • Cybersecurity
      • Autonomous Product Development
      • Electric Vehicle (EV) Development
      • Assurance of AI-based Tools
    • Physical AI
      • Robotics Safety
      • Assurance of AI-based Tools
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Functional Safety
    • Cybersecurity
    • ADS and Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
Let's Talk
From Evidence to Argument: Using GSN to Structure AV Safety Cases
01/16/26
24 Likes

From Evidence to Argument: Using GSN to Structure AV Safety Cases


This article was written by an SRES autonomous systems expert and explores how Goal Structuring Notation (GSN) is used to build clear, evidence-based safety cases for autonomous driving systems, in alignment with standards such as ISO 5083, ISO 15026, and UL 4600. It examines why structured safety arguments are essential for demonstrating ADS safety and earning regulatory confidence.

Looking to go deeper? SRES provides expert-led training across autonomous systems and responsible AI safety, as well as hands-on consulting support to help organizations develop and deploy autonomous products responsibly and safely.


Why Safety Cases Matter for Autonomous Vehicles

In the Autonomous Vehicle (AV) industry, ensuring the safety of vehicles deployed on public roads is essential for earning public trust and enabling the widespread adoption of AVs. To demonstrate safety and obtain regulatory approval, AV companies develop structured artifacts such as a Safety Case: a living document that provides a traceable, evidence-based account of the processes, analyses, and controls implemented to ensure the vehicle operates safely.

ISO 5083 defines an ADS safety case as:

A structured argument, supported by evidence, that provides a compelling, comprehensible, and valid claim that the automated driving system (ADS) equipped vehicle has been developed to achieve safety for a given ADS feature in a given environment.

From Evidence to Argument: Introducing GSN

Safety cases can also be represented graphically through Goal Structuring Notation (GSN), a technique now widely adopted by AV developers. Before GSN, safety arguments often took the form of massive text documents or folders full of test logs. The problem? Evidence without argument is unexplained: it doesn't tell you why those logs prove safety, or what assumptions were made. GSN solves this by visually mapping the logic that connects high-level claims to low-level evidence.

In this post, we’ll explore why GSN has become the de facto standard for arguing safety in the autonomous domain and how it turns complex engineering data into a coherent argument for safety.

What Is Goal Structuring Notation (GSN)?

GSN is a graphical notation used to document the logic of a safety argument. Think of it as a flowchart for logic rather than process. It answers three critical questions:

  1. What are we claiming? (Goals)
  2. How are we arguing it? (Strategies)
  3. Why should you believe us? (Evidence/Solutions)

A GSN diagram uses specific shapes to represent different parts of the argument:

  • The Goal (Rectangle): This is a claim about the system.
  • The Strategy (Parallelogram): This explains the reasoning used to support the goal.
  • The Context (Rounded Rectangle): This sets the scope. A goal can be meaningless without context.
  • The Solution (Circle): This is the bottom of the structure, the actual evidence.
Common elements of a GSN Framework
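The four element types above can be sketched as a minimal data model. This is purely an illustration: the class and field names are our own, not part of the GSN standard, and a real tool would also model relationships such as assumptions and justifications.

```python
from dataclasses import dataclass, field

# Minimal, illustrative model of the four core GSN element types.

@dataclass
class Context:
    text: str  # scope-setting statement attached to a goal

@dataclass
class Solution:
    text: str  # the evidence artifact at the bottom of the argument

@dataclass
class Strategy:
    text: str  # reasoning that decomposes a goal into sub-goals
    subgoals: list = field(default_factory=list)   # child Goal elements
    solutions: list = field(default_factory=list)  # direct evidence

@dataclass
class Goal:
    text: str  # the claim being made about the system
    contexts: list = field(default_factory=list)    # Context elements
    strategies: list = field(default_factory=list)  # Strategy elements
```

With such a model, a safety argument becomes a traversable tree rather than flat prose, which is what enables the traceability discussed next.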

How GSN Improves Traceability and Clarity

A GSN-based visual representation of a safety case gives the reader a clear view of the processes and safety mechanisms involved. Because each goal carries its own context elements, the reader can better understand the rationale behind the goal and its supporting evidence. The tree structure of a GSN spreads horizontally rather than vertically as in a typical text-based safety case, making it easier to trace evidence to the goals and claims it supports. And because GSN arguments are modular and traceable, if evidence supporting a low-level claim is found to be invalid after an update or a modification to a component, the argument can be traced upward to identify which high-level goal would be violated.

Apart from GSN, there are other ways of representing and presenting a safety case. Two commonly used alternatives are the Claims, Arguments, Evidence (CAE) framework and the assurance case structure described in ISO 15026.

The CAE framework is similar to GSN. The difference is that CAE uses claims, arguments, and evidence instead of the goals, strategies, contexts, and solutions of GSN. Although structurally different, both serve the same purpose of representing a safety case. The three pillars of the CAE framework are:

  • Claims: Statements asserted to be true. Claims act as the destination or end result of the logic; they are equivalent to goals in GSN.
  • Arguments: The connection between claims and evidence. An argument explains why the collected evidence is sufficient to satisfy the claim, bridging the gap between high-level assertions and raw data; it is equivalent to a strategy in GSN.
  • Evidence: The artifacts, data, or reports provided to support the arguments, establishing proof for the claims; it is equivalent to a solution in GSN.
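The terminology correspondence above can be captured in a small lookup. This is a hypothetical helper for illustration only; the mapping reflects the comparison made in this post, not any formal equivalence defined by the standards.

```python
# Illustrative mapping from CAE terminology to its GSN counterpart.
CAE_TO_GSN = {
    "claim": "goal",
    "argument": "strategy",
    "evidence": "solution",
}

def to_gsn(cae_term: str) -> str:
    """Return the GSN element name corresponding to a CAE element."""
    return CAE_TO_GSN[cae_term.lower()]

print(to_gsn("Claim"))  # goal
```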

Part 2 of the ISO 15026 standard, released in 2022, details the use of assurance cases as a method of representing safety and defines what a safety case must contain to be valid for certification. ISO 15026 can be considered the primary standard for building a safety case through either GSN or CAE, as it specifies what is to be included in the framework. The structure and terminology described in the standard are covered by GSN, but not vice versa. The standard describes an assurance case in terms of three fields:

  • Main field: A supported claim
  • Evidence field: A set of evidence items
  • Report field: A narrative introduction

Applying GSN to an Automated Emergency Braking (AEB) Feature

To understand how a GSN framework is implemented, consider an Automated Emergency Braking (AEB) feature as a use case. AEB is widely used in modern vehicles; its purpose is to detect pedestrians, vehicles, or other critical obstacles in the vehicle's path of travel and apply the brakes to prevent a collision.

Here, the top-level goal is G1: Braking is acceptably safe within the ODD. This goal carries two context statements, C1: System functions and C2: ODD of the system. As an example, we can decompose this top-level goal with strategy S1: Prove that the braking function is fail-operational, i.e., even when there is a failure in the braking system, the vehicle can still stop and avoid a collision. Under this strategy we decompose into sub-goals: G1.1, a redundant braking channel is provided; G1.2, a secondary perception system supports the detection of critical obstacles when the primary perception system (the camera) fails to do so; and G1.3, the performance of the ML model used to detect and classify vehicles and pedestrians is at an acceptable level. The AV developer can then provide evidence as solutions to satisfy these sub-goals.

Simplified example GSN Framework for an AEB feature
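The simplified AEB argument can be encoded as a small tree, with an upward trace showing which claims are affected when a piece of evidence is invalidated. The node ids follow the example above; the `Node` class, the `Sn1` evidence label, and the `trace_up` helper are illustrative sketches, not a real GSN tool.

```python
# Sketch of the simplified AEB argument as a parent-linked tree.
class Node:
    def __init__(self, node_id, text, parent=None):
        self.node_id, self.text, self.parent = node_id, text, parent
        self.children = []
        if parent:
            parent.children.append(self)

g1 = Node("G1", "Braking is acceptably safe within the ODD")
Node("C1", "System functions", g1)   # context
Node("C2", "ODD of the system", g1)  # context
s1 = Node("S1", "Prove that the braking function is fail-operational", g1)
g11 = Node("G1.1", "A redundant braking channel is provided", s1)
g12 = Node("G1.2", "A secondary perception system backs up the camera", s1)
g13 = Node("G1.3", "ML detection performance is at an acceptable level", s1)
sn1 = Node("Sn1", "Redundancy test report (hypothetical evidence)", g11)

def trace_up(node):
    """Return the chain of ids from a node up to the top-level goal."""
    chain = []
    while node:
        chain.append(node.node_id)
        node = node.parent
    return chain

# If Sn1 is invalidated, the affected claims are found by tracing upward:
print(trace_up(sn1))  # ['Sn1', 'G1.1', 'S1', 'G1']
```

This is the traceability benefit described earlier in executable form: invalidated evidence points directly at the chain of claims it was supporting.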

A Layered Approach to Scalable Safety Arguments

Although GSN is meant to simplify the presentation of a safety case and improve traceability, it is easy to overcomplicate the GSN tree with numerous branches and claims supported by large volumes of evidence, which defeats the purpose of the framework. Hence, at SRES we propose a layered approach (Layers 0–5) to developing a GSN framework:

Layer 0 – Values: the top-level overall goal/claim (generally only one for an ADS safety case)

Layer 1 – Pillars of Focus: the main areas of focus for the organization, keeping it true to its values; the first layer of sub-goals/sub-claims

Layer 2 – Argument Assertions: formal statements defining the approach to satisfying each focus area; the second layer of sub-goals/sub-claims

Layer 3 – Completeness of Arguments: breaking the arguments down into comprehensive coverage areas; the third layer of sub-goals/sub-claims

Layer 4 – Lowest-Level Claims: the complete list of activities or properties required to satisfy the arguments; the fourth and lowest layer of sub-goals/sub-claims

Layer 5 – Evidence Layer: solutions or evidence satisfying the claims; concrete, verifiable, and auditable information that demonstrates the top-level goal has been met. One or more auditable artifacts (process and/or product artifacts) are needed to satisfy each item of evidence.

SRES GSN Framework
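One practical payoff of a fixed layering is that it can be checked mechanically: evidence should appear only at Layer 5, exactly five levels below the single Layer 0 goal. The sketch below assumes a simple `(kind, label, children)` tuple encoding of the argument tree and uses an entirely hypothetical example skeleton; it is not part of the SRES framework itself.

```python
# Illustrative check that an argument tree respects the Layer 0-5
# structure: every evidence leaf must sit exactly at depth 5.

def evidence_depths(node, depth=0):
    """Collect the layer depth of every evidence leaf in the tree."""
    kind, label, children = node
    if kind == "evidence":
        return [depth]
    depths = []
    for child in children:
        depths.extend(evidence_depths(child, depth + 1))
    return depths

def follows_layering(tree):
    """True if all evidence sits exactly at Layer 5."""
    return all(d == 5 for d in evidence_depths(tree))

# A minimal, fully hypothetical argument skeleton:
arg = ("goal", "ADS is acceptably safe (Values)", [
    ("goal", "Safe by design (Pillar of Focus)", [
        ("goal", "Hazards are mitigated (Argument Assertion)", [
            ("goal", "All ODD hazards covered (Completeness)", [
                ("goal", "HARA performed (Lowest-Level Claim)", [
                    ("evidence", "HARA report", []),
                ]),
            ]),
        ]),
    ]),
])

print(follows_layering(arg))  # True
```

A check like this keeps a growing GSN tree honest: branches that shortcut straight from a pillar to evidence, or that nest claims deeper than the agreed layers, are flagged immediately.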

Conclusion

As autonomous vehicles continue to grow in complexity, the ability to clearly argue and justify safety becomes just as important as generating evidence itself. Goal Structuring Notation provides a structured and transparent way to communicate safety reasoning, enabling developers, assessors, and regulators to understand not only what evidence exists, but why it is sufficient. By combining the rigor of established standards such as UL 4600, ISO 5083, and ISO 15026 with a layered GSN approach, safety cases can remain scalable, traceable, and maintainable throughout the system lifecycle. Ultimately, a well-structured safety case is not just a regulatory artifact, but a foundational tool for building confidence, supporting informed decision-making, and enabling the responsible deployment of autonomous vehicles on public roads.


Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our approach? Explore why teams choose SRES training and how we help automotive organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.


CES Wrap-Up 2026: The Humanoid Robot Safety Question

01/15/26

SRES SafeStack | November 2025

11/03/25


Insight Categories

  • Autonomous Systems (19)
  • Electric Mobility (3)
  • News (14)
  • Videos (11)
  • Functional Safety (31)
  • Responsible AI (21)
  • Cybersecurity (5)
Most Recent
  • SRES SafeStack | January 2026
    01/06/26
  • SRES SafeStack | December 2025
    12/01/25
  • SRES SafeStack | November 2025
    11/03/25
  • From Evidence to Argument: Using GSN to Structure AV Safety Cases
    01/16/26
  • CES Wrap-Up 2026: The Humanoid Robot Safety Question
    01/15/26
logotype
  • Company
  • Careers
  • Contact Us
  • info@sres.ai
  • 358 Blue River Pkwy Unit
    E-274 #2301 Silverthorne,
    CO 80498

Services

Automotive

Physical AI

Responsible AI

Training

Resources

Insights

Video

Legal

Privacy Policy
Cookie Policy
Terms & Conditions
Training Terms & Cancellation Policy
Accessibility
Consent Preferences

© Copyright 2025 SecuRESafe, LLC. All rights reserved.
