SRES SafeStack | December 2025
12/01/25
SRES SafeStack is a monthly newsletter from SecuRESafe (SRES) featuring insights on functional safety, cybersecurity, autonomy, and AI across mobility and robotics—plus technical blogs, training opportunities, and SRES news.

Interested in applying these insights to your own work? Learn more about our consulting offerings here or view upcoming public trainings.


Ahead of our December 4th Fireside Chat, here are three themes we'll be diving into: patterns we're seeing across teams adopting AI in safety-critical engineering.

Note: Our Fireside Chat on AI Tools is now available to watch anytime. Watch the full discussion here.

1. AI is already reshaping safety-critical products and the teams who build them

Across the industry, teams are not just using an "AI that writes code." They're leveraging AI tools for core systems engineering activities: requirements generation and traceability, design document generation, automated safety analyses, and test-case generation. The gains come from taking an AI-first approach to every engineering task and deeply understanding each tool's strengths and weaknesses.
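To make one of these activities concrete, here is a minimal sketch of the kind of requirements-to-test traceability check that AI-assisted tooling automates. All names and data structures here are illustrative, not taken from any specific tool.

```python
# Hypothetical sketch: find requirements that no test case traces to.
# This is the kind of gap analysis AI-assisted tooling can run continuously.

def find_untraced(requirements, test_cases):
    """Return IDs of requirements not covered by any test case's trace links."""
    covered = {req_id for tc in test_cases for req_id in tc["traces_to"]}
    return [r["id"] for r in requirements if r["id"] not in covered]

requirements = [
    {"id": "SYS-001", "text": "The system shall enter a safe state within 200 ms of a detected fault."},
    {"id": "SYS-002", "text": "The system shall log all fault events."},
]
test_cases = [
    {"id": "TC-01", "traces_to": ["SYS-001"]},
]

print(find_untraced(requirements, test_cases))  # ['SYS-002']
```

An AI tool might draft the missing test case for SYS-002, but as the next theme argues, an engineer still has to judge whether that draft actually verifies the requirement.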

2. Expert-in-the-loop isn’t optional — it’s the architecture.

A recurring theme across these engineering leaders: AI tools only scale when humans remain responsible for interpretation, correction, and justification, not as a final "approval step" but as an integral part of the engineering workflow. Teams that treat AI tools as a co-pilot outperform teams that treat them as the expert. Expert oversight, review loops, and explainability are becoming the backbone of every serious deployment in safety-critical contexts. The demand for domain experts will be higher, not lower.
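One minimal way to picture this architecture, purely as a sketch of our own (the class and field names are hypothetical): AI-generated artifacts start as drafts, and only a recorded human decision, with its rationale, can promote them into the safety baseline.

```python
# Illustrative expert-in-the-loop gate: nothing AI-generated reaches the
# baseline without a named reviewer and a recorded justification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AiArtifact:
    content: str
    status: str = "draft"              # draft -> approved | rejected
    reviewer: Optional[str] = None
    rationale: Optional[str] = None    # justification recorded with the decision

    def review(self, reviewer: str, approve: bool, rationale: str):
        self.status = "approved" if approve else "rejected"
        self.reviewer = reviewer
        self.rationale = rationale

def baseline(artifacts):
    """Only human-approved artifacts enter the safety baseline."""
    return [a for a in artifacts if a.status == "approved"]

draft = AiArtifact(content="Generated FMEA row for sensor dropout")
assert baseline([draft]) == []         # unreviewed work never ships
draft.review("j.doe", approve=True, rationale="Failure mode confirmed against HW spec")
assert baseline([draft]) == [draft]
```

The design choice worth noting is that the rationale travels with the artifact: the review loop produces evidence, not just a yes/no gate.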

3. The safety case becomes the centerpiece in an AI-assisted world.

AI doesn’t reduce the burden of demonstrating safety and cybersecurity — it increases it substantially. When generative AI tools touch product requirements, perform analyses, or automatically test code, the safety case must explain how correctness and engineering intent are preserved. That means new evidence patterns, transparency around tool use, and workflows designed to demonstrate responsibility, not just velocity. AI raises the ceiling on engineering capability, but only if teams can prove they haven’t lowered the floor on risk assurance.
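One of those new evidence patterns can be sketched as a tool-use provenance record: for each AI-touched artifact, capture which tool ran, a digest of the exact input for reproducibility, and who verified the output. This is our own illustrative shape, not a prescribed format from any standard.

```python
# Hypothetical evidence record making AI tool use transparent in a safety case.
# Field names are illustrative only.

import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolEvidence:
    artifact_id: str
    tool: str            # e.g. "example-llm v1.0" (hypothetical tool name)
    input_digest: str    # hash of the exact input, so the run is reproducible
    human_reviewer: str  # the expert who verified the output

def record_evidence(artifact_id: str, tool: str, input_text: str, reviewer: str) -> ToolEvidence:
    digest = hashlib.sha256(input_text.encode()).hexdigest()[:12]
    return ToolEvidence(artifact_id, tool, digest, reviewer)

ev = record_evidence(
    "REQ-042",
    "example-llm v1.0",
    "Brake request latency shall not exceed 50 ms.",
    "a.lee",
)
print(ev)
```

Records like this are what let a safety case argue transparency about tool use rather than merely assert it.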

🚘 Join Us For Our Final Training of the Year

ISO 26262 Functional Safety Training 📍December 8-11 – Gain a complete understanding of the ISO 26262:2018 standard and its practical application across the full safety lifecycle. This four-day live course, led by SRES automotive safety experts, combines real-world examples, exercises, and discussion to help teams build confidence in developing and assessing safety-critical systems.

An optional Automotive Functional Safety Professional (AFSP) certificate exam, accredited by SGS-TÜV Saar, is available following the course.

[Register Now]

📘 Looking for something else?

SRES also offers private and customized team training by request. Email us at info@sres.ai to discuss how we can support your team.

👉 [View All Training Options] 👉 [Why Teams Choose SRES Training]

🧠 New Technical Blog: Humanoid Robot Safety Comes Into Focus

Humanoid robots are starting to move from research labs into real homes and workplaces — and the safety questions follow right behind them. These systems won’t be kept behind cages or light curtains. They’re meant to operate shoulder-to-shoulder with people, performing tasks that involve movement, balance, and physical interaction.

Our latest blog breaks down some of the most important safety ideas taking shape right now: controlled shutdown, large-scale scenario-based V&V, and the emerging international standards that will guide safe human-robot coexistence.

[Read the full blog]


Thank you for reading this month’s edition of SafeStack. As we close out the year, we’re grateful that you’ve been part of this growing community since SafeStack launched in July, now more than 1,100 subscribers strong! We look forward to providing more insights and staying connected in the new year.


© Copyright 2025 SecuRESafe, LLC. All rights reserved.