
SRES SafeStack | December 2025
SRES SafeStack is a monthly newsletter from SecuRESafe (SRES) featuring insights on functional safety, cybersecurity, autonomy, and AI across mobility and robotics—plus technical blogs, training opportunities, and SRES news.
Interested in applying these insights to your own work? Learn more about our consulting offerings here or view upcoming public trainings.
Following our December 4th Fireside Chat, here are three themes we dug into: patterns we’re seeing across teams adopting AI in safety-critical engineering.
Note: Our Fireside Chat on AI Tools is now available to watch anytime. Watch the entire discussion here.
1. AI is already reshaping safety-critical products and the teams who build them.
Across the industry, teams aren’t just using an “AI that writes code.” They’re applying AI tools to the core systems engineering activities: requirements generation and traceability, design document generation, automated safety analyses, and test case generation. The gains come from taking an AI-first approach to every engineering task and deeply understanding where each tool is strong and where it falls short.
2. Expert-in-the-loop isn’t optional — it’s the architecture.
A recurring theme among these engineering leaders: AI tools only scale when humans remain responsible for interpretation, correction, and justification, not as a final “approval step” but as an integral part of the engineering workflow. Teams that treat an AI tool as a co-pilot outperform teams that treat it as the expert. Expert oversight, review loops, and explainability are becoming the backbone of every serious deployment in safety-critical contexts. The demand for domain experts will go up, not down.
3. The safety case becomes the centerpiece in an AI-assisted world.
AI doesn’t reduce the burden of demonstrating safety and cybersecurity — it increases it substantially. When generative AI tools touch product requirements, perform analyses, or automatically test code, the safety case must explain how correctness and engineering intent are preserved. That means new evidence patterns, transparency around tool use, and workflows designed to demonstrate responsibility, not just velocity. AI raises the ceiling on engineering capability, but only if teams can prove they haven’t lowered the floor on risk assurance.
🚘 Join Us For Our Final Training of the Year
ISO 26262 Functional Safety Training 📍December 8–11 – Gain a complete understanding of the ISO 26262:2018 standard and its practical application across the full safety lifecycle. This four-day live course, led by SRES automotive safety experts, combines real-world examples, exercises, and discussion to help teams build confidence in developing and assessing safety-critical systems.
An optional Automotive Functional Safety Professional (AFSP) certificate exam, accredited by SGS-TÜV Saar, is available following the course.
📘 Looking for something else?
SRES also offers private, customized team training on request. Email us at info@sres.ai to discuss how we can support your team.
👉 [View All Training Options] 👉 [Why Teams Choose SRES Training]
🧠 New Technical Blog: Humanoid Robot Safety Comes Into Focus
Humanoid robots are starting to move from research labs into real homes and workplaces — and the safety questions follow right behind them. These systems won’t be kept behind cages or light curtains. They’re meant to operate shoulder-to-shoulder with people, performing tasks that involve movement, balance, and physical interaction.
Our latest blog breaks down some of the most important safety ideas taking shape right now: controlled shutdown, large-scale scenario-based V&V, and the emerging international standards that will guide safe human-robot coexistence.
Thank you for reading this month’s edition of SafeStack. As we close out the year, we’re grateful to everyone who has been part of this growing community since SafeStack launched in July — now more than 1,100 subscribers strong! We look forward to sharing more insights and staying connected in the new year.



