Overview
This two-day course is for professionals and decision-makers who share SecuRESafe's passion and purpose: ensuring AI systems are responsibly designed and used. We introduce Responsible Artificial Intelligence (RAI), an approach to designing, developing, and using AI that integrates ethical considerations into every step of development.
As AI incidents become increasingly prevalent, the consequences of missing risk management and AI principles become clear. This course equips AI professionals and decision-makers with the skills to manage AI risks and create organizational AI policies, aligned with ISO/IEC 42001:2023 and the EU AI Act's requirements for high-risk AI systems.
The course includes automotive-specific examples and case studies, showing how Responsible AI principles and regulatory frameworks apply to intelligent vehicle systems.
Details
This course introduces Artificial Intelligence (AI) as deployed in real-world systems. These systems rely heavily on machine learning (ML) models and algorithms to make critical decisions that can expose the general public to numerous unknown risks. What harm can a biased pedestrian detector cause? What causes the bias? What can we do to mitigate it? The course covers such questions in depth, with concrete exercises.
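To make the bias question concrete, one common first step is to measure a detector's recall separately per subgroup of ground-truth pedestrians and compare the results. The sketch below is illustrative only: the grouping (lighting conditions) and the toy data are hypothetical examples, not course material.

```python
# Hypothetical sketch: quantifying bias in a pedestrian detector by
# comparing per-group recall. Groups and outcomes are toy data, not
# real evaluation results.
from collections import defaultdict

def per_group_recall(samples):
    """samples: list of (group, detected) pairs, one per ground-truth pedestrian."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, detected in samples:
        totals[group] += 1
        if detected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy data: the detector misses pedestrians in low-light scenes more often.
samples = [
    ("daylight", True), ("daylight", True), ("daylight", True), ("daylight", False),
    ("low_light", True), ("low_light", False), ("low_light", False), ("low_light", False),
]
recalls = per_group_recall(samples)
gap = max(recalls.values()) - min(recalls.values())
print(recalls)                    # {'daylight': 0.75, 'low_light': 0.25}
print(f"recall gap: {gap:.2f}")   # a large gap flags a fairness and safety risk
```

A large recall gap between groups is exactly the kind of measurable signal that feeds the risk identification and mitigation exercises later in the course.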
Throughout the training, participants explore a real-world automotive AI scenario—pedestrian detection in Automated Driving Systems (ADS)—to understand how Responsible AI can be implemented across safety-critical automotive applications.
Most importantly, we give insight into what being "responsible" means. The training enables organizations to establish AI principles and robust processes in any industry, even where industry-specific standards or guidance do not yet exist.
Objectives
While it is important to understand and comply with emerging and updated standards and regulations for AI development, they do not provide a holistic view of responsible AI beyond safety and security. Each organization needs a foundational discussion of what it means to develop and deploy AI responsibly within its own context. This course presents a risk-based approach, aligned with ISO/IEC 42001:2023 and the EU AI Act's requirements for high-risk AI systems, to facilitate that discussion at every phase of the AI system life cycle.
By the end of this course, participants will be able to:
- Enhance organizational practices: strengthen your ability to lead AI discussions and develop dynamic processes to implement RAI, including establishing AI objectives and AI policies
- Apply real-world solutions: work hands-on with other trainees and our AI experts to tackle AI issues and develop an AI management system
- Implement risk management aligned with ISO/IEC 42001:2023: learn to identify and treat AI risks with guidance approved by an international committee
- Align with regulations: understand the critical aspects required for compliance with the EU AI Act for high-risk AI systems
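The risk identification and treatment objective can be pictured with a minimal risk register. The sketch below uses a qualitative likelihood-times-impact score; the scoring scale, thresholds, and example risks are hypothetical illustrations, not prescribed by ISO/IEC 42001:2023.

```python
# Illustrative sketch of a simple AI risk register with qualitative
# likelihood x impact scoring. Scales, thresholds, and entries are
# hypothetical, not taken from ISO/IEC 42001:2023.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self) -> str:
        # Hypothetical thresholds for choosing a treatment option.
        if self.score >= 15:
            return "mitigate or avoid"
        if self.score >= 8:
            return "mitigate"
        return "accept and monitor"

register = [
    AIRisk("Pedestrian detector underperforms in low light", 4, 5),
    AIRisk("Training data contains labeling errors", 3, 3),
    AIRisk("Model documentation incomplete", 2, 2),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description}: {risk.treatment()}")
```

In practice, a risk register also records owners, treatment deadlines, and residual risk after controls are applied; the course exercises build this out within an AI management system.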
Agenda
Below you will find a tentative schedule for this training course.
DAY 1
- Introduction to AI
  - Understanding AI and current advancements
  - AI development example
- Introduction to Responsible AI
  - AI incident case study based on automotive applications
- EU AI Act
  - Introduction to the EU AI Act
  - Requirements for high-risk AI systems
- AI Development Standards
  - ISO/IEC 42001:2023
  - ISO/IEC 22989:2022
  - ISO/IEC 5338:2023
  - AI system lifecycle
- AI Management System Organization
  - Context of the organization
  - Leadership, roles and responsibilities
DAY 2
- AI Management System Inception
  - AI policies
  - AI objectives
- AI Management System Risk Management
  - Performing risk management
  - Applying risk treatment and AI controls
  - Conducting an AI system impact assessment
- AI Management System Verification and Validation
  - Performance evaluation
  - Verification and validation
  - Deployment
- AI Management System Improvement
  - Nonconformity assessment
  - Corrective action
  - Internal and external audits

