CES Wrap-Up 2026: The Humanoid Robot Safety Question
01/15/26


This article was written by an SRES autonomous systems expert and reflects observations from CES 2026, where humanoid robots were showcased performing increasingly human-like tasks in close proximity to people. It examines what these demonstrations reveal about current humanoid safety strategies, where they fall short, and why existing industrial safety approaches are insufficient as robots move into shared human environments.

Looking to go deeper? SRES provides expert-led training in autonomous systems, SOTIF, and AI safety, along with hands-on Physical AI consulting, including robotics safety, to help organizations develop and deploy humanoid systems responsibly.


The 2026 Consumer Electronics Show (CES) was held in Las Vegas last week. This year, over 100,000 attendees packed the show for four days to see the latest in gadgets, robots, chips, and vehicle technology.

I was amazed at the prevalence of humanoid robots at CES. They were everywhere at the show: playing pianos, folding laundry, fetching products from shelves, and breakdancing, to name just a few activities.

From a safety perspective, some of it was alarming. How can we keep robots safe as they evolve into a major new product category? Watching carefully, I observed three main safety outcomes for humanoid robots (described as A, B, and C below). Two are valid safety strategies; the third is a safety gap that must be filled.

A. Some robots achieve safety using separation between bots and humans

If you look carefully at the use cases for large robots like Boston Dynamics' Atlas or Agility's Digit, you'll see a theme: the industrial setting for the robot is walled off from humans. That's a purposeful decision, made to keep humans out of harm's way in case the robot malfunctions, or simply doesn't know how to ensure safety around humans. Separation is an effective method for safety.

However, the separation strategy limits the robot's usefulness in many use cases. Sure, a robot can move industrial parts without interacting with human beings. But separation rules out cleaning a home or assisting patients in a hospital; it prevents humanoid robots from interacting with humans at all. Surely we can find more advanced approaches.

B. Many real-world robots that do interact with humans do so in limited, carefully controlled ways

TechForce's robotic bin carriers are a good example. Slowly wheeling bins among humans in facilities (think hotels, hospitals, and office corridors), the TechForce bots are not humanoid. They're low to the ground, making them nearly impossible to tip over. And if anything moves within the nearby space, the bot freezes in place to avoid a collision.

Again, it works for safety. But it also means robot functions are limited, and robot actions are slow. The very slow robot (or "Slowbot," as I call it) was on full display at the show: robot speeds always seemed to be turned down to a snail's pace. This "go slow" approach arises partly because dynamic balance is challenging to control. But it's also a way to keep surrounding people safe, since very slow motions are easier to switch off when problems arise. And just as with the separation strategy, the "go slow" approach limits the utility of robots in real environments. After all, if my home-helper robot takes two hours to load the dishwasher, I might as well do it myself.
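
The freeze-on-motion behavior described above can be sketched in a few lines. This is a hypothetical illustration only; the class name, thresholds, and logic are my own assumptions, not TechForce's actual control software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # range from robot to detected object
    speed_mps: float    # object's measured speed

def plan_speed(detections, cruise_mps=0.5,
               freeze_radius_m=1.5, motion_threshold_mps=0.05):
    """Return the commanded speed: freeze if anything is moving nearby.

    Hypothetical sketch of a 'freeze on nearby motion' rule; the radius
    and motion threshold are illustrative values.
    """
    for d in detections:
        if d.distance_m <= freeze_radius_m and d.speed_mps > motion_threshold_mps:
            return 0.0  # freeze in place until the space clears
    return cruise_mps   # otherwise creep along at "Slowbot" pace
```

Under this rule, a person walking one meter away halts the bot, while a static wall at the same distance does not. Note the design choice: the default response to uncertainty is zero motion, which is exactly why the behavior is safe but slow.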

C. For human-speed robots interacting directly with humans, major safety gaps exist and must be closed

Not every robot at the show was separated from humans or operated in "go slow" mode. But where those safety strategies weren't used, it was doubtful whether other safety strategies took their place. One video showed a humanoid robot throwing darts. It was a fine way to highlight the force and dynamic capability of the bot… but I shudder to think of those darts in a real-world setting. The same video showed the robot pouring scalding-hot tea… hopefully always into its intended teacup. The list of hazards could fill a book.

Other robots boxed each other in the ring and danced onstage. When I asked several people about the safety of such demonstrations, the answer was usually some kind of remote, human-operated kill switch. That might be fine for a CES show. But it simply doesn't work for a robot operating in your home. (That is, unless you want to let the robot do chores while you follow it around with a kill switch in hand, monitoring its every move.)

What CES Revealed About Humanoid Robot Safety

Separation and “go slow” are safety ideas that work, but limit utility. And when robots interact with people and move fast, there’s a major safety gap to be filled. But how to fill that gap?

I propose two directions, both relatively straightforward ideas. Implementing them is non-trivial; both will be a challenge. But let's first define the challenges, so the amazing engineers who build this incredible tech can get to work solving them.

Safety Proposal #1: Humanoid robots should incorporate the equivalent of "SOTIF" (Safety of the Intended Functionality) into their overall safety cases

The idea of SOTIF (described in depth in ISO 21448 for automotive applications) is to ensure the safety of the normal/nominal function of the autonomous system. This entails several safety activities, including the following:

  • Specifying the operational design domain (ODD), which defines the envelope in which the autonomous application is intended to operate.
    • The ODD must not only be defined but also enforced, to ensure the autonomy does not operate outside its specified range.
  • Identifying functional insufficiencies (often failures of perception or planning under unique scenarios) that can expose unsafe behaviors, and developing software to eliminate those insufficiencies.
  • Intensive integrated testing across multiple scenarios, including:
    • Verification activities to demonstrate safe behavior under known scenarios
    • Validation activities to demonstrate safe behavior in unknown or uncontrolled environments within the ODD
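
To make the ODD-enforcement point concrete, here is a minimal sketch of a runtime ODD monitor. Everything in it is assumed for illustration: the envelope parameters for a hypothetical home-helper robot are mine, not drawn from ISO 21448 or any real product.

```python
# Hypothetical ODD envelope for an imagined home-helper humanoid.
ODD = {
    "max_payload_kg": 5.0,
    "max_speed_mps": 1.0,
    "surfaces": {"tile", "hardwood", "carpet"},
    "min_lux": 50,  # assumed minimum lighting for reliable perception
}

def within_odd(state):
    """Check the live operating state against the specified envelope.

    Returns (ok, violations); a violation should trigger a
    minimal-risk response such as stopping and requesting help.
    """
    violations = []
    if state["payload_kg"] > ODD["max_payload_kg"]:
        violations.append("payload")
    if state["speed_mps"] > ODD["max_speed_mps"]:
        violations.append("speed")
    if state["surface"] not in ODD["surfaces"]:
        violations.append("surface")
    if state["lux"] < ODD["min_lux"]:
        violations.append("lighting")
    return (not violations, violations)

# Gravel is outside this envelope, so the monitor flags it even though
# payload, speed, and lighting are all nominal.
ok, why = within_odd(
    {"payload_kg": 2.0, "speed_mps": 0.6, "surface": "gravel", "lux": 200}
)
```

The point of the sketch is the enforcement half of the bullet above: defining the ODD on paper is not enough; something in the running system has to continuously check the envelope and force a safe response when the robot drifts outside it.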

Safety Proposal #2: Humanoid Robots should incorporate limited safe-shutdown or fail-operational frameworks

These frameworks ensure the robot can respond safely even in the case of a major fault. Boston Dynamics' Atlas apparently already implements a kind of "squat in place" upon failure, which seems a strong example of the safe-shutdown idea. Others need to follow suit. Fortunately, ISO 25785-1 is now in development and is likely to reference these frameworks.
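
A fault-reaction framework of this kind is often structured as a small state machine. The sketch below is illustrative only: the mode names, fault categories, and transitions are assumptions in the spirit of the "squat in place" behavior described above, not content from ISO 25785-1 (which is still in development).

```python
from enum import Enum, auto

class Mode(Enum):
    OPERATIONAL = auto()    # nominal operation
    DEGRADED = auto()       # fail-operational: reduced speed, core tasks only
    SAFE_SHUTDOWN = auto()  # controlled squat to a stable, low-energy pose

def react_to_fault(mode, fault):
    """Map a detected fault severity to the next operating mode.

    Hypothetical policy: minor faults (e.g. loss of one redundant
    sensor) degrade the mission; anything worse forces safe shutdown.
    """
    if fault == "none":
        return mode
    if fault == "minor":
        return Mode.DEGRADED
    return Mode.SAFE_SHUTDOWN  # major fault: squat and de-energize
```

The key property is that every fault maps to a defined safe response without a human in the loop, which is what distinguishes this from the kill-switch approach seen on the show floor.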

Closing the Humanoid Safety Gap

With safe-shutdown (proposal #2), robots will be safe when they incur a fault or failure. With SOTIF (proposal #1), they'll be safe the rest of the time… including when interacting with humans. I look forward to many more robot-intensive CES shows in future years, with robots that are amazing, interactive, AND safe. Let's get to work.


Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Looking for support? These challenges sit at the core of our Physical AI consulting, including robotics safety in shared human environments.


© Copyright 2025 SecuRESafe, LLC. All rights reserved.
