As devices become more and more integrated into our world, how do we stay safe? How do we protect others? How do we balance the risks and benefits for ourselves and for our society?
My starting point for exploring these questions is a recent DARPA competition. It got me thinking about robots operating side-by-side or even in tandem with humans. (See Baxter for a step in this direction.) How safe will this be?
Many of the robots in operation today are powerful and fast-moving. Humans need to stay out of their way or risk life and limb. In fact, many operate in cages or come to a stop if humans step into their space. It sounds as though we're working with wild animals, and we're a long way from the science fiction robots that have enough sense to allow for the unpredictable actions of humans in their midst.
As it turns out, work is being done to bring the capabilities of robots into human space. Some people have even formulated design principles for the safety of human-robot interactions. Sensors might be added to current robots or designed into new ones so the machines can determine the positions of humans, their proximity to moving parts, and their current or predicted states. In addition to visual detection and mapping (seeing) with processing to interpret position, motion, gestures, and facial expressions, this might include sound detectors with processing that would recognize speech and utterances (including screams).
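To make the proximity idea concrete, here is a minimal sketch of the kind of check such a robot might run. Everything here is hypothetical: the radii, the function names, and the assumption that a vision system can hand us human positions as simple 2D coordinates.

```python
# A toy proximity check: map the nearest detected human to a safety
# action. The thresholds and the (x, y) position format are illustrative
# assumptions, not any real robot's API.
import math

SLOW_RADIUS = 2.0   # meters: slow down when a human is this close
STOP_RADIUS = 0.5   # meters: stop entirely inside this radius

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def safety_action(robot_pos, human_positions):
    """Return 'stop', 'slow', or 'normal' based on the nearest human."""
    if not human_positions:
        return "normal"
    nearest = min(distance(robot_pos, h) for h in human_positions)
    if nearest <= STOP_RADIUS:
        return "stop"
    if nearest <= SLOW_RADIUS:
        return "slow"
    return "normal"
```

A real system would of course work with noisy 3D sensor data and predicted trajectories rather than clean points, but the shape of the decision is the same.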
More broadly, there would be benefits to building in context awareness. Dangers can vary with the time of day, the equipment available to humans, the toxicity of materials being processed, and many other factors. And these might also dictate different responses. Dynamic contextual processing, allowing quick changes in the available options, would need to be included.
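Dynamic contextual processing could be sketched as a rule lookup: the same proximity event triggers different responses depending on the current context. The contexts and responses below are invented for illustration.

```python
# A hypothetical context-aware responder: the response to a human
# getting close depends on factors like material toxicity or shift
# time. All rules here are illustrative assumptions.
def choose_response(event, context):
    """Pick a response to an event given the current context dict."""
    if context.get("toxic_material"):
        return "seal_and_stop"      # hazardous process: halt and contain
    if context.get("night_shift"):
        return "stop_and_alert"     # fewer people around: assume surprise
    if event == "human_close":
        return "slow_down"          # default daytime behavior
    return "continue"
```

The point of structuring it this way is that the rule set can be swapped at runtime as the context changes, which is exactly the "quick changes in the available options" the principle calls for.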
Fast reaction time would be another important feature, so the robot could stop, get out of the way, or even rescue a human in a timely manner.
I ran across two factors that surprised me. The first was robustness. The idea here is that in the real world we inhabit, as opposed to the lab, there is a need to compensate for errors and missing data – or even damage to the system. The second was the use of compliant motors. While most motors we're familiar with keep going when they meet an obstacle, say a fellow worker's head, these motors have clutches or reaction-force sensors that stop the motion when they hit something unexpected.
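The compliant-motor behavior can be captured in a toy control loop: keep moving while the measured reaction force is small, and yield the moment it spikes. The force limit and the idea of feeding in a list of readings are stand-ins for a real-time force sensor, purely for illustration.

```python
# A toy sketch of compliant motion: stop at the first reaction-force
# reading above a threshold instead of pushing through the obstacle.
# The threshold value is an illustrative assumption.
FORCE_LIMIT = 10.0  # newtons: above this, assume we hit something

def run_until_obstacle(force_readings):
    """Advance one step per reading; stop at the first excessive force.

    Returns the number of steps completed before stopping."""
    steps = 0
    for force in force_readings:
        if force > FORCE_LIMIT:
            break  # compliant behavior: yield rather than push through
        steps += 1
    return steps
```

Contrast this with a conventional motor, which would simply keep applying torque through every reading, obstacle or not.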
These principles have been shaped around the need for physical safety around big robots, but they provide an interesting starting point for exploring other smart "things" in our environment that might represent danger. More on this topic in the next post.