Perspective on ergonomic-related applications of video motion capture technology





The recent craze in ergonomics seems to be video motion capture technology, used to conduct what some term “dynamic biomechanics modeling”. Of course, we have to sprinkle in buzzwords such as AI (Artificial Intelligence), ML (Machine Learning), and predictive analytics to get the full effect. Attend any ergonomics conference and you can’t miss it. Markerless 2D (even 3D) video motion capture technology tracks the position and angle of body segments and joints at a rate of 1,000 data points per second. Video a job using your smartphone’s built-in camera, and immediately get an analysis result. On the surface this appears very high tech. But is it? And in particular, how advanced is the ergonomic-related part of this technology? In the spirit of the classic Wendy’s commercial (“Where’s the beef?”, https://youtu.be/Ug75diEyiA0 ), a question comes to mind …


Where’s the ergo model?


With respect to ergonomic-related applications of video motion capture technology, I’ve concluded there’s too much focus on appearance (futuristic stick figures imposed on videos, overuse of buzzwords in marketing materials, etc. … the bun) and insufficient advancement in what truly matters (how we model and make sense of the data … the beef). Don’t get me wrong, I’m very impressed by the video motion capture technology and the multiple independent entrepreneurs I’ve met who are advancing it. But I have to admit I’m a bit underwhelmed by the ergonomic-related modeling in this space to date. I don’t think the ergonomics community is leveraging video motion capture technology to its full potential.

A few observations …


Use of Single-task Ergonomic Risk Assessment Models

Video motion capture technology is being used to provide model inputs for 15- to 20-year-old single-task RA (Risk Assessment) models (e.g., RULA, the NIOSH Lifting Equation). And some applications (I’m told) require the user to manually enter the same values into an app that one had to write down when using the pen-and-paper version of these same models (That’s high tech?). This seems comparable to installing the latest lidar vehicle guidance system onto a Model T Ford. This amazing video motion capture technology (1,000 data points per second) plugged into a digitized, single-task observational model? This makes little sense, particularly when ergonomics evaluation is rapidly turning the corner toward modern multi-task evaluation models.

Multi-task models such as RCRA (Recommended Cumulative Recovery Allowance) and FFT (Fatigue Failure Theory) enable us to accurately evaluate the physical exposures of multiple different tasks in a single analysis, whereas single-task models inherently require using an average or worst-case exposure level (unless you’re evaluating the rare mono-task job). Video motion capture technology, with its 1,000 data points per second, isn’t being utilized anywhere near its potential if the inputs are fed into a digitized version of an old single-task evaluation model, which is incapable of leveraging this rich source of input data. Single-task evaluation models were designed for single input levels for force, posture, and repetition (i.e., the worker does the same thing every time). What is special about video motion capture is that we have a continuous stream of data representing the continually changing physical demands of real work (e.g., changing postures, changing force or moment levels, varying work pace, etc.). There is no way a single-task model can leverage this rich source of input data to the fullest.
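To make the multi-task idea concrete, here is a minimal sketch of how a fatigue-failure-style evaluation can accumulate exposure across several different tasks in one analysis, using a Palmgren-Miner style damage sum. The power-law relationship, its exponent, and the task data are illustrative assumptions for this sketch, not published parameters of RCRA or FFT.

```python
# Illustrative sketch only: a Miner's-rule style multi-task accumulation.
# The S-N parameters (exponent, scale, strength) are placeholder values.

def cycles_to_failure(load, ultimate_strength=100.0, exponent=3.0, scale=1e6):
    """Assumed power-law relationship: higher loads allow far fewer
    tolerable cycles before failure. Parameters are hypothetical."""
    return scale * (ultimate_strength / load) ** exponent

def cumulative_damage(tasks):
    """Miner's rule: D = sum(n_i / N_i) over all tasks.
    Each task contributes its fraction of tolerable cycles;
    D approaching 1 suggests the exposure limit is being reached."""
    return sum(n / cycles_to_failure(load) for load, n in tasks)

# (load level, cycles per shift) for three different tasks in one job --
# something a single-task model with one averaged input cannot represent
tasks = [(40.0, 1200), (25.0, 3000), (60.0, 300)]
D = cumulative_damage(tasks)

# per-task contributions reveal which task drives the cumulative exposure
contributions = {load: n / cycles_to_failure(load) for load, n in tasks}
```

The per-task breakdown is the point: a multi-task model can attribute the cumulative exposure to each task rather than collapsing the job into one average or worst-case number.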


Use of Descriptive Statistics

Another approach for generating ergo-related metrics from video motion capture technology is using simple descriptive statistics, particularly for posture assessment. For example, an output metric such as “right shoulder > 45 degrees abduction, 57.2% of the time”. Ok … that’s pretty much what we could have determined from a detailed, manual video analysis (although it would have taken longer). Then there is the obligatory Red-Yellow-Green (high, moderate, and low risk) legend to categorize the somewhat arbitrary risk levels. The use of descriptive statistics does not equate to an “evaluation model”; it is simply a way to summarize the data. Again, ergonomists are not utilizing video motion capture technology to its full potential. We need a means to “model” the data, not just summarize it.
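The descriptive-statistics approach described above amounts to little more than a threshold count over the frame stream. A minimal sketch (the angle stream, frame rate, and 45-degree cutoff are illustrative assumptions):

```python
# Sketch of the descriptive-statistics approach: percent of frames with
# shoulder abduction above a threshold. Toy data, not real capture output.

def pct_time_above(angles, threshold_deg=45.0):
    """Percent of sampled frames exceeding the posture threshold."""
    over = sum(1 for a in angles if a > threshold_deg)
    return 100.0 * over / len(angles)

# toy stream of right-shoulder abduction angles (degrees), one per frame
angles = [30, 50, 60, 20, 47, 10, 55, 70, 40, 52]

result = pct_time_above(angles)  # -> 60.0 (% of frames above 45 degrees)
```

A few lines of counting: this summarizes the exposure but says nothing about its physiological consequence, which is exactly the limitation argued above.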

Calculate Cumulative Loading Over Time

Yet another technique is to plot cumulative load over time, for measures such as joint angle, joint moment, or calculated low back disc compression. The shortcoming here is the assumed linear relationship between load level and the impact on the worker’s body. That relationship is not linear but exponential in nature: higher load levels cause disproportionately more tissue damage and more localized muscle fatigue than lower load levels. On a 0-10 force scale, a force level of 8 does not cause twice the damage of a force level of 4; because of the exponential cause-effect relationship between force and tissue damage, it causes much more than 2X the damage. Simply calculating a cumulative level over time (the “area under the curve”) ignores this vital exponential relationship. The cumulative loading approach fails to leverage what science tells us about human tissue damage mechanisms and about the causation of localized muscle fatigue, and the resulting metric fails to leverage the full potential of video motion capture technology.

In summary, we need to rethink how ergonomists apply video motion capture technology. Fitting this exciting new technology to tired approaches of the past almost feels like taking a step backward. With a rich input source of 1,000 data points per second, streaming continuously, a different approach is needed to fully benefit from this technology. We should use the video motion capture inputs to generate continuous, multi-task evaluation model metrics. And most importantly, we should follow what science tells us about the causation of human tissue damage and localized muscle fatigue.


Saturn Ergonomics, with assistance from ergo professional and student collaborators, is working to do something special in the video motion capture technology space. We are going to incorporate the rich source of data provided by video motion capture technology as source inputs to modern multi-task ergonomics evaluation models. The objective is to create a true multi-task dynamic biomechanics model. Such a model will provide tremendous analysis resolution, accurately identifying the contribution of each task (or subtask) to a cumulative evidence-based exposure limit.


Stay tuned! Exciting innovation is on the horizon!


Murray Gibson

murray@saturnergonomics.com


copyright 2020, Saturn Ergonomics Consulting

CONTACT:

+1 (334) 502-3562
murray@saturnergonomics.com
311 N College Street, Auburn, AL, USA