Abstract

Reinforcement learning (RL) has the potential to provide innovative solutions to existing challenges in estimating joint moments in motion analysis, such as kinematic or electromyography (EMG) noise and unknown model parameters. Here we explore the feasibility of using RL to assist joint moment estimation for biomechanical applications. Forearm and hand kinematics and forearm EMGs from four muscles during free finger and wrist movement were collected from six healthy subjects. Using the Proximal Policy Optimization approach, we trained and tested two types of RL agents that estimated joint moments from measured kinematics or measured EMGs, respectively. To quantify the performance of the RL agents, the estimated joint moments were used to drive a forward dynamic model that produced estimated kinematics, which were then compared with the measured kinematics. The results demonstrated that both RL agents can accurately reproduce wrist and metacarpophalangeal joint motion. The correlation coefficients between estimated and measured kinematics, derived from the kinematics-driven agent and the subject-specific EMG-driven agents, were 0.98±0.01 and 0.94±0.03 for the wrist, respectively, and 0.95±0.02 and 0.84±0.06 for the metacarpophalangeal joint, respectively. In addition, a biomechanically reasonable joint moment-angle-EMG relationship (i.e., the dependence of joint moment on joint angle and EMG) was predicted using only 15 seconds of collected data. In conclusion, this study serves as a proof of concept that an RL approach can assist biomechanical analysis and human-machine interface applications by deriving joint moments from kinematic or EMG data.
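The evaluation loop described above (estimated joint moments drive a forward dynamic model, and the resulting kinematics are scored against measurements by correlation coefficient) can be sketched for a single joint. The paper's actual musculoskeletal model and its parameters are not given in the abstract, so the second-order joint dynamics, inertia, damping, and stiffness values below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical single-joint dynamics: I*qdd = tau - b*qd - k*q.
# These parameter values are assumed for illustration, not taken from the study.
I, b, k = 0.01, 0.05, 0.5   # inertia, damping, stiffness (assumed units)
dt = 0.01                   # integration step, s

def forward_dynamics(tau, q0=0.0, qd0=0.0):
    """Integrate joint angle from a moment trajectory (semi-implicit Euler)."""
    q, qd = q0, qd0
    angles = []
    for t in tau:
        qdd = (t - b * qd - k * q) / I
        qd += qdd * dt
        q += qd * dt
        angles.append(q)
    return np.array(angles)

def pearson_r(x, y):
    """Correlation coefficient used to score estimated vs. measured kinematics."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Toy check: moments consistent with a sinusoidal measured angle should
# reproduce that angle closely when fed through the forward model.
t = np.arange(0.0, 15.0, dt)               # 15 s of data, as in the study
q_meas = 0.3 * np.sin(2 * np.pi * 1.0 * t) # "measured" joint angle
qd = np.gradient(q_meas, dt)
qdd = np.gradient(qd, dt)
tau = I * qdd + b * qd + k * q_meas        # inverse dynamics of the toy model
q_est = forward_dynamics(tau, q0=q_meas[0], qd0=qd[0])
r = pearson_r(q_est, q_meas)               # high r indicates good reproduction
```

In the study, the moment trajectory `tau` would come from the trained PPO agent (driven by kinematic or EMG observations) rather than from inverse dynamics, and the forward model would be the full wrist/hand model rather than this one-joint sketch.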
