Abstract

The remanufacturing workforce can benefit from the capabilities of robotic technology, where robots can alleviate the labor-intensive nature of disassembly operations and help with handling toxic and hazardous materials. However, operators' safety is a critical aspect of human-robot collaboration in disassembly operations. This study focuses on predicting human hand motion to provide advance information to disassembly robots when collaborating with humans. A prediction framework is proposed that consists of two deep learning models: a convolutional long short-term memory (ConvLSTM) network and You Only Look Once (YOLO). The ConvLSTM forecasts the next frame from a sequence of disassembly images, and the YOLO model then locates the human hand in the frame predicted by the ConvLSTM. Disassembly images collected from four desktop computers are used to train the ConvLSTM and YOLO models. The results reveal that the combined ConvLSTM-YOLO framework performs well in predicting human hand motion and locating the hand object. The outcomes highlight the need for deep learning models capable of recognizing human motion across different product designs, as the remanufacturing workforce often has to deal with a wide range of products of different brands, models, and conditions.
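
The two-stage pipeline described above can be illustrated with a minimal Python sketch, assuming a Keras ConvLSTM next-frame predictor chained into an Ultralytics YOLO detector; the layer sizes, frame resolution, and weight file ("hand_detector.pt") are illustrative assumptions, not the authors' actual configuration.

    # Sketch of the ConvLSTM -> YOLO hand-prediction pipeline (assumed setup).
    import numpy as np
    from tensorflow.keras import layers, models
    from ultralytics import YOLO

    FRAME_H, FRAME_W, SEQ_LEN = 64, 64, 10  # assumed resolution / history length

    def build_convlstm_predictor():
        """Map a short sequence of frames to the predicted next frame."""
        model = models.Sequential([
            layers.Input(shape=(SEQ_LEN, FRAME_H, FRAME_W, 1)),
            layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                              return_sequences=True),
            layers.BatchNormalization(),
            layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                              return_sequences=False),
            layers.Conv2D(1, kernel_size=3, padding="same",
                          activation="sigmoid"),  # next frame in [0, 1]
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        return model

    predictor = build_convlstm_predictor()
    detector = YOLO("hand_detector.pt")  # hypothetical hand-trained weights

    # frames: the most recent SEQ_LEN grayscale disassembly images.
    frames = np.random.rand(1, SEQ_LEN, FRAME_H, FRAME_W, 1).astype("float32")
    next_frame = predictor.predict(frames)[0]        # forecast the next frame
    rgb = np.repeat((next_frame * 255).astype("uint8"), 3, axis=-1)
    result = detector(rgb)[0]                        # detect the hand on it
    print(result.boxes.xyxy)                         # predicted hand box(es)

In practice, the predictor would be trained on consecutive frame pairs from the disassembly recordings and the detector fine-tuned on hand annotations, so the detector receives forecast frames whose statistics match its training data.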
