Personalized and timely feedback has the potential to improve an individual’s performance on a wide variety of engineering tasks. The ability to capture an individual’s affective state(s) and task performance is a key component needed to advance the personalization of feedback. While automated methods exist for quantifying task performance, quantifying an individual’s affective state(s) remains an open research area. Existing methods for quantifying affective state(s) are challenging to implement where real-time assessment is needed (e.g., engineering workshop environments). This has sparked growing interest in automated systems capable of inferring individuals’ affective state(s) from their facial or body cues. However, existing methods typically employ a general model to label an individual’s affective state(s) into discrete categories, such as fear, joy, or surprise. Emotional expressions are far more complex than such categories suggest, and individual differences in facial expressions can degrade the performance of these systems in providing personalized feedback. To overcome these limitations, this work proposes a machine learning method for predicting an individual’s performance on a task from their unique facial keypoint data, thereby bypassing the need to infer discrete affective states. A case study involving 31 participants is presented. The support vector machine model employed to predict an individual’s performance yielded an accuracy of 77.15% for an individual-task specific model. In contrast, a general model yielded an accuracy of only 52.69%, supporting the authors’ argument that individual-task specific models are more suitable for advancing personalized feedback.
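
To illustrate the modeling approach at a code level, the sketch below trains and cross-validates a support vector machine on one participant's facial keypoint features. It is a minimal sketch only: the feature matrix, labels, and SVM hyperparameters are hypothetical stand-ins, not the authors' actual pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data for a single participant: one row per observation,
# columns are flattened (x, y) facial keypoint coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 136))       # e.g., 68 keypoints x 2 coordinates
y = rng.integers(0, 2, size=500)      # binary task-performance label

# Individual-task specific model: fit and cross-validate on this
# participant's data only.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2%}")

Under this framing, the general model contrasted in the reported results would instead pool observations from all 31 participants into a single training set before fitting, whereas the individual-task specific variant refits the pipeline per participant.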
