Abstract

The manufacturing industry is currently facing an increasing demand for customized products, leading to a shift from mass production to mass customization. As a result, operators are required to produce multiple product variants with varying complexity levels while maintaining high-quality standards. Furthermore, in line with the human-centered paradigm of Industry 5.0, ensuring the well-being of workers is just as important as production quality. This paper proposes a novel tool, the “Human–Robot Collaboration Quality and Well-Being Assessment Tool” (HRC-QWAT), which combines the analysis of overall defects generated during product variant manufacturing with the evaluation of human well-being in terms of stress response. The HRC-QWAT enables the evaluation and monitoring of human–robot collaboration systems during product variant production from a broader standpoint. A case study of collaborative human–robot assembly is used to demonstrate the applicability of the proposed approach. The results suggest that the HRC-QWAT can evaluate both production quality and human well-being, providing a useful tool for companies to monitor and improve their manufacturing processes. Overall, this paper contributes to developing a human-centric approach to quality monitoring in the context of human–robot collaborative manufacturing.

1 Introduction

Mass production has long been the standard in manufacturing, allowing efficient use of available resources and a corresponding reduction in production costs. In recent years, however, there has been a shift toward mass customization, an approach to production that allows products to be individually tailored to meet each customer's specific needs and preferences [1]. Several factors have driven this shift, including technological advances, increased demand for customized products, and growing awareness of mass production's environmental and social impacts [2]. As a result, manufacturers are increasingly turning to mass customization as a way to remain competitive in the marketplace and meet the evolving needs of their customers, thus representing a significant paradigm shift in how goods are produced and consumed.

While greater product variety can increase market share and sales volumes, it also increases product complexity and costs [1]. As a result, mass customization requires a flexible production system that can adapt to product volume and type variations. An effective approach to mass customization is the use of collaborative robots (also called cobots) in what is known as Human–Robot Collaboration (HRC) [3]. This approach combines the flexibility and versatility of human operators with the precision of cobots, creating a flexible system capable of assembling different product variants in the same workstation [4].

Interest in HRC has grown with the development of Industry 4.0 and is becoming increasingly important with the emergence of Industry 5.0. Indeed, the main goal of Industry 5.0 is to put human well-being at the center of production systems in order to provide sustainable prosperity for long-term development [5–7]. The technologies and processes used in Industry 5.0 are designed to enhance the capabilities of human operators, as in the case of cobots. This approach departs from traditional production methods, which prioritize efficiency and automation over the satisfaction of human workers. This shift toward human-centered production should lead to improved productivity, greater job satisfaction, and a more sustainable manufacturing sector oriented toward mass customization.

While the benefits of HRC are clear, there is a lack of comprehensive tools that can assess and monitor the quality and well-being aspects of these systems. Therefore, a need exists for an evaluation tool that not only considers production quality but also considers the well-being of the human operator. With the growing emphasis on human-centered production in Industry 5.0, such an assessment tool becomes increasingly crucial.

To address the lack of comprehensive tools for assessing and monitoring HRC systems taking into account both process performance and human-centered performance for highly customized product variants, this research aims to answer the following research question: How can a diagnostic tool be developed to assess the performance of HRC systems considering the specificities of highly customized product variants?

In response to this research question, this paper introduces a novel tool, the “Human–Robot Collaboration Quality and Well-Being Assessment Tool” (HRC-QWAT). This tool integrates two indicators, a product and process quality indicator and a human stress response indicator, to assess and monitor the quality of an HRC system, specifically tailored to accommodate the unique challenges and variations associated with highly customized product variants. Unlike previous studies that focused on individual measurements and indicators, the HRC-QWAT incorporates multiple dimensions of evaluation, encompassing product and process quality as well as human well-being. This comprehensive approach enables a holistic assessment of the collaborative assembly process, capturing the interplay between quality outcomes and the well-being of human operators.

The versatility and adaptability of the HRC-QWAT are demonstrated by its applicability to single-variant and small-batch variant production scenarios, catering to diverse production contexts. Thus, the HRC-QWAT fills a critical gap in assessing and monitoring HRC systems, providing practitioners and researchers with a valuable tool for evaluating, diagnosing, and optimizing HRC systems in the context of diverse and customized product variants.

To show the practical implementation of the HRC-QWAT, a real-life case study was conducted involving the assembly of electronic board variants using a human–robot collaborative assembly system. The methodology consists of two main phases: (1) the realization phase, in which the HRC-QWAT is constructed by collecting historical experimental data and developing a model that relates the two performance measures, i.e., total defects (cobot-related and human-related errors) and human stress response, to represent the overall quality of the system; and (2) the use phase, in which the HRC-QWAT is used as a reference for predicting future products/batches and identifying critical products in terms of defects and human stress response. The HRC-QWAT can be used to identify critical production scenarios and implement necessary corrective actions to maintain the desired quality level while taking into account the well-being of human operators, thus advancing human-centered production practices within the framework of Industry 5.0.

The remaining paper is structured as follows. Section 2 summarizes the main studies in the field of quality in HRC. Section 3 presents the HRC assembly system used as a case study. Section 4 illustrates the complexity assessment of the product HRC assemblies. Section 5 presents the data collected on product and process quality and their relationship with assembly complexity. Section 6 illustrates data on human well-being and discusses the relationship with assembly complexity of product variants. Section 7 presents the novel diagnostic tool called HRC-QWAT, which shows the potential for single-variant and small-batch production. Finally, Sec. 8 concludes the paper.

2 Literature Review

HRC is a rapidly developing field with promising applications in service, social, and industrial contexts. When designing and implementing an HRC system, evaluating its quality is crucial to ensure that the system meets individual, collective, and production needs or objectives. From an engineering point of view, quality refers to the degree to which a system, product, service, or process conforms to specified requirements and conditions [6]. Quality models, such as conceptual or definition models, are commonly used in engineering to support, evaluate, and manage quality [8]. These models outline a set of quality attributes and their potential interrelationships and serve as a guide for selecting relevant factors for experimental validation of applications, services, or systems.

However, efforts to identify and classify factors, measures, and metrics that describe quality in the HRC field are still rare, especially from a human-centered perspective [6,9]. The industrial perspective can be categorized into two interests: performance-centered and human-centered. The former considers robots as a means to optimize the production process, often involving full automation and substituting human workers with machines, while the latter aims to improve human well-being by respecting their roles, needs, jobs, talents, and rights [10–12]. Consequently, there is a trade-off between optimizing the production process and optimizing the well-being of the operators, which requires the use of performance measures specific to the collaborative environment.

According to Ref. [6], performance measures for HRC are variables that can be obtained from physical measurements or an aggregate of facts to assess the current or final state of the human, robot, process, or interaction. These measures can be grouped into four categories:

  1. Time behavior measures indicate the response and processing times required to perform functions or complete tasks.

  2. Process measures are an aggregation of facts related to task completion, workspace design, safety, or product quality.

  3. Physiological measures are obtained from body measures, such as heart rate, to understand the current state of the human.

  4. Human–robot physical measures are obtained from sensors that indicate the current state of the interaction, such as the distance between the human and the robot.

Moreover, performance metrics for HRC can be defined as a combination of direct measures used to express a rate, average, or input/output relationship [6]. Efficiency and effectiveness are considered the main attributes used to evaluate such performance. Efficiency metrics assess the use of resources, i.e., the input/output ratio. On the other hand, effectiveness metrics assess the accuracy and completeness of the achievement of specific objectives, measuring the relationship between actual and expected results. These metrics assess whether HRC systems are “doing things right” and “doing the right things,” respectively.

As far as the human-centered perspective is concerned, quality factors that have received more attention in the robotics literature are safety [13], trust [14], attitudes and acceptance [15], mental and physical workload [16,17], situation awareness and mental models [18,19], emotional responses [20,21], and anxiety [22].

Additionally, the review paper [6] identifies seven emergent research topics that could have a significant impact on future Industry 5.0 applications, including (i) non-invasive monitoring and online analysis of human factors, (ii) individualized HRC, (iii) transparent robotic systems, (iv) fluency, (v) adaptive workload systems, (vi) privacy in data-driven HRC, and (vii) benchmarks.

Regarding point (i), the Industry 5.0 paradigm aims to optimize human well-being through human-centered smart environments. However, most tools for assessing human factors in HRC require offline or intrusive techniques. Creating accurate, non-invasive, and online ergonomic assessment tools that require short preparation represents a relevant challenge in HRC for manufacturing settings [23,24]. One of the most widely used tools for this purpose is the Digital Twin (DT), which allows the human comfort and flexibility of the cobot to be improved in a non-intrusive way [25]. Several DT applications have already been implemented in the area of collaborative assembly and disassembly [26].

Concerning point (ii), applications enabling collaborations between humans and robots are generally short and static for practical reasons [27]. However, individualized machine collaboration is essential for Industry 5.0. Nowadays, various technologies have been identified that enable machine collaboration, such as human action recognition, intention prediction, augmented, virtual or mixed reality, exoskeletons, and collaborative robots. Personalized HRI systems can continuously collect and process personal and physiological data, adapt to individuals’ needs and preferences, and maintain long-term interactions [27,28]. Hedonic factors, which mostly focus on individual goals, require more research attention in Industry 5.0 applications [29]. Additionally, human-centered initiatives need to consider technologies enabling job satisfaction, work-life balance, and up-skilling and re-skilling of workers [12].

Regarding point (iii), Industry 4.0 applications use black-box AI to enhance autonomy, while Industry 5.0 requires transparent AI that interacts with humans. In HRC, this transparency includes predictability, legibility, and explainability. Legibility enables observers to quickly infer the robot's actual goals, while predictability means that the robot's behavior matches observers' expectations. Creating legible trajectories remains a broad open issue. Multimodal systems for anticipating human actions face high-dimensional data, which dimensionality reduction techniques can address. Explainable AI aims to explain robot behavior to users and improve trust and situational awareness, but challenges remain in creating methods for generating explanations and in evaluating their effectiveness [30,31].

With regard to point (iv), fluency is not considered a metric but rather a quality of interaction in HRC, as described in Ref. [32]. In the HRC environment, fluency refers to the seamless interaction between humans and robots. It involves a high level of coordination, well-synchronized joint activities with precise and efficient timing, and dynamic adaptation of plans and actions. However, fluency is still a relatively new concept in HRC research, and proposed metrics for fluency are often task-specific [32]. Recent studies, such as those of Hoffman [32], have categorized fluency metrics as subjective or objective. However, due to the somewhat vague and ephemeral nature of fluency, it remains a topic of discussion in the robotics community, with further research needed to fully understand the factors affecting fluency and to design metrics that can assess it in various collaborative settings.

Concerning point (v), real-time workload assessment algorithms using physiological measures, such as heart rate, electrodermal activity, and skin temperature, can accurately estimate optimal workload levels in humans [17,33]. This information can be used to improve task performance, reduce errors, and prevent accidents by changing interaction mediums, level of autonomy, and reallocating tasks and responsibilities between humans and robots [33]. Such systems are called adaptive workloads or adaptive teaming systems [17]. The use of these algorithms in various human–robot teaming scenarios remains an open challenge [34].

Regarding point (vi), data-driven technologies like big data, machine learning, cloud computing, and IoT can enhance production performance and human working conditions. However, Industry 4.0 has largely overlooked the human factor and the privacy issues arising from the collection, storage, and processing of personal data that these technologies entail [35,36]. In human-centered manufacturing, privacy efforts must focus on protecting workers’ personal information and ensuring data security [35]. Cybersecurity assessment criteria for HRI in automobile manufacturing have been proposed [37], but comprehensive metrics are needed for HRI and HRC [37].

Finally, with respect to point (vii), international robotics competitions have become a valuable tool for evaluating the performance of robotics systems, providing a form of reproducibility and enabling the evaluation of non-competing systems. Although the scoring mechanism tends to hide the underlying characteristics of the system, competitions allow systems to be compared by linking relevant metrics to the score and explaining which aspects influenced the score and in what way. Typically, the score is based on objective task completion (e.g., image classification accuracy), with few competitions evaluating safety in HRI. However, there is a shift toward more human-centered objective evaluations, exemplified by the safety score in the Future Convenience Store Challenge [38,39].

Based on the literature review, it is evident that many approaches proposed in the field of HRC have a performance-centered perspective, which fails to consider the full potential of HRC applications. Toward a human-centered society and industry, HRC researchers should broaden their perspective beyond mere task fulfillment and adopt holistic approaches that enable robotic systems to achieve both collective and individual goals. In line with this viewpoint and to help address the challenges identified in the seven emerging research topics, the “Human–Robot Collaboration Quality and Well-Being Assessment Tool” (HRC-QWAT) has been proposed. In detail, the proposed tool can respond to the challenges mentioned above as follows:

  1. Non-invasive monitoring and online analysis of human factors: HRC-QWAT allows for real-time, non-invasive monitoring of human operators’ stress levels and well-being through the integration of wearable devices and sensors. This real-time evaluation ensures a prompt intervention to reduce stress levels, fostering a more efficient and balanced working environment.

  2. Individualized HRC: The tool offers the possibility of individualized HRC by considering the unique physiological responses of each worker. This personalized approach promotes a more efficient and harmonious human–robot interaction, potentially leading to improved productivity and well-being.

  3. Transparent robotic systems: Transparency is facilitated as the tool evaluates the collaborative process based on clear performance indicators and stress responses. These evaluations can be shared with human operators, fostering an understanding of the robot's function and promoting trust and collaboration.

  4. Fluency: By measuring the quality of the collaboration through multiple performance indicators, the HRC-QWAT contributes to assessing the fluency of the human–robot interaction and collaboration. This analysis promotes the optimization of joint actions and the creation of more fluid and synchronized interactions.

  5. Adaptive workload systems: The HRC-QWAT real-time monitoring of human stress responses can inform adaptive systems. By detecting stress or overwork, the system can automatically adjust the workload distribution between humans and robots, improving efficiency and reducing the risk of human error or health implications.

  6. Privacy in data-driven HRC: While HRC-QWAT uses data-driven methods for evaluation, it is designed with the utmost respect for privacy standards. Personal and sensitive data are strictly used for the intended purpose of enhancing human–robot interaction and are safeguarded according to the highest security protocols.

  7. Benchmarks: The proposed tool also serves as a benchmarking instrument for HRC in different scenarios. By providing comprehensive metrics on both the performance of the collaborative process and the well-being of the human operator, the HRC-QWAT offers a valuable standard against which different collaboration setups can be compared.

Accordingly, the HRC-QWAT serves as a comprehensive tool, specifically addressing the identified challenges in HRC, thereby offering a strategic instrument for human-centered Industry 5.0.

3 Human–Robot Collaboration Assembly System

An experimental campaign is conducted to assemble six different variants of electronic boards (from variant V1 to variant V6) using the ARDUINO UNO starter kit (ARDUINO®). The choice of electronic boards is based on the fact that, by using the same components, highly customized products with varying levels of complexity can be assembled (as will be discussed in Sec. 4). Moreover, these boards allow real-time verification of the correct functioning of the products, i.e., their proper assembly.

The ARDUINO UNO starter kit is composed of (i) the components that are assembled to make up the various boards listed in Table 1 (e.g., the jumper wires that carry current between the various components); (ii) the microcontroller, i.e., a small computer that enables the circuits to function; and (iii) the breadboard, i.e., a board on which the actual circuit can be built. The breadboard consists of rows and columns of holes that conduct electricity through thin metal connectors under the plastic screen, allowing the circuit components to be connected. The ARDUINO UNO breadboard is a “solderless” breadboard, as the components do not need to be soldered but are simply inserted into the holes. Figure 1(a) shows an example of an assembled electronic board (variant V3), while Fig. 1(b) displays the product circuit diagram.

Fig. 1
Example of an assembled electronic board (variant V3): (a) final product assembled and (b) circuit diagram
Table 1

Components of the six electronic board variants (V1–V6)

V1V2V3V4V5V6
Breadboard (BB)111111
Long wires (LW)128913
Short wires (SW)135364
Resistors (R)114622
Pushbuttons (PB)2421
Light emitting diode (L)111
Phototransistor (F)3
Potentiometer (PT)11
Piezo (PZ)1
Liquid crystal display (LCD)1
Battery snap (BS)1
DC Motor (M)1
H-bridge (HB)1
No. of components4917222423

Each of the selected products has a varying number of components, which are connected to the breadboard. As outlined in Sec. 4, the six electronic boards were chosen to span a broad range of assembly complexity. Table 1 indicates the type and number of components required for each of the six electronic board variants (V1–V6).

The assembly of the six electronic board variants was conducted using a Universal Robots™ UR3e cobot, as depicted in Fig. 2. The boards were assembled using an OnRobot™ RG6 gripper, a versatile gripper capable of handling small objects and a range of other objects. Six skilled operators, proficient in electronics and electrical engineering, were involved in the assembly process of all six electronic boards, following a random order to prevent any learning effects. During the preliminary stages, each operator underwent training sessions to ensure consistent proficiency among the participants and minimize the potential impact of varying skill levels on the results. These training sessions were carefully designed to familiarize the operators with the assembly process and equipment, allowing them to develop a solid understanding of the tasks involved in the HRC assembly. Table 2 provides an overview of the participants’ characteristics, including relevant information such as age, gender, domain knowledge of HRC, and domain knowledge of assembly tasks. The inclusion of skilled operators with expertise in electronics and electrical engineering helped ensure that the participants were familiar with the intricacies of electronic board assembly and could contribute effectively to the HRC trials.

Fig. 2
Collaborative assembly workstation showing (a) the single-armed UR3e cobot equipped with the OnRobot RG6 gripper and (b) product components
Table 2

Participants’ characteristics

Participant   Age   Gender   Domain knowledge of HRC   Domain knowledge of electronic board assembly
P1            21    Female   Intermediate              Intermediate
P2            21    Male     Intermediate              Expert
P3            22    Male     Expert                    Expert
P4            21    Male     Intermediate              Expert
P5            27    Male     Intermediate              Expert
P6            23    Male     Intermediate              Intermediate

In the assembly phase, the cobot handed over the required components to the operator, who assembled the electronic boards in a predetermined order, defined based on circuit theory [40]. The operator retained full control of the logistic tasks, activating the cobot by means of a button. After the assembly was completed, an experienced external operator (who was not involved in the assembly) conducted an offline quality control check to identify any defects in the final product. Data on overall assembly defects (cobot-related and human-related errors) were collected during the trials, as described in Sec. 5. Additionally, data on the operators’ stress response during the assembly phase were collected, as described in Sec. 6.

4 Complexity Analysis

In scientific literature, complexity is typically used as a metric to predict production performance, including production times and defects. Indeed, it is often found that a reduction in complexity is associated with a significant performance improvement [41–43]. In this study, the structural complexity model, first introduced by Sinha et al. [44] and later adapted by Alkan and Harrison [45] and Verna et al. [43,46], serves as the foundation for assessing the assembly complexity of selected ARDUINO products. This model, originally developed for manual and fully automated assembly, is extended to the HRC assembly of the present case study, where the robot primarily performs organizational and logistical tasks, such as selecting components to be assembled in a predetermined sequence and delivering them to the human assembler. Adapting and integrating the structural complexity model to the domain of HRC assembly for highly customized product variants represents an innovative aspect of this study. This enables a quantitative assessment of assembly complexity within the context of mass customization.

The six product variants were selected to cover a wide range of assembly complexity. In the case study, each hole on the breadboard was modeled as a single component. This assumption allows multiple connections between the components and the board to be modeled and distinguished from single connections. For example, pushbuttons, i.e., the components that close a circuit when pressed, consist of four different pins that need to be connected to the board. As this type of connection is more complex than connecting a single-pin component, it was necessary to model the individual holes on the board to distinguish these different cases.

The structural complexity model used to model the HRC assembly complexity is based on Hückel's molecular theory [47] and defines the structural complexity of any network-based engineering system as a function of the complexity of individual components (C1), the pairwise interaction between connected components (C2), and the effects of the overall system topology (C3). The structural complexity, represented as C, is a combination of these factors and can be expressed as

C = C_1 + C_2 \cdot C_3    (1)
In Eq. (1), C1 represents the complexity of managing and interacting with the individual components of a product when they are considered separately, i.e., the handling complexity of the product. C1 can be defined as follows:
C_1 = \sum_{p=1}^{N} h_p    (2)
where N is the total number of product components and hp is the handling complexity of component p. One of the most widely accepted models for calculating a handling complexity index of individual components is the Lucas method [45], based on Design For Assembly (DFA). This method uses a point scale that provides a relative measure of assembly difficulty (a normalized handling complexity index) based on the size, weight, handling difficulty, and orientation (alpha and beta symmetry) of individual components (see Table 3). Using the Lucas method, each component can be assigned a different handling complexity index (see Table 4). The higher the value of hp, the more difficult the component is to handle and place on the board. These values are obtained as follows:
h_p = (d_{hA} + \frac{1}{N_B} \sum d_{hB} + d_{hC} + d_{hD}) / h_{max}    (3)

where d_{hi}, i ∈ {A, B, C, D}, is the handling difficulty of attribute i, N_B is the number of applicable handling difficulties related to attribute B, and h_max is the theoretical maximum value of the handling index (i.e., 6.9, according to Table 3).
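
As an illustration only, the following Python sketch implements Eq. (3). The function name is ours, and the attribute scores in the example call are picked from Table 3 purely for illustration; they are not the scores behind the values reported in Table 4.

```python
def handling_complexity(d_A, d_B, d_C, d_D, h_max=6.9):
    """Normalized handling complexity h_p of a component (Eq. (3)).

    d_A, d_C, d_D : single difficulty scores selected from Table 3
    d_B           : list of all applicable attribute-B difficulty scores
    """
    n_B = max(len(d_B), 1)                 # number of applicable attribute-B difficulties
    return (d_A + sum(d_B) / n_B + d_C + d_D) / h_max

# Illustrative call: "easy, one hand" (1), delicate (0.4) and flexible (0.6),
# alpha symmetry easy to orient (0.1), beta symmetry easy to orient (0.2)
print(handling_complexity(1, [0.4, 0.6], 0.1, 0.2))
```
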
Table 3

Difficulty of component handling attributes (adapted from Ref. [48])

Attribute i; Description; Handling difficulty dh

A—Size and weight (one of the following):
  Very small—requires handling aids: 1.5
  Easy—requires one hand only: 1
  Large and/or heavy—requires more than one hand or aid: 1.5
  Large and/or heavy—requires hoist or more than one person: 2

B—Handling difficulty (all that apply):
  Delicate: 0.4
  Flexible: 0.6
  Sticky: 0.5
  Tangible: 0.8
  Severely nest: 0.7
  Sharp/abrasive: 0.3
  Untouchable: 0.5
  Gripping problem/slippery: 0.2
  Automatic handling—no difficulty: 0

C—Alpha symmetry (one of the following):
  Symmetrical—no orientation required: 0
  Easy to orient—end to end: 0.1
  Difficult to orient—end to end: 0.5

D—Beta symmetry (one of the following):
  Rotational orientation is not required: 0
  Easy to orient—end to end: 0.2
  Difficult to orient—end to end: 0.4
Table 4

Handling complexity (hp) of components and connection complexity (cpr) of components with the breadboard in the six electronic board variants (V1–V6)

Component             hp    cpr
Breadboard (BB)       1.7   —
Long wires (LW)       1.8   3.7, 5.3, 6.3
Short wires (SW)      2.3   3.7, 5.3
Resistors (R)         1.8   3.8
Pushbuttons (PB)      1.9   4.2
LED (L)               1.9   4.2
Phototransistor (F)   1.9   4.2
Potentiometer (PT)    1.7   5.8
Piezo (PZ)            1.7   3.7
LCD (LCD)             3.0   6.4
Battery snap (BS)     1.8   3.7
DC motor (M)          1.8   3.7
H-bridge (HB)         1.9   4.2
In Eq. (1), C2 is the complexity of connections and liaisons between components, calculated as the sum of the complexities of the pairwise connections present in the product structure, according to Eq. (4)
C_2 = \sum_{p=1}^{N-1} \sum_{r=p+1}^{N} c_{pr} e_{pr}    (4)
where cpr is the complexity in achieving a connection between components p and r, and epr is the (p,r)th entry of the binary adjacency matrix (AM) of the product. It has to be noted that in this specific case study, given that all components are connected to the breadboard, the rth component is always the breadboard.
The complexity cpr can be evaluated by the Lucas method [45], by using the difficulty of connection attributes reported in Table 5, and is obtained as follows:
c_{pr} = (d_{cE} + d_{cF} + d_{cG} + d_{cH} + d_{cI} + d_{cJ} + d_{cK}) / c_{max}    (5)

where d_{cj}, j ∈ {E, F, G, H, I, J, K}, is the connection difficulty of attribute j, and c_max is the theoretical maximum value of the connection index (i.e., 13.1, according to Table 5).
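
A corresponding sketch of Eq. (5) is given below; again, the function name and the attribute scores in the example call are illustrative assumptions, not values taken from the case study.

```python
def connection_complexity(d_scores, c_max=13.1):
    """Normalized connection complexity c_pr (Eq. (5)): sum of attribute E-K scores over c_max."""
    return sum(d_scores) / c_max

# Illustrative call: holding down required (2), self-securing (1.3), not from above (0.1),
# single insertion (0), visible (0), difficult to align (0.7), resistance to insertion (0.6)
print(connection_complexity([2, 1.3, 0.1, 0, 0, 0.7, 0.6]))
```
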
Table 5

Difficulty of component connection attributes (adapted from Ref. [48])

Attribute j; Description; Connection difficulty dc

E—Component placing (one of the following):
  Self-holding: 1
  Holding down required: 2

F—Component fastening (one of the following):
  Self-securing: 1.3
  Screwing: 4
  Riveting: 4
  Bending: 4
  Mechanical deformation: 4
  Soldering or welding: 6
  Adhesive: 5

G—Direction (one of the following):
  Straight line from above: 0
  Straight line not from above: 0.1
  Not straight line and/or bending is required: 1.6

H—Insertion (one of the following):
  Single: 0
  Multiple: 0.7
  Simultaneous multiple insertions: 1.2

I—Restricted vision (one of the following):
  Visible: 0
  Not visible: 1

J—Difficult to align (one of the following):
  No: 0
  Yes: 0.7

K—Resistance to insertion (one of the following):
  No: 0
  Yes: 0.6

Thus, the Lucas method provides a normalized assembly index that penalizes the physical attributes (e.g., component positioning and fastening, assembly direction, visibility, alignment, and resistance to insertion) that affect assembly difficulty.

In Eq. (4), epr is defined by using the symmetric AM matrix of the product (see Fig. 3). It can take two different values:
e_{pr} = 1 if there is a connection between p and r, and 0 otherwise    (6)
Fig. 3
AM matrix of variant V3

Each entry in the AM matrix indicates the presence of an assembly connection between the component and the breadboard. As an example, Fig. 3 shows the AM matrix of product variant V3.

As shown in Table 4, the connection complexity of each component with the breadboard (cpr) in the six electronic board variants (V1–V6) can take different values depending on multiple factors. For example, the connection complexity of long wires to the breadboard ranges from 3.7 to 6.3, depending on how the component is inserted into the breadboard and what other components are already connected. A complexity score of 5.3, for instance, is given if the wire needs to be bent to make the connection, and 6.3 if the connection is made with reduced visibility.

Finally, in Eq. (1), C3 represents the topological complexity, i.e., the complexity associated with the product architecture pattern, which is defined as follows:
C_3 = E_{AM} / N = (\sum_{q=1}^{N} \delta_q) / N    (7)
where EAM is the matrix energy of AM, i.e., the sum of the singular values δq of AM [43]. It increases as the system topology shifts from centralized to more distributed architectures [44].
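
To make the model concrete, the sketch below assembles Eqs. (1), (2), (4), and (7) into a single function. The variable names and the three-component toy example are illustrative assumptions, not case-study data.

```python
import numpy as np

def structural_complexity(h, c, A):
    """Structural complexity C = C1 + C2 * C3 (Eq. (1)).

    h : (N,) handling complexities h_p of the N components
    c : (N, N) connection complexities c_pr (used only where A[p, r] = 1)
    A : (N, N) binary, symmetric adjacency matrix with entries e_pr
    """
    N = len(h)
    C1 = np.sum(h)                                       # Eq. (2): handling complexity
    C2 = np.sum(np.triu(c * A, k=1))                     # Eq. (4): pairwise connections, p < r
    C3 = np.linalg.svd(A, compute_uv=False).sum() / N    # Eq. (7): matrix energy over N
    return C1 + C2 * C3

# Toy example: two components each connected only to a hub (e.g., a board); purely illustrative
h = np.array([1.7, 1.8, 2.3])
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
c = 0.5 * np.ones((3, 3))        # illustrative uniform connection complexity
print(structural_complexity(h, c, A))
```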

Table 6 lists the complexities C1, C2, and C3 of the selected product variants, ordered by increasing total assembly complexity C. It is worth noting that an increase in complexity does not always imply an increase in the number of components. In fact, although variant V5 has more components than variant V6, the total complexity of variant V6 is higher than that of variant V5. This is due to the different nature of the components that compose the different products, the nature of the connections, and the architecture of the final assembly.

Table 6

Complexities of the six electronic boards (V1–V6)

      V1     V2     V3      V4      V5      V6
C1    1.64   3.12   5.35    6.59    7.49    6.97
C2    2.90   5.89   10.03   13.39   15.83   18.24
C3    0.75   0.57   0.45    0.40    0.37    0.39
C     3.80   6.50   9.83    11.95   13.37   14.12

5 Product and Process Quality Analysis

In this section, complexity measures are integrated into the analysis of product and process quality, providing a novel perspective on the relationship between assembly complexity and the occurrence of defects in collaborative assembly processes for customized products.

During the manufacturing process, quality data on the overall defectiveness of the product and process were collected to assess the quality of the HRC system (see Table 7). Specifically, for each product variant assembly, the total number of defects (both in-process defects occurring during assembly—referred to as D1—and offline defects detected during offline quality control—referred to as D2) was recorded. A classification was made for both types of defects, D1 and D2 (see Table 8). During the manufacturing process, the assembly operators and the quality control operator filled out Table 8, indicating the number of defects found in each category for each assembled board. Certain defect categories, such as “Unpicked Component” and “Slipped Component,” specifically relate to errors made by the cobot during the assembly phase. These categories reflect instances where the cobot failed to pick up a component or where a component slipped during the cobot handling. It is important to highlight that these defect categories capture cobot-related errors occurring during the assembly phase. Furthermore, it should be noted that the defects recorded in the in-process and offline phases reflect a combination of both cobot-related and human-related errors. This means that the defect data collected encompasses the performance of both the cobot and the human operators involved in the assembly process. To achieve a holistic view of the quality of the system, the total number of assembly defects Dtot (i.e., the sum of in-process and offline defects) was considered and analyzed (see Table 7).

Table 7

Experimental values of total defects (Dtot) and human stress response (HS) recorded in each trial

Participant   Variant   C       Dtot   HS
1             V4        11.95   3      8.97
1             V6        14.12   7      34.87
1             V1        3.80    0      0.00
1             V5        13.37   5      10.00
1             V3        9.83    3      4.02
1             V2        6.50    1      3.13
2             V5        13.37   4      12.70
2             V4        11.95   3      16.65
2             V3        9.83    3      8.46
2             V6        14.12   5      20.45
2             V2        6.50    0      0.33
2             V1        3.80    0      0.00
3             V3        9.83    0      8.95
3             V6        14.12   6      23.27
3             V1        3.80    0      0.00
3             V4        11.95   3      12.16
3             V2        6.50    2      2.23
3             V5        13.37   3      11.30
4             V2        6.50    2      7.35
4             V4        11.95   3      14.90
4             V1        3.80    0      0.00
4             V3        9.83    0      6.35
4             V6        14.12   4      19.84
4             V5        13.37   6      11.45
5             V1        3.80    0      0.00
5             V3        9.83    2      11.12
5             V5        13.37   3      9.21
5             V6        14.12   5      22.01
5             V4        11.95   0      11.55
5             V2        6.50    1      5.00
6             V6        14.12   6      27.88
6             V2        6.50    0      1.04
6             V5        13.37   1      17.31
6             V3        9.83    1      8.75
6             V1        3.80    0      0.00
6             V4        11.95   2      7.75
Table 8

Number of defects classified into in-process (D1) and offline (D2) defects for the six assembled products

Variant   Incorrect component (D1/D2)   Misplaced component (D1/D2)   Unpicked component (D1)   Slipped component (D1)   Defective component (D1/D2)   Improperly inserted component (D1/D2)
V1        0/0                           0/0                           0                         0                        0/0                           0/0
V2        0/0                           1/1                           3                         0                        0/0                           0/1
V3        0/0                           5/2                           3                         0                        0/0                           0/1
V4        0/0                           4/3                           4                         0                        0/0                           3/0
V5        0/0                           6/3                           11                        2                        0/0                           0/0
V6        0/0                           11/11                         10                        0                        0/0                           1/0
Total     0/0                           27/20                         31                        2                        0/0                           4/2

Before the analysis, outliers were screened and excluded using the Modified Interquartile Range Method, an exclusion rule widely recognized as practical and effective for identifying outliers while taking the sample size into account [49]. The relationship between the total number of defects recorded by the six operators for each of the six variants of electronic boards and the complexity of the assembly (calculated as described in Sec. 4) was then analyzed. The “operator factor” was not considered in the analysis after checking its non-significance at a 95% confidence level using a two-way ANOVA (p-value of 0.290). The Poisson regression model was used for the analysis, as total defects are count data [50]. The logarithm and square root link functions were considered, and different models were compared up to the third order of the predictor (i.e., assembly complexity C). The selection of the best model was made based on Akaike's Corrected Information Criterion (AICc) and Bayesian Information Criterion (BIC), goodness-of-fit tests (Deviance and Pearson tests), and deviance residual plots [50,51]. The Deviance and Pearson tests assessed whether the predicted number of events deviated from the observed number in a way that was not predicted by the Poisson distribution. If the p-value was less than the significance level, the null hypothesis that the Poisson distribution provided a good fit could be rejected [50,51].

According to the results, the most appropriate Poisson model to describe the relationship between defects and complexity was the one using the square root link function, represented as
D_{tot} = (k_1 \cdot C)^2    (8)

where Dtot is the total number of defects (in-process and offline), C is the assembly complexity evaluated according to Eq. (1), and k1 is the regression coefficient. The results of the Poisson regression analysis, reported in Table 9, showed that the relationship between Dtot and C was statistically significant. In addition, the analysis of the deviance residuals and the goodness-of-fit tests of Deviance and Pearson (where p-values are higher than the significance level of 0.05) indicated that the model fitted the data well. Moreover, a very high value of the deviance R2 was obtained.
Table 9

Poisson regression output for total defects (Dtot) versus assembly complexity (C)

k1      SE (k1)   Coefficient p-value   Deviance R2   Deviance test p-value   Pearson test p-value
0.144   0.008     <0.0005               99.29%        0.557                   0.933

Note: Model is in the form Dtot = (k1 · C)^2.
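
For readers who wish to reproduce the fit, the following sketch re-estimates Eq. (8) from the trial data of Table 7 using a square-root-link Poisson GLM. The choice of the statsmodels package is an assumption (the paper does not state which software was used); the estimated coefficient should be close to the k1 reported in Table 9.

```python
import numpy as np
import statsmodels.api as sm

# Total defects per trial (Table 7), grouped by variant: {assembly complexity C: Dtot of the six operators}
defects = {
    3.80:  [0, 0, 0, 0, 0, 0],   # V1
    6.50:  [1, 0, 2, 2, 1, 0],   # V2
    9.83:  [3, 3, 0, 0, 2, 1],   # V3
    11.95: [3, 3, 3, 3, 0, 2],   # V4
    13.37: [5, 4, 3, 6, 3, 1],   # V5
    14.12: [7, 5, 6, 4, 5, 6],   # V6
}
C = np.array([c for c, d in defects.items() for _ in d])
D = np.array([x for d in defects.values() for x in d])

# Poisson GLM with square-root link and no intercept: E[Dtot] = (k1 * C)^2, as in Eq. (8)
link = sm.families.links.Sqrt()          # on statsmodels < 0.14 use sm.families.links.sqrt()
res = sm.GLM(D, C[:, None], family=sm.families.Poisson(link=link)).fit()
print(res.params[0], res.bse[0])         # k1 and SE(k1), expected close to Table 9
print(res.deviance, res.pearson_chi2)    # statistics behind the Deviance and Pearson tests
```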

Figure 4(a) shows the total defects recorded during the experiment together with the curve predicted by the Poisson regression and its 95% confidence and prediction intervals. Moreover, Fig. 4(b) shows the deviance residual plots, where the residuals appear satisfactory overall. Using the Anderson–Darling test, the hypothesis of normality of the residual distribution cannot be rejected at the 95% confidence level (p-value = 0.194). The results obtained for product and process quality show that the increase in assembly complexity of the variants leads to an increase in the total number of defects, following a non-linear trend.

Fig. 4
Total defects (Dtot) versus assembly complexity (C): (a) Poisson regression model and (b) deviance residual plots

6 Human Well-Being Analysis

In this section, existing methodologies for assessing human well-being are integrated and adapted to capture the impact of assembly complexity on the human stress response in the context of mass customization, showcasing the originality of the proposed approach.

Physiological measures can be used to assess the state of human well-being during production, providing an objective measure compared to self-report tools, which may suffer from retrospective post-task bias [52]. Electrodermal activity (EDA) data are used in this study as a measure of human well-being, as EDA is commonly used as an indicator of the human stress response [53]. The Empatica E4 wristband (see Fig. 5(a)), a non-invasive biosensor that records EDA information at 4 Hz, was used to collect the EDA data. In addition to EDA, the Empatica E4 also records information on blood volume pulse (BVP), operator motion via a three-axis accelerometer (ACC), heart rate variability (HRV), and skin temperature (TMP). Figure 5(b) shows an example of the raw output provided by the Empatica E4.

Fig. 5
(a) Empatica E4 wristband and (b) Empatica E4 outputs versus time

For each test performed by the operators, this raw signal was recorded and then analyzed using the EDA Explorer software [54]. This software removes any external noise from the raw signal and decomposes the EDA signal into two types of signals: the tonic signal and the phasic signal. The tonic signal refers to the long-term fluctuations of the EDA signal that are not explicitly triggered by external stimuli. Changes in Skin Conductance Level (SCL) are the best indicator of tonic activity. On the other hand, phasic activity refers to transient changes in EDA that are triggered by typically perceived and externally delivered stimuli. It is best characterized by Skin Conductance Response (SCR) changes. Accordingly, the SCR can be defined as a change in the amplitude of the EDA signal from the SCL to a peak in the response [53].
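
The paper performs this decomposition with the EDA Explorer software. Purely as a rough stand-in (not the EDA Explorer algorithm), the sketch below estimates the tonic level with a long median filter, treats the residual as the phasic component, and extracts SCR peak amplitudes with SciPy. The sampling rate matches the Empatica E4, while the window length and the prominence threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, medfilt

FS = 4  # Empatica E4 EDA sampling rate, Hz

def scr_peak_amplitudes(eda, fs=FS, tonic_window_s=30, min_prominence=0.01):
    """Crude tonic/phasic split of a raw EDA array and SCR peak amplitudes (stand-in for EDA Explorer)."""
    kernel = int(tonic_window_s * fs) | 1        # odd kernel size required by the median filter
    tonic = medfilt(np.asarray(eda, float), kernel_size=kernel)   # slow SCL component
    phasic = eda - tonic                         # fast SCR component
    peaks, props = find_peaks(phasic, prominence=min_prominence)
    return props["prominences"]                  # peak prominence used as a proxy for SCR amplitude
```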

In line with its widespread use [52,53], the average value of the SCR peak amplitude was used as the stress indicator for each assembly worker in this study. The peak amplitude values were then normalized in the formulation of the final stress indicator to remove differences between individuals. As a result, the human stress response (HS) indicator for each operator can be defined as
HS = [ ((\sum_{w=1}^{N_P} a_w) / N_P - a_{min}) / (a_{max} - a_{min}) ] \cdot 100    (9)

where a_w is the amplitude of the wth SCR peak, N_P is the total number of SCR peaks during the assembly of a given product variant, a_min is the minimum amplitude of the SCR peaks, and a_max is the maximum amplitude of the SCR peaks (both related to each operator).
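
A minimal implementation of Eq. (9) could look as follows; the function name and the amplitudes in the example call are hypothetical.

```python
import numpy as np

def human_stress_response(amplitudes, a_min, a_max):
    """HS indicator of Eq. (9): normalized mean SCR peak amplitude, expressed as a percentage.

    amplitudes : SCR peak amplitudes recorded while assembling one product variant
    a_min, a_max : minimum and maximum SCR peak amplitudes of the same operator
    """
    mean_amplitude = np.mean(amplitudes)
    return (mean_amplitude - a_min) / (a_max - a_min) * 100

# Illustrative call with hypothetical peak amplitudes for one assembly
print(human_stress_response([0.12, 0.30, 0.21], a_min=0.05, a_max=0.40))
```
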
The human stress response data obtained during the 36 assembly processes (i.e., the six product variant assemblies performed by the six operators) are reported in Table 7. The HS value of each operator is related to the assembly complexity (as per Sec. 4) in order to model the function that captures their relationship. The “operator factor” was not considered in the analysis after checking its non-significance at 95% confidence level using a two-way ANOVA (p-value of 0.999). Figure 6 shows the two-term power curve fitting relating human stress response and product variant assembly complexity, in the form
HS = k_2 \cdot C^{k_3}    (10)
where HS is the human stress response, C is the assembly complexity evaluated according to Eq. (1), and k2 and k3 are the regression coefficients.
Fig. 6
Human stress response (HS) versus assembly complexity (C): (a) non-linear regression model, and (b) residual plots

This model was the best-fitting model compared to various linear and non-linear models, considering the goodness-of-fit statistics and residual analysis [55]. The statistical significance of the parameter estimates is confirmed by checking that the 95% confidence intervals for the parameters, calculated from the corresponding Standard Errors (SE) reported in Table 10, exclude zero [56,57]. For non-linear models, the S-value, i.e., the standard error of the regression, is reported as the goodness-of-fit measure in place of R2 [57]. The residual plots in Fig. 6(b) appear satisfactory overall, and using the Anderson–Darling test, the hypothesis of normality of the residual distribution cannot be rejected at the 95% confidence level. It should be noted that, according to the result obtained, non-linear regression is preferable to linear quadratic regression, as linearizing the function to perform linear regression can lead to bias in the predictions [58]. According to the results shown in Table 10 and Fig. 6, there is a super-linear relationship between human stress response and the complexity of product variant assembly. This result, one of the first attempts to quantify the relationship between assembly complexity and human stress response, shows that as the complexity of the product assembly increases, the assembly process becomes more challenging and entails a higher degree of mental workload and cognitive effort, leading to a more than proportional increase in human stress response.

Table 10

Non-linear regression output for human stress response (HS) versus assembly complexity (C)

k2      SE (k2)   95% CI for k2        k3      SE (k3)   95% CI for k3    S
0.004   0.006     (4·10^-5, 0.076)     3.222   0.594     (2.086, 5.025)   4.284

Note: Model is in the form HS = k2 · C^k3.
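
As an illustration, Eq. (10) can be refitted to the per-trial (C, HS) data of Table 7 with SciPy's curve_fit. The tooling choice and the starting guess are our assumptions, and the estimates should come out close to the values in Table 10 (small differences are possible if the original analysis excluded outliers).

```python
import numpy as np
from scipy.optimize import curve_fit

# Human stress response per trial (Table 7), grouped by variant complexity C
stress = {
    3.80:  [0.00, 0.00, 0.00, 0.00, 0.00, 0.00],       # V1
    6.50:  [3.13, 0.33, 2.23, 7.35, 5.00, 1.04],        # V2
    9.83:  [4.02, 8.46, 8.95, 6.35, 11.12, 8.75],       # V3
    11.95: [8.97, 16.65, 12.16, 14.90, 11.55, 7.75],    # V4
    13.37: [10.00, 12.70, 11.30, 11.45, 9.21, 17.31],   # V5
    14.12: [34.87, 20.45, 23.27, 19.84, 22.01, 27.88],  # V6
}
C = np.array([c for c, h in stress.items() for _ in h])
HS = np.array([x for h in stress.values() for x in h])

def power_model(c, k2, k3):
    """Two-term power model of Eq. (10)."""
    return k2 * c ** k3

(k2, k3), _ = curve_fit(power_model, C, HS, p0=[0.01, 3.0])
print(k2, k3)   # expected to be close to the estimates reported in Table 10
```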

7 HRC-QWAT

This section introduces the “Human-Robot Collaboration Quality and Well-Being Assessment Tool” (HRC-QWAT), a tool designed to synthesize previous analyses of quality and human well-being, by directly relating HS and Dtot, regardless of the complexity of the product assembled. The selection of total defects and human stress response in the HRC-QWAT tool was based on their significant impact on evaluating the performance of the HRC system, including both product quality and human well-being. The tool establishes a direct relationship between human stress response and total defects, enabling a comprehensive assessment of system performance within the context of single-variant and small-batch variant production. This design is specifically tailored to address the distinct challenges and demands posed by customized production scenarios, where the adaptability of the production process and the individuality of each assembly play pivotal roles.

Two typologies of HRC-QWAT are proposed. The first typology is intended for single-variant production of highly customized products produced one by one in the HRC system, even if repeated over time. This type of production involves the manufacture of a single product variant at a time, typically in response to specific customer orders or market demand. The production process is adapted as required for each variant, which can result in longer lead times and higher production costs. In this scenario, the company is interested in monitoring the performance of each individual product variant assembly in terms of quality and human well-being. On the other hand, the second typology of HRC-QWAT is proposed to provide companies with a diagnostic method for products of the same variant manufactured in small batches, after each of such productions. This type of production involves the manufacture of small batches of the same product variant, typically in response to forecasted demand or market trends. The production process is adapted for each batch, allowing a product variant to be produced more efficiently and cost-effectively than in the single variant scenario. The choice between single variant production and small-batch variant production generally depends on factors such as demand variability, lead time requirements, and production costs. Single variant production is best suited for highly customized products with low demand, while small-batch variant production is more efficient for producing a range of products with moderate to high demand.

The use phase of the proposed tool is the same for practitioners in both cases, while the difference lies in the realization phase of the HRC-QWAT.

In both typologies, the HRC-QWAT allows for the assessment of quality and human well-being in the collaborative assembly process, taking into account the unique characteristics and requirements of each production scenario. This tool offers a comprehensive evaluation of the HRC system's performance, considering the relationship between human stress response (HS), total defects (Dtot), and the complexity of the assembly process. Although the complexity indicator is not explicitly included as a separate metric in the HRC-QWAT, its influence on the performance measures is implicitly accounted for in the evaluation. As discussed in the previous sections, assembly complexity plays a crucial role in affecting performance metrics, particularly in highly customized and personalized product assemblies within the same product family [5961]. Although the HRC-QWAT does not directly measure complexity, it considers its impact on the overall performance of the collaborative process. By capturing the relationship between human stress response, total defects, and the intricate nature of the assembly, the tool indirectly accounts for the effects of assembly complexity on the HRC system's performance.

To construct the HRC-QWAT, the following operational steps should be taken. First, a set of historical experimental data representative of the production must be collected. In the case of the HRC-QWAT for single variant production, a reasonable number of products (at least about 30, for robust regression parameter estimation [62]) should be produced, and quality and human stress responses should be measured (according to Secs. 5 and 6, respectively). On the other hand, for the HRC-QWAT for small batches, an adequate number of production units should be collected for each batch (at least about fifteen units for each product type, if possible [62]) and the average performance measures should be obtained for each batch. As mentioned in Secs. 5 and 6, it is advisable to perform preliminary data analysis using conventional statistical techniques to detect and filter outliers [63].

Second, the model relating the two performance measures should be developed to represent the overall quality of the systems, in terms of product/process quality and human well-being. Considering the case study, the combination of the models in Eqs. (8) and (10) leads to a linear model. Such a linear model is the best fit when considering single variant production, as also confirmed by the goodness-of-fit statistics and residual analysis [55]. Figure 7(a) depicts the prediction model relating human stress response HS to total defects Dtot, and Fig. 7(b) shows the residual plots. The output of the regression is shown in Table 11.

Fig. 7
Human stress response (HS) versus total defects (Dtot) for single variant production: (a) linear regression model, and (b) residual plots
Table 11

Linear regression output for human stress response versus total defects for single variant production and small-batch variant production

                                  Model               k4      SE (k4)   Coefficient p-value   R2       R2 pred.   S
Single variant production         HS = k4 · Dtot      3.821   0.278     <0.0005               84.38%   82.99%     5.243
Small-batch variant production    H¯S = k4 · D¯tot    4.257   0.294     <0.0005               97.67%   95.64%     2.127

When considering small batches of products of the same variant, average values of the human stress response (H¯S) and total defects (D¯tot) should be obtained for each variant. Then, the prediction model should be derived using these averages. In the case study, six small batches are considered, one for each product variant (V1–V6), each consisting of six products.
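
A sketch of this averaging and of the no-intercept fit is given below, using batch means computed (and rounded) from the Table 7 trial data; the resulting slope should be close to the k4 reported in Table 11 for small-batch variant production.

```python
import numpy as np

# Per-variant batch means of total defects and human stress response (computed from Table 7, rounded)
D_bar = np.array([0.00, 1.00, 1.50, 2.33, 3.67, 5.50])      # V1..V6 average Dtot
HS_bar = np.array([0.00, 3.18, 7.94, 12.00, 12.00, 24.72])   # V1..V6 average HS

# Least-squares slope of the no-intercept model H_bar_S = k4 * D_bar_tot
k4 = np.sum(D_bar * HS_bar) / np.sum(D_bar ** 2)
print(k4)   # expected close to 4.257 (cf. Table 11)
```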

Referring to the case study data, Fig. 8 illustrates the best-fitting model, i.e., a linear regression model, with the residual plots, and the main output of the regression is reported in Table 11.

Fig. 8
Average human stress response (H¯S) versus average total defects (D¯tot) for small batches of product variant: (a) linear regression model, and (b) residual plots

The HRC-QWAT diagnostic tool (see Fig. 9) uses the model as a reference for prediction and takes into account the associated uncertainty range. Specifically, the two prediction limits (lower prediction limit, LPL; upper prediction limit, UPL), derived from the regression models shown in Figs. 7 and 8, serve as thresholds for identifying critical products and small batches, respectively. Products and small batches are classified as critical in terms of both defects and human stress response when a special source of variation, i.e., a source not inherent to the process, occurs [62]. It should be noted that negative values of LPL are set equal to zero, as negative values are not physically possible. As a result, for some products or batches, the prediction interval may not be symmetrical with respect to the predicted regression value, as shown in Fig. 9.

Fig. 9 HRC-QWAT for (a) single variant production and (b) small-batch variant production
The two prediction limits can be calculated as follows:

LPL = ĤS − t1−α/2,γ · √([SE(Fit)]² + S²)
UPL = ĤS + t1−α/2,γ · √([SE(Fit)]² + S²)
(11)

where ĤS is the predicted value of the regression curve, t1−α/2,γ is the value of the Student's t distribution with γ degrees of freedom (i.e., the number of observations minus 1) and significance level α, SE(Fit) is the standard error of the fit, and S is the standard error of the regression [62].
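As an illustration, Eq. (11) can be evaluated numerically as in the following sketch, which reuses the through-origin fit from the earlier example. The closed-form SE(Fit) expression shown here is an assumption valid for a no-intercept linear model; in practice it should be replaced by the standard error of the fit returned by the regression software actually used.

```python
import numpy as np
from scipy.stats import t

def prediction_limits(x0: float, k: float, s: float, sxx: float, dof: int, alpha: float = 0.05):
    """Prediction limits of Eq. (11) at Dtot = x0 for a through-origin fit (see sketch above)."""
    hs_hat = k * x0                                    # predicted value on the regression line
    se_fit = s * x0 / np.sqrt(sxx)                     # SE(Fit) for a no-intercept linear fit (assumed form)
    half_width = t.ppf(1.0 - alpha / 2.0, dof) * np.sqrt(se_fit ** 2 + s ** 2)
    lpl = max(hs_hat - half_width, 0.0)                # negative LPL is truncated at zero
    upl = hs_hat + half_width
    return lpl, upl
```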

In the use phase, when new single products or small batches of products are produced, the observed values (Dtot, HS) or (D̄tot, H̄S) are compared with the corresponding prediction limits from the HRC-QWAT for single variant or small-batch production, respectively. Accordingly:

  • If the observed (Dtot, HS) or (D̄tot, H̄S) value falls within the prediction range (between LPL and UPL), the product or batch is considered non-critical.

  • If the observed (Dtot, HS) or (D̄tot, H̄S) value is higher than the upper prediction limit (UPL) (area A in Fig. 9) or lower than the lower prediction limit (LPL) (area B in Fig. 9), there is a mismatch between the human stress response and the total defects, and the product or batch is signaled as critical. Specifically, products or batches located in area A of Fig. 9 are reported as critical because the operator's stress response is high compared to the number of total defects detected. Conversely, products or batches lying in area B exhibit abnormal defectiveness compared to the level of human stress response. (A minimal sketch of this decision rule is given below.)
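The decision rule above can be expressed compactly as a small function. This is only a sketch that builds on the prediction-limit helper defined earlier; the function name and the returned labels are illustrative, not part of the original tool.

```python
def classify(d_tot: float, h_s: float, k: float, s: float, sxx: float, dof: int, alpha: float = 0.05) -> str:
    """Compare an observed (Dtot, HS) pair - or batch averages - with the HRC-QWAT prediction band."""
    lpl, upl = prediction_limits(d_tot, k, s, sxx, dof, alpha)
    if h_s > upl:
        return "critical (area A): stress response abnormally high for the observed defects"
    if h_s < lpl:
        return "critical (area B): defectiveness abnormally high for the observed stress response"
    return "non-critical"
```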

Table 12 reports an example of a critical single product and an example of a critical small batch detected using the HRC-QWAT, together with possible root causes.

Table 12 Examples of critical situations detected by the HRC-QWAT

HRC-QWAT                          Observed values                                    Possible root cause
Single variant production         (Dtot, HS) = (2, 35), Area A (cf. Fig. 9(a))       Abnormal stress experienced by the operator
Small-batch variant production    (D̄tot, H̄S) = (6, 10), Area B (cf. Fig. 9(b))       Wrong/faulty components undetected by the operator during the production process

The proposed diagnostic tool has been developed with a dual objective. First, it aims to accurately position products or small batches on the HRC-QWAT, thereby providing a clear picture of how each compares with the others. This information can be valuable for making informed quality control decisions and identifying areas for improvement. Second, the diagnostic tool is designed to detect unusual production scenarios and identify critical out-of-control situations. By continuously monitoring production processes, the tool can identify deviations from established normal operating conditions, allowing corrective action to be taken in a timely manner. In this respect, the diagnostic tool acts as an in-process control mechanism, ensuring that the quality of the overall system (product/process and human) remains consistently high throughout production.

In conclusion, the proposed diagnostic tool represents a significant step forward in quality control and monitoring, providing manufacturers with a powerful means to ensure consistent product quality and to detect and correct quality deviations in real time.

8 Discussion

The novelty of the HRC-QWAT lies in its comprehensive assessment of quality systems, encompassing both technical aspects of production quality and the human factor of worker well-being. While previous studies have focused on individual measurements and indicators in HRC systems, the HRC-QWAT combines multiple dimensions of evaluation to provide a more holistic understanding of the collaborative assembly process. By integrating indicators related to total defects and human stress response, the tool offers a more nuanced evaluation of the performance of HRC systems.

Moreover, the HRC-QWAT's versatility and adaptability contribute to its novelty. It can be applied to both single variant and small-batch production scenarios, accommodating different production environments and collaboration settings. Whether the work is predominantly performed by a robot or in a high-intensity human work environment, the HRC-QWAT assesses production quality and worker stress response, ensuring an optimized collaborative process. The tool's adaptability allows it to be fine-tuned to the unique parameters of various production environments and collaboration scenarios, making it not only a quality and well-being assessment tool but also a strategic tool for comparing and contrasting different collaboration scenarios.

The potential for generalization is another key aspect of the HRC-QWAT's novelty. Although the case study focused on electronic board assembly, the design and methodology of the HRC-QWAT were conceived with a broader application in mind. Its adaptability allows it to be utilized in a wide range of production scenarios, even when a robot performs the majority of the work and the role of the human operator is minimal or focused on labor-intensive tasks. It should be noted, however, that the generalizability of the HRC-QWAT depends on careful adaptation and refinement of the model parameters. This will allow the tool to accurately reflect the interaction dynamics and associated stress responses in different HRC settings. The possibility of extending the use of the HRC-QWAT to more diverse and nuanced collaboration scenarios represents a promising avenue for future research and development in the field of HRC.

9 Conclusions

The aim of the present research was to propose a novel tool, called the HRC-QWAT, which combines two indicators to evaluate and monitor the quality of a production system: the total number of defects generated during the production of product variants, and the stress response of workers. This innovative tool addresses a significant gap in the field of human–robot collaboration assessment, providing a unique approach to evaluating both the production quality and the well-being of human operators. The methodology used a collaborative human–robot assembly system as a case study to demonstrate the feasibility of the HRC-QWAT approach. The methodology consists of two main phases: (1) the realization phase, in which the HRC-QWAT is constructed by collecting historical experimental data and developing a model relating the two performance measures (total defects and human stress response) that represent the overall quality of the system; and (2) the use phase, in which the HRC-QWAT is used as a reference for predicting future products/batches and identifying critical products in terms of defects and human stress response. The diagnostic tool uses the model to compare observed performance measures with corresponding prediction limits and detect abnormal production scenarios.

The HRC-QWAT introduces a novel approach to the evaluation of quality systems in HRC. Unlike previous studies that focused on individual metrics, this tool comprehensively assesses both technical production quality and worker well-being. Its adaptability and versatility make it suitable for single variant or small-batch production and for different environments and collaborative settings. In addition, although in this study the HRC-QWAT was applied to electronic board assembly, its adaptable design allows for broader application, opening doors for future research in the evaluation and development of human–robot collaboration.

A limitation of the proposed approach is the use of a structural complexity model originally designed for manual and fully automated processes. While such a model serves as a good first approximation for collaborative human–robot contexts, since the cobot mainly performs logistical and organizational support tasks, a more refined complexity model will be required for a more accurate evaluation. Another limitation of the study is that the comparison in the HRC-QWAT is based on only two indicators: total defects and human stress response. With these measures, the performance of the cobot is not evaluated directly, although it is implicitly reflected in the total number of defects. Furthermore, additional performance measures, such as workload, are not directly addressed. Recognizing these limitations, it is important to note that the proposed approach is flexible enough to be extended with additional indicators, including cobot performance and workload, as well as process sustainability and economic impact measures.

Future research efforts will aim to overcome (at least some of) the above-mentioned limitations. Particular attention will be paid to refining the complexity model by including factors related to HRC and performing a validation of the proposed approach using different products to quantitatively assess its efficiency. In addition, the study could be extended to include other cobot performance measures, including efficiency metrics (cycle time, throughput), accuracy and reliability metrics, safety metrics, and environmental/economic sustainability indicators, such as equivalent carbon dioxide emissions and life cycle costs.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The data sets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

Nomenclature

C = assembly complexity
S = standard error of the regression
N = total number of product components
R² = coefficient of determination
amin = minimum amplitude of the SCR peaks
amax = maximum amplitude of the SCR peaks
aw = amplitude of the w-th SCR peak
cmax = theoretical maximum value for the connection index
cpr = complexity in achieving a connection between components p and r
epr = (p, r)th entry of the binary adjacency matrix AM of the product: 1 if there is a connection between components p and r, 0 otherwise
hmax = theoretical maximum value for the handling index
hp = handling complexity of component p
k1 = regression coefficient of the model Dtot versus C
k2 = regression coefficient of the model HS versus C
k3 = regression coefficient of the model HS versus C
k4 = regression coefficient of the model HS versus Dtot
t1−α/2,γ = value of the Student's t distribution with γ degrees of freedom and significance level α
C1 = handling complexity
C2 = complexity of connections and liaisons
C3 = topological complexity
EAM = matrix energy of AM
D1 = in-process defects
D2 = offline defects
Dtot = total number of defects
HS = human stress response
NB = number of applicable handling difficulties related to attribute B
NP = total number of SCR peaks
dh_i, i ∈ {A, B, C, D} = handling difficulty of attribute i
dc_j, j ∈ {E, F, G, H, I, J, K} = connection difficulty of attribute j
D̄tot = average value of total defects
H̄S = average value of human stress response
ĤS = predicted HS value of the regression curve
AM = binary adjacency matrix of the product
CI = confidence interval
EDA = electrodermal activity
HRC = human–robot collaboration
HRC-QWAT = human–robot collaboration quality and well-being assessment tool
SCR = skin conductance response
SE(Fit) = standard error of the fit
SE = standard error
PI = prediction interval
LPL = lower prediction limit
UPL = upper prediction limit
δq = singular values of AM

References

1. Falck, A.-C., Örtengren, R., Rosenqvist, M., and Söderberg, R., 2017, "Basic Complexity Criteria and Their Impact on Manual Assembly Quality in Actual Production," Int. J. Ind. Ergon., 58, pp. 117–128.
2. Buckholtz, B., Ragai, I., and Wang, L., 2015, "Cloud Manufacturing: Current Trends and Future Implementations," ASME J. Manuf. Sci. Eng., 137(4), p. 040902.
3. Krüger, J., Lien, T. K., and Verl, A., 2009, "Cooperation of Human and Machines in Assembly Lines," CIRP Ann., 58(2), pp. 628–646.
4. Peshkin, M., and Colgate, J. E., 1999, "Cobots," Ind. Robot An Int. J., 26(5), pp. 335–341.
5. Maddikunta, P. K. R., Pham, Q.-V., Prabadevi, B., Deepa, N., Dev, K., Gadekallu, T. R., Ruby, R., and Liyanage, M., 2021, "Industry 5.0: A Survey on Enabling Technologies and Potential Applications," J. Ind. Inf. Integr., 26, p. 100257.
6. Coronado, E., Kiyokawa, T., Ricardez, G. A. G., Ramirez-Alpizar, I. G., Venture, G., and Yamanobe, N., 2022, "Evaluating Quality in Human-Robot Interaction: A Systematic Search and Classification of Performance and Human-Centered Factors, Measures and Metrics Towards an Industry 5.0," J. Manuf. Syst., 63, pp. 392–410.
7. Ramanujan, D., Bernstein, W. Z., Diaz-Elsayed, N., and Haapala, K. R., 2023, "The Role of Industry 4.0 Technologies in Manufacturing Sustainability Assessment," ASME J. Manuf. Sci. Eng., 145(1), p. 010801.
8. Deissenboeck, F., Juergens, E., Lochmann, K., and Wagner, S., 2009, "Software Quality Models: Purposes, Usage Scenarios and Requirements," Proceedings of the 2009 ICSE Workshop on Software Quality, Vancouver, BC, May 16, IEEE, pp. 9–14.
9. Damacharla, P., Javaid, A. Y., Gallimore, J. J., and Devabhaktuni, V. K., 2018, "Common Metrics to Benchmark Human-Machine Teams (HMT): A Review," IEEE Access, 6, pp. 38637–38655.
10. Breque, M., De Nul, L., and Petridis, A., 2021, "Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry," Publications Office of the European Union, European Commission, Directorate-General for Research and Innovation. https://data.europa.eu/doi/10.2777/308407
11. Leng, J., Sha, W., Wang, B., Zheng, P., Zhuang, C., Liu, Q., Wuest, T., Mourtzis, D., and Wang, L., 2022, "Industry 5.0: Prospect and Retrospect," J. Manuf. Syst., 65, pp. 279–295.
12. Xu, X., Lu, Y., Vogel-Heuser, B., and Wang, L., 2021, "Industry 4.0 and Industry 5.0—Inception, Conception and Perception," J. Manuf. Syst., 61, pp. 530–535.
13. Marvel, J. A., Bagchi, S., Zimmerman, M., and Antonishek, B., 2020, "Towards Effective Interface Designs for Collaborative HRI in Manufacturing: Metrics and Measures," ACM Trans. Hum.-Robot Interact., 9(4), pp. 1–55.
14. Khavas, Z. R., Ahmadzadeh, S. R., and Robinette, P., 2020, "Modeling Trust in Human-Robot Interaction: A Survey," Proceedings of the Social Robotics: 12th International Conference, ICSR 2020, Golden, CO, Nov. 14–18, Springer, pp. 529–541.
15. Venkatesh, V., and Davis, F. D., 2000, "A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies," Manage. Sci., 46(2), pp. 186–204.
16. Young, M. S., Brookhuis, K. A., Wickens, C. D., and Hancock, P. A., 2015, "State of Science: Mental Workload in Ergonomics," Ergonomics, 58(1), pp. 1–17.
17. Heard, J., Harriott, C. E., and Adams, J. A., 2018, "A Survey of Workload Assessment Algorithms," IEEE Trans. Hum.-Mach. Syst., 48(5), pp. 434–451.
18. Tabrez, A., Luebbers, M. B., and Hayes, B., 2020, "A Survey of Mental Modeling Techniques in Human–Robot Teaming," Curr. Robot. Rep., 1(4), pp. 259–267.
19. Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., and Cannon-Bowers, J. A., 2000, "The Influence of Shared Mental Models on Team Process and Performance," J. Appl. Psychol., 85(2), pp. 273–283.
20. Hudlicka, E., 2003, "To Feel or Not to Feel: The Role of Affect in Human–Computer Interaction," Int. J. Hum. Comput. Stud., 59(1–2), pp. 1–32.
21. Zhang, P., 2013, "The Affective Response Model: A Theoretical Framework of Affective Concepts and Their Relationships in the ICT Context," MIS Q., 37(1), pp. 247–274.
22. Naneva, S., Sarda Gou, M., Webb, T. L., and Prescott, T. J., 2020, "A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust Towards Social Robots," Int. J. Soc. Robot., 12(6), pp. 1179–1201.
23. Lorenzini, M., Kim, W., and Ajoudani, A., 2022, "An Online Multi-Index Approach to Human Ergonomics Assessment in the Workplace," IEEE Trans. Hum.-Mach. Syst., 52(5), pp. 812–823.
24. Ajoudani, A., Albrecht, P., Bianchi, M., Cherubini, A., Del Ferraro, S., Fraisse, P., Fritzsche, L., Garabini, M., Ranavolo, A., and Rosen, P. H., 2020, "Smart Collaborative Systems for Enabling Flexible and Ergonomic Work Practices [Industry Activities]," IEEE Robot. Autom. Mag., 27(2), pp. 169–176.
25. Fan, J., Zheng, P., and Lee, C. K. M., 2023, "A Vision-Based Human Digital Twin Modelling Approach for Adaptive Human-Robot Collaboration," ASME J. Manuf. Sci. Eng., 145(12), p. 121002.
26. Verna, E., Puttero, S., Genta, G., and Galetto, M., 2023, "Toward a Concept of Digital Twin for Monitoring Assembly and Disassembly Processes," Qual. Eng.
27. Irfan, B., Ramachandran, A., Spaulding, S., Glas, D. F., Leite, I., and Koay, K. L., 2019, "Personalization in Long-Term Human-Robot Interaction," Proceedings of the 2019 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI), Daegu, South Korea, Mar. 11–14, IEEE, pp. 685–686.
28. Müller, J., 2020, "Enabling Technologies for Industry 5.0—Results of a Workshop with Europe's Technology Leaders," Publications Office, European Commission, Directorate-General for Research and Innovation. https://data.europa.eu/doi/10.2777/082634
29. Hu, Y., Abe, N., Benallegue, M., Yamanobe, N., Venture, G., and Yoshida, E., 2022, "Toward Active Physical Human–Robot Interaction: Quantifying the Human State During Interactions," IEEE Trans. Hum.-Mach. Syst., 52(3), pp. 367–378.
30. Setchi, R., Dehkordi, M. B., and Khan, J. S., 2020, "Explainable Robotics in Human-Robot Interactions," Procedia Comput. Sci., 176, pp. 3057–3066.
31. Anjomshoae, S., Najjar, A., Calvaresi, D., and Främling, K., 2019, "Explainable Agents and Robots: Results From a Systematic Literature Review," Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13–17, International Foundation for Autonomous Agents and Multiagent Systems, pp. 1078–1088.
32. Hoffman, G., 2019, "Evaluating Fluency in Human–Robot Collaboration," IEEE Trans. Hum.-Mach. Syst., 49(3), pp. 209–218.
33. Heard, J., Harriott, C. E., and Adams, J. A., 2017, "A Human Workload Assessment Algorithm for Collaborative Human-Machine Teams," Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, Aug. 28–Sept. 1, IEEE, pp. 366–371.
34. Heard, J., Heald, R., Harriott, C. E., and Adams, J. A., 2019, "A Diagnostic Human Workload Assessment Algorithm for Collaborative and Supervisory Human–Robot Teams," ACM Trans. Hum.-Robot Interact., 8(2), pp. 1–30.
35. Petersen, S. A., Mannhardt, F., Oliveira, M., and Torvatn, H., 2018, "A Framework to Navigate the Privacy Trade-Offs for Human-Centred Manufacturing," Collaborative Networks of Cognitive Systems: 19th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2018, Cardiff, UK, Sept. 17–19, Springer, pp. 85–97.
36. Mannhardt, F., Petersen, S. A., and Oliveira, M. F., 2019, "A Trust and Privacy Framework for Smart Manufacturing Environments," J. Ambient Intell. Smart Environ., 11(3), pp. 201–219.
37. Rahman, S. M. M., 2021, "Cybersecurity Metrics for Human-Robot Collaborative Automotive Manufacturing," 2021 IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Virtual Conference, July 1–2, IEEE, pp. 254–259.
38. Causo, A., Durham, J., Hauser, K., Okada, K., and Rodriguez, A., 2020, Advances on Robotic Item Picking, Springer, New York.
39. Fujita, M., Domae, Y., Noda, A., Garcia Ricardez, G. A., Nagatani, T., Zeng, A., Song, S., Rodriguez, A., Causo, A., Chen, I. M., and Ogasawara, T., 2019, "What Are the Important Technologies for Bin Picking? Technology Analysis of Robots in Competitions Based on a Set of Performance Metrics," Adv. Robot., 34(7–8), pp. 560–574.
40. Zadeh, L., 1962, "From Circuit Theory to System Theory," Proc. IRE, 50(5), pp. 856–865.
41. ElMaraghy, H., Schuh, G., ElMaraghy, W., Piller, F., Schönsleben, P., Tseng, M., and Bernard, A., 2013, "Product Variety Management," CIRP Ann., 62(2), pp. 629–652.
42. Genta, G., Galetto, M., and Franceschini, F., 2018, "Product Complexity and Design of Inspection Strategies for Assembly Manufacturing Processes," Int. J. Prod. Res., 56(11), pp. 4056–4066.
43. Verna, E., Genta, G., Galetto, M., and Franceschini, F., 2022, "Defect Prediction for Assembled Products: A Novel Model Based on the Structural Complexity Paradigm," Int. J. Adv. Manuf. Technol., 120(5–6), pp. 3405–3426.
44. Sinha, K., 2014, "Structural Complexity and Its Implications for Design of Cyber-Physical Systems," PhD dissertation, Engineering Systems Division, Massachusetts Institute of Technology, Cambridge, MA.
45. Alkan, B., and Harrison, R., 2019, "A Virtual Engineering Based Approach to Verify Structural Complexity of Component-Based Automation Systems in Early Design Phase," J. Manuf. Syst., 53, pp. 18–31.
46. Verna, E., Genta, G., Galetto, M., and Franceschini, F., 2022, "Defects-Per-Unit Control Chart for Assembled Products Based on Defect Prediction Models," Int. J. Adv. Manuf. Technol., 119(5–6), pp. 2835–2846.
47. Hückel, E., 1932, "Quantentheoretische Beiträge zum Problem der aromatischen und ungesättigten Verbindungen. III," Zeitschrift für Phys., 76(9–10), pp. 628–648.
48. Chan, V., and Salustri, F. A., 2003, DFA: The Lucas Method, Ryerson University, Toronto.
49. Barbato, G., Barini, E. M., Genta, G., and Levi, R., 2011, "Features and Performance of Some Outlier Detection Methods," J. Appl. Stat., 38(10), pp. 2133–2149.
50. Cameron, A. C., and Trivedi, P. K., 2013, Regression Analysis of Count Data, Cambridge University Press, Cambridge, UK.
51. Myers, R. H., Montgomery, D. C., Vining, G. G., and Robinson, T. J., 2012, Generalized Linear Models: With Applications in Engineering and the Sciences, John Wiley & Sons, Hoboken, NJ.
52. Gervasi, R., Aliev, K., Mastrogiacomo, L., and Franceschini, F., 2022, "User Experience and Physiological Response in Human-Robot Collaboration: A Preliminary Investigation," J. Intell. Robot. Syst., 106(2), p. 36.
53. Zhao, B., Wang, Z., Yu, Z., and Guo, B., 2018, "EmotionSense: Emotion Recognition Based on Wearable Wristband," Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Guangzhou, China, Oct. 8–12, pp. 346–355.
54. Taylor, S., Jaques, N., Chen, W., Fedor, S., Sano, A., and Picard, R., 2015, "Automatic Identification of Artifacts in Electrodermal Activity Data," Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, Aug. 25–29, pp. 1934–1937.
55. Montgomery, D., Runger, G., and Hubele, N., 2010, Engineering Statistics, John Wiley & Sons, Inc., New York.
56. Seber, G. A. F., and Wild, C. J., 1989, Nonlinear Regression, John Wiley & Sons, New York.
57. Bates, D. M., and Watts, D. G., 1988, Nonlinear Regression Analysis and Its Applications, John Wiley & Sons, Inc., Hoboken, NJ.
58. Galetto, M., Verna, E., and Genta, G., 2020, "Accurate Estimation of Prediction Models for Operator-Induced Defects in Assembly Manufacturing Processes," Qual. Eng., 32(4), pp. 595–613.
59. Hasan, S. M., Baqai, A. A., Butt, S. U., and quz Zaman, U. K., 2018, "Product Family Formation Based on Complexity for Assembly Systems," Int. J. Adv. Manuf. Technol., 95(1), pp. 569–585.
60. Lim, K. Y. H., Zheng, P., Chen, C. H., and Huang, L., 2020, "A Digital Twin-Enhanced System for Engineering Product Family Design and Optimization," J. Manuf. Syst., 57, pp. 82–93.
61. Dan, B., and Tseng, M. M., 2007, "Assessing the Inherent Flexibility of Product Families for Meeting Customisation Requirements," Int. J. Manuf. Technol. Manag., 10(2–3), pp. 227–246.
62. Montgomery, D. C., 2019, Introduction to Statistical Quality Control, Wiley Global Education, Hoboken, NJ.
63. Barbato, G., Germak, A., and Genta, G., 2013, Measurements for Decision Making, Società Editrice Esculapio, Bologna.