Design project management is witnessing an increasing need for practitioners to rely on tools that reflect the integrated nature of the social and technical characteristics of design processes, as opposed to considering the two as separate concepts. For practitioners, this integration has the potential value of predicting the future behavior of design processes by allowing them to understand what task to do next, whom to assign a task given the availability of resources, and the levels of knowledge and expertise required. In response to these challenges, this paper contributes to the development of a new process modeling method, called actor-based signposting (ABS), that looks at the early stages of product development processes from the perspective of integrated sociotechnical systems. The objective is to support managers and decision-makers with both typical planning issues, such as scheduling and resource allocation, and less conventional issues relating to the organizational planning of a design project, such as the identification of criticalities, the matching of required skills and expertise, and factors of influence. Ultimately, the aim is to support organizations in being more adaptive in responding to change and uncertainty. Two case studies in the automotive and aerospace industries, with different properties and modeling objectives, were selected to demonstrate the utility of the proposed method. Experimental analysis of these cases led to a range of insights regarding the future of modeling for academia, as well as the decision-making capabilities of managers and practitioners.
Engineering design projects, even in their simplest or routine form, are complex to manage due to their integrated and multidisciplinary nature, which involves the properties of both complex physical-technical systems and networks of interdependent actors [1,2]. In addition, they comprise a set of activities that requires collaboration between cross-functional teams, which often adds to the complexity. Consequently, each design project might face unique problems that lead to different process behavior, even when compared with similar projects in the past. This element of uniqueness makes the management of design projects highly challenging, and it becomes vital to understand how organizational factors affect the ways in which work is done and technical systems are used.
In reality, project managers make decisions based on what they see or perceive, and since they cannot see the entire scope of the design project to predict future behavior, they must rely on models to support planning. Additional complexity is linked to the fact that no single model can address all perspectives. Therefore, the development of dynamic models, which can reflect the complexity of design projects and are “richer” in providing useful information, is of value to project managers. Multiple classes of models have been developed to address this need, each one focusing on a particular aspect of design, such as activities, agents, or decisions, and for which there are several comprehensive reviews in the literature [4–6].
A general criticism facing the majority of existing models is that they have primarily been focused on supporting the selection of tasks, rather than providing detailed information on how to perform the tasks, i.e., by whom and how [4,7]. Another criticism is that the social (organizational) and technical properties of projects are typically embedded in the models separately and not in an integrated framework: evidence in the literature shows the omission of individual resources’ properties (such as expertise and preferences) when modeling activity duration [8,9], or conversely a focus on organizational-behavioral properties such as communication and interactions when defining design activities [10,11]. As a result, there is a gap between models that focus on organizational properties and those that focus on technical performance; consequently, their mutual influences are likely to be neglected, particularly when these variables change simultaneously.
This interdisciplinary modeling is particularly critical in the early design phases, where information is imprecise or indeterminate (e.g., product requirements are not fully defined) and can affect the execution of design tasks in different ways. In such situations, finding the right individual or choice of agent for each design task can be challenging, as it depends on a number of factors, such as the composition of the team, the availability of the agent, knowledge of the task (availability of information), and feasibility in terms of manpower cost (i.e., the assumption is that more experienced agents usually incur higher overhead costs to the system than junior ones).
To address these challenges, this paper focuses on the modeling and simulation of the early stages of product development processes (PDPs)—typically referred to as design processes (DPs)—with an integrated sociotechnical view. Sociotechnical process modeling, in the context of this paper, aims to find the right choice of agent for each design task, considering the availability of knowledge and the level of expertise. The paper is concerned with the development of a new process modeling method described as actor-based signposting (ABS). It represents an alternative way of looking at engineering DPs that combines the activity-based and agent-based concepts of process modeling. The specific objectives are twofold: (1) understanding how the availability of knowledge and the level of expertise affect the outcome of design tasks in terms of duration, cost, and performance quality and (2) supporting managers and decision-makers with insights relating to both technical planning (scheduling and resource allocation) and organizational planning (identifying critical tasks and actors based on their performance quality, matching required skills and expertise, and task-actor fit).
In doing this, the existing Signposting approach is extended as a dynamic task-based model to consider the allocation of agents to tasks. It is postulated that, when modeling a sequence of tasks to be performed, taking the role of actors (agents) into consideration opens up additional challenges—and presumably new opportunities—to the modeling, as the overall process behavior will now depend on the actors’ behavior (e.g., interactions with other actors), on their performance (e.g., level of expertise), and on the mechanism by which actors are assigned to individual tasks (e.g., task execution strategy).
Section 2 outlines the theoretical foundations of sociotechnical complexity in engineering design, the need for a new process modeling method, and the research methodology employed. Section 3 presents the previous versions of the Signposting method upon which the proposed model is built. The proposed ABS method is explained in Sec. 4, and Sec. 5 presents two applications of the model in practice. Finally, Sec. 6 concludes the paper with future research opportunities.
It is evident that combining the social and technical properties of DPs in models can provide a more realistic representation of the reality of design projects, both in terms of addressing their mutual dependencies and delivering value to the design stakeholders [2,12,13]. This section argues that understanding the way that organizational factors, such as individuals’ expertise, affect the performance of design tasks requires a detailed model of the multiple types of dependencies between individuals and tasks. While this can provide managers with rich information on the content of the work, it can also add to the complexity of the model and the modeling process for such projects in terms of the number of elements that need to be modeled.
Therefore, a particular challenge facing the sociotechnical modeling of DPs is to make an appropriate compromise between the complexity of the model and its subsequent utility. In the following, we first present aspects of sociotechnical complexity in engineering design and continue by discussing the relevant literature to identify the needs for developing a new modeling method. The discussion will be followed by a description of the research methodology.
Sociotechnical Complexity in Engineering Design.
This section discusses sociotechnical design complexity. In the broader scope of systems engineering, the term complexity can have multiple facets, yet in the context of engineering design it has typically been associated with the challenges stemming from uncertainty, iteration, and the multidisciplinary and dynamic behavior of DPs [4,5,15–17]. The sociotechnical aspect of design complexity is based on the premise that actors involved in the DP interact and negotiate a solution that can be considered satisfactory. In this context, “satisfactory” can be considered at two levels: at the task level, where it relates to the degree to which the involved actors perceive the outcome of their job (successful completion of a given task), and at the project level, where it relates to the degree to which the same outcome complies with the constraints cast by the rest of the DP (impact of local success on the project behavior).
Viewing the DP as a network of tasks, similar to what is shown in Fig. 1, the sociotechnical aspect of design complexity holds that the properties of a design task, such as its duration, cost, and performance quality, are affected not only by the technical properties of the product being designed (such as functions, computational resources, and technology options) but also by the way that cross-functional design teams act and interact (design organization), directly or indirectly influencing the performance of design tasks and the whole project [14,18].
For example, successful completion of task D in Fig. 1 is a function of the performance of the involved actor, who must have sufficient knowledge and expertise relating to the task, be available at the scheduled time, have access to the nonhuman (e.g., computational) resources, and be able to perform the given task in a feasible time (ΔT) and at a good level of quality, satisfying project targets (1 − Pr). Reaching a compromise in such situations is not always straightforward. Consider the situation in which task D can be accomplished by two actors (e.g., designers), a junior and a senior. Compared with the junior, the senior is more knowledgeable on the task and more confident of delivering a high-quality task within the specified time, yet at double the cost. However, because of his or her role in the organization, the senior actor is working on multiple projects simultaneously and is not readily available.
At the project level, finding the right fit of actors to tasks is more challenging and depends on a number of factors, such as the targets and objectives, criticality of the tasks, the project phase, expected quality/performance of the project, and composition of the team. An additional challenge, relating to the multidisciplinary nature of such projects, is that many of the tasks are reliant on effective communication, collaboration, and coordination mechanisms among cross-functional teams, whose performance can affect each other, often without them being aware of it.
Another key source of design complexity is iteration. From a sociotechnical point of view, design iteration carries a broader range of meanings, in the sense that it is not only about progression and correction of the content of the work toward successful completion of the design project but also about coordination of the people involved in the process, in terms of how they act and interact and how these issues can influence project success. Based on the classification of iteration by Wynn and Eckert, and by uncovering the role of actors involved in the DP, this paper is concerned with the following aspects of iteration: (1) convergence: the progressive increase in the quality level of project parameters to satisfy predefined targets; (2) incremental completion: the planned repetition of a task to move toward a desired goal; (3) rework: the redoing of a task in a similar way (compared with its original) because of imperfect inputs or outputs (e.g., failed quality or overrun duration); and (4) comparison: the repetition of a task with multiple options (of actors or levels of expertise) to decide between them.
Overall, the integrated sociotechnical perspective can be more pragmatic, in the sense that it is possible to get a better understanding of the reality of projects, which is of value for research and practice. Simultaneously, it can be more problematic, since the interplay between sociotechnical properties (with mutual and overlapping impacts) raises further challenges in modeling of DPs in order to provide guidance on the content of the work (who should do what) and hints on how to carry this out.
The Need for a New Method.
The term sociotechnical was first introduced by the London Tavistock Institute in the early 1950s to oppose Taylorism, suggesting that technical systems need to respect social needs. From this perspective, a sociotechnical system can be defined as a social system sitting on a technical base. Bringing the sociotechnical principle to today’s engineering design processes reflects the importance of making the right choice of decision at the right time by the most appropriate actor; in other words, the right fit of actors to tasks. For example, a design solution that may be appropriate at one time may turn out to be inappropriate when new information is available.
To be able to cope with these challenges, managers commonly use a variety of process models, since a single model may not contain the right information to best support multiple purposes. Therefore, a “richer” model of DPs that is able to provide more relevant information on the reality of design projects is needed by both research and industry. To this end, more recent models of DPs have attempted to combine two or more typical models in a single framework in order to gain the most benefit from them in supporting multiple project purposes.
For example, Karniel and Reich focus on understanding the impact of dynamic changes affecting multiple product-process levels on process planning and demonstrate that the combination of the design structure matrix (DSM) with Petri nets can provide a better representation of the capabilities of PDPs. Kasperek argues that structural modeling tools such as the DSM and its extension, the multiple domain matrix (MDM), show only a static view of the system and therefore uses MDMs as a basis to derive system dynamics models that capture the dynamic behavior of engineering DPs. Further examples of the literature on design process models can be found in reviews such as Refs. [4–6,23]. However, in spite of the capability of these models to provide rich information to support DP planning, many of them do not consider organization-related factors.
While sociotechnical DP modeling can be considered at different levels of abstraction, a common assumption is that the way people (e.g., designers) behave and interact can affect both the organizational structure and the technical decisions. At a more abstract level, for example, De Bruijn and Herder looked at the similarities of the social and technical perspectives to investigate their multiple forms of combination. The authors concluded that full integration is not always the best way and that the two perspectives should, therefore, be used alongside each other. Ouertani et al. highlighted the impact of conflict as a critical element of collaborative design and developed DEPNET, an integrated data management tool that supports designers with information identifying the negotiators, based on previous knowledge of the product. Parraguez et al. focused on the DP interface level and developed a process improvement approach based on dynamic monitoring and analysis of project progress. The authors utilized the concepts of centrality and clustering from network science to quantify the information dependencies between design activities and the design organization. While these models have looked at the interfaces between the product, process, and organization domains, highlighting the mechanism of information flows between them, they are not detailed enough to provide rigorous information on the content of work, i.e., the way that individuals’ expertise affects the duration and performance quality of design tasks.
At a more detailed level, previous studies have combined task dependencies with models of agent behavior to gain a deeper insight into the dependencies between organizational dynamics and project performance. For example, Danesh and Jin combined the concepts of decision networks and agent-based design to support collaborative decision-making by facilitating the negotiation process in downstream concurrent activities. Crowder et al. proposed a multi-agent system to simulate team working at the individual, task, and team levels, aiming to support multidisciplinary organizational decision-making. The “virtual design team” (VDT) was another simulation-based method, developed and applied in a number of case studies to analyze the impact of organization structure on team performance. Further relevant studies can be found in Refs. [28–32].
In summary, the review of sociotechnical DP modeling presented here highlights the need for further research to address several limitations of existing models: (1) the multiple types of dependencies between tasks and actors are not always explicitly defined; (2) the majority of existing models assume that the organization structure is static and that all tasks are assigned to actors ex ante; (3) when iteration occurs, it is assumed to be accomplished in the same way as the original task (using the same actor at the same level of information); (4) the mechanisms of communication and interaction among cross-functional teams, and their impact on actors’ performance, are not always explicitly defined; and (5) there is no explicit understanding of how to measure the performance of actors with respect to the outcomes of a given task.
This paper contributes a process modeling method to address some of these limitations, specifically the following: dependencies between tasks and actors, dynamic assignment of actors to tasks, a flexible iteration policy, and performance measurement of actors based on the number of tasks successfully completed. The authors extend the Signposting approach to additionally consider the allocation of agents’ properties to design tasks. Signposting was chosen for its flexibility in capturing multiple sources of uncertainty at the same time, its capability to model multiple types of dependencies between design tasks, the richness of its output information, and its industry-independent structure [14, p. 93]. The next section explains the underlying methodology behind this research.
The Research Method.
To address the need for developing a method for sociotechnical modeling of DPs, this research follows a systematic procedure, which is the design research methodology (DRM), proposed by Blessing and Chakrabarti. The DRM builds a detailed elaboration of research toward a clear objective, including methods, deliverables, and potential iterations, which is quite beneficial for the development of support methodologies. It consists of the following four main phases:
Research Clarifications. This research started with a systematic review of the engineering design literature with a focus on the capability of existing models in supporting management needs [14, Sec. 2]. In doing this, the authors examined the characteristics of the existing models, with a focus on activity, process, and management, and their ability to deal with aspects of design complexity. This resulted in the identification of the specific requirements that can influence performance of design tasks, such as activity duration, information quality, concurrency, resource allocation, rework, risk, and dependency within and across product, process, and organization domains.
Descriptive Study I. The authors then expanded the scope of the literature review to (1) obtain an overview of the most frequently used approaches for dealing with sociotechnical project complexity in systems engineering and (2) get a deeper understanding of aspects of sociotechnical uncertainty levels in engineering design, which resulted in a classification framework. At the same time, the authors described the fundamental elements of DP management and their interrelations in the context of an integrated reference model and defined a range of analyses to conceptually improve process understanding and planning [14, Sec. 3.5]. Together, the first two phases resulted in an expanded definition of the requirements (Sec. 2.1), which were continuously refined during the literature review, and a description of the context against which the results may be evaluated (Sec. 2.2).
Prescriptive Study. By the end of the previous phases, the authors had focused on the way that design tasks are carried out (by whom and how) and discussed that “understanding the impact of design organization on design activities (who is doing what) requires the development of a new method” (Sec. 2.2). The conceptual descriptions of the modeling requirements (method elements) and their dependencies were then converted into detailed definitions and quantifiable functions (Table 2) that can be simulated and implemented. Subsequently, the Signposting method was selected as the baseline for the development of ABS, capturing the specified requirements (Sec. 4). In parallel to the development of the ABS method, a survey of discrete-event simulations was performed to identify the most appropriate simulation engine [14, Sec. 6.1].
Descriptive Study II. ABS was developed through a process of iterative refinement, guided from the early stages of model development by frequent interactions with practitioners when applying the model to the case studies (Sec. 5) and by extensive discussions with experts in the field. In addition, several small-scale examples from the literature were used during the method development to continuously evaluate the simulation model before real-case applications. Finally, the proposed ABS model was validated against the success (performance) criteria (listed in Table 2) by application to two case studies in the automotive (Sec. 5.2) and aerospace (Sec. 5.3) industries, which have different objectives and specific requirements. A range of workshops and discussions were arranged in each case to support data collection, sharing of primary results, gathering of feedback, and sharing of advanced results including sensitivity analyses (Sec. 5.4). Further information on the verification method is provided with the case studies in Sec. 5. By the end of these applications, the practitioners had confirmed the plausibility and practical utility of the proposed approach against the initial modeling targets.
Evolution of Signposting Models
Structurally, Signposting is a dynamic task-based model of a DP, aiming to generate a sequence of tasks based on their information input and output properties. This is achieved using the concept of current confidence for a set of tasks, as an indicator of the quality parameter state, in the sense that the output parameters of a precedent task are used as the inputs for the next task.
The original Signposting was developed in response to an industry challenge and was targeted at providing guidance on “what task to do next.” This was done by color-coding tasks based on four contextual levels of confidence: zero, low, medium, and high. Due to the potential confusion in choosing tasks at any point of the process, Melo proposed a Markov chain analysis, aiming to optimize the whole process route instead of only the next task.
In the model developed by O’Donovan, several developments were added to the original Signposting, mainly in terms of the possibility of modeling multiple-class resource constraints and in-process learning through parameter evolution. In addition, modeling non-Markovian processes was possible by dedicating numerical values to the parameters, thus enabling the model to consider all types of real-life parameters. Connectivity and parallelism were the focus of the model proposed by Flanagan. The core of this model was concerned with project planning and representation in general, by investigating the impact of multiple sources of uncertainty, process properties such as scale and connectivity, and the product-process link.
To increase the practicality of Signposting in modeling real-life projects, Wynn et al. combined the concepts of DSM with Signposting and developed a tool called the Applied Signposting Model (ASM). This model was based on a hierarchical structure of tasks and parameters as a support to model representation, enriched by a user-friendly platform. Due to the dependency-driven nature of ASM (as a task-precedence network) compared with the original Signposting as an information-driven model, the concept of iteration was largely expanded in ASM.
Based on the ASM, Shapiro recently concentrated on the properties of a design task, in particular design confidence and iteration. The goal was to support process planning and execution through identifying and prioritizing changes in task properties. In another recent study, Chen et al. combined ASM with Bayesian networks to investigate the impact of different types of resources on project outcomes and support agile resource management. The next section describes the proposed ABS version of the Signposting system.
The Proposed Method
This section presents the configuration and formulation of the proposed method, termed actor-based Signposting. As clarified in Sec. 2, the fundamental objective behind ABS is to understand the interplay between the design organization (e.g., a designer’s expertise level) and design tasks, in terms of total process duration, cost, and performance quality, and to highlight the role of the individuals who perform the tasks. It is based on the premise that individuals involved in a DP with a similar set of skills or roles within the organization have the potential to perform the same set of tasks. However, depending on their degree of competence and experience, they might have different levels of knowledge relating to a task.
The proposed method presents an alternative way to identify the most appropriate choice of actor for each task in order to satisfy the required quality parameters. Therefore, the model is expected to provide insights on the optimal allocation of actors to tasks (task-actor fits) to support the selection of who should do the next task and how, rather than just what task to do next. This section starts with an overview of the proposed method and continues with a description of the operation of the model.
Overview of the ABS Model.
An overview of the ABS method is presented in Fig. 2. Structurally, it consists of four iterative steps:
Modeling requirements (Fig. 2(a)): The proposed model starts with identifying the business case, i.e., What is the purpose of modeling? What are the main properties of the DP? How much detail in modeling can properly reflect the process behavior? Depending on the case, the user requires some basic information on the product, its associated design process, the design organization, and the dependencies between them. Information about the product and process will later be used to ensure successful completion of design tasks, while organization-related information (e.g., expertise level, influences) will be used to compare and choose the best actor for each task. All the required information is converted into the inputs and variables of the model. Modeling inputs (listed in Table 1) are those elements whose values are fixed during the simulation and should be added directly by the user. Variables (listed in Table 2) are those elements whose values change during the simulation and are mainly used to measure the performance of the simulated process.
Modeling dependencies (Fig. 2(b)): The result of a primary study of the business case is information about the modeling requirements. This includes information on the tasks to be done to achieve the purpose, the quality parameters associated with the tasks, and a list of actors involved in the DP. The next step is to understand the dependencies between these requirements. The dependency between tasks (binary task precedence matrix) is an essential input to identify the next alternative task in the process, where a 1 in the matrix (Fig. 2(b)) means precedence and a 0 otherwise. Upon completion of a precedent task, the model searches for the next alternative: this may be a new task in the precedence matrix or a rework task waiting for its associated actors to become available. Accordingly, the model terminates upon completion of all tasks, which is why it is called task-forward. Understanding the actors who influence task properties (binary task-actor association matrix) is an essential input to identify the potential choice of actors for each task, where a 1 means the actor can potentially perform the task and a 0 otherwise. Finally, understanding the parameters associated with each task (numerical task-parameter association matrix) is essential to ensure completion of the task at a desired level of quality. This matrix represents the minimum level of (quality) confidence in the parameters associated with the tasks: the numbers 1, 2, and 3 denote low, medium, and high levels of confidence, respectively. In addition, ABS views the DP as a network of tasks, characterized by the associated parameters, which are performed by a group of actors at different levels of expertise (as the main organizational parameter). Therefore, there is no direct dependency between actors and task-related parameters (Fig. 2(b)). A further input to the model relates to the influences between actors.
This information is helpful in comparing multiple choices of actor for a given task, when they represent the same level of confidence in performing the task (Sec. 4.5).
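The three dependency inputs described above can be sketched as plain matrices, and the task-forward search for the next available task then reduces to a predecessor check. The following is a minimal illustrative sketch; the 4-task, 3-actor, 2-parameter matrices and the function name `ready_tasks` are hypothetical, not taken from the case studies.

```python
# Illustrative inputs (all values hypothetical).
# precedence[i][j] = 1 means task i must be completed before task j.
precedence = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

# task_actor[i][a] = 1 means actor a can potentially perform task i.
task_actor = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]

# task_param[i][p] = minimum confidence level required on parameter p for
# task i (1 = low, 2 = medium, 3 = high; 0 = parameter not associated).
task_param = [
    [2, 0],
    [1, 3],
    [2, 2],
    [0, 3],
]

def ready_tasks(precedence, completed):
    """Tasks whose predecessors are all completed (the task-forward search)."""
    n = len(precedence)
    return [j for j in range(n)
            if j not in completed
            and all(i in completed
                    for i in range(n) if precedence[i][j] == 1)]

print(ready_tasks(precedence, {0}))  # after task 0, tasks 1 and 2 open up
```

With all tasks completed, `ready_tasks` returns an empty list, which corresponds to the termination condition of the task-forward model.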
Modeling tasks (Fig. 2(c)): Given the input information, the rest of the model is concerned with preprocessing (finding the precedent task), processing (assigning the right choice of actor based on the chosen execution strategy in Sec. 4.5), and postprocessing (checking the quality of the output against the minimum quality levels and finding the right iteration policy). Detailed information on the functionality of tasks in ABS is presented in Sec. 4.4. The goal of modeling tasks in ABS is to ensure successful completion of all tasks while maximizing the quality of outputs with respect to the minimum quality levels and minimizing the waiting time for alternative actors, which results in optimal resource allocation.
Modeling iterations (Fig. 2(d)): In ABS, iteration is assumed to occur at the task level (within a task); hence, the model does not allow an iteration loop that encompasses multiple tasks. Task iteration in ABS has two triggers: when the output confidence level of a task (output state) cannot satisfy the minimum confidence level (input state), or when iteration is driven by the likelihood of success of the task. When iteration occurs, the model iterates until the target output of the task is achieved and then continues with the next precedent task. In such cases, depending on the availability of actors, two iteration policies are embedded into the model: rerunning the task using the same actor or using a different choice of actor.
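The two within-task iteration policies can be sketched as a retry loop over a single task. This is an illustrative sketch only: the `execute` function is a toy stand-in for the task model (it simply makes more expert actors more likely to reach the target confidence), and all names and probabilities are assumptions, not the ABS formulation.

```python
import random

def execute(task, actor, rng):
    """Toy stand-in for task execution: more expert actors are more likely
    to reach the target output confidence (1 = low, 2 = medium, 3 = high)."""
    p_success = min(0.95, 0.4 + 0.15 * actor["expertise"])
    return task["min_conf"] if rng.random() < p_success else task["min_conf"] - 1

def run_task(task, actors, policy="same", max_attempts=50, seed=0):
    """Iterate within a task until the output confidence satisfies the
    minimum input state. Policy 'same' reruns with the original actor;
    'switch' reruns with the next available alternative actor."""
    rng = random.Random(seed)
    actor = actors[0]
    for attempt in range(1, max_attempts + 1):
        if execute(task, actor, rng) >= task["min_conf"]:
            return actor["name"], attempt   # task completed on this attempt
        if policy == "switch":
            actor = actors[attempt % len(actors)]
    return None, max_attempts               # task never satisfied its target

task = {"name": "D", "min_conf": 3}
actors = [{"name": "junior", "expertise": 1},
          {"name": "senior", "expertise": 3}]
name, attempts = run_task(task, actors, policy="switch")
```

The loop never spans more than one task, mirroring the ABS assumption that iteration is confined to the task level.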
Mechanism of the ABS Model.
Technically, the core elements of ABS are parameters, tasks, and actors. Figure 3 shows the relationships between these elements in ABS. All the requirements that affect the quality of performing tasks are captured in the form of parameters within the model. The product and process parameters represent the performance quality of tasks during execution, while organization-related parameters (e.g., expertise level) reflect the performance of the design team, improving the actor-task fit. The model starts by identifying the precedent task among the pool of tasks, i.e., all tasks that should be performed to complete the design project (preprocessing). As part of the task model, the actor model then helps find the right choice of actor, based on the preadjusted task execution strategy (processing). Eventually, task-related parameters ensure successful completion of the task with respect to the minimum quality requirements.
Rooted in Signposting, ABS is a parameter-driven model, in that successful execution of a task depends on satisfying the minimum level of quality on the system parameters. Parameters in this sense can be any kind of requirement, attribute, or function related to the product, the design process, or the design organization (Fig. 2(a)). Quality levels in Signposting are represented by the concept of confidence levels. Therefore, confidence is an abstraction of quality parameters that may be described qualitatively using parameter quality levels (represented as numerical ordinal scales) or quantitatively using process variables.
Since the actual value of the parameter is not used in Signposting, the quality levels are defined as an abstract representation of the quality or maturity of the parameter. Therefore, their exact meaning varies with the parameter type, and there is no fixed limit on the number of quality levels. Confidence levels are a translation of quality levels into numerical values, so there is one confidence level associated with each quality level in the model.
In ABS, confidence levels are used to reflect the properties of design tasks (e.g., geometry) and actors (e.g., expertise level, influence level) and also to reflect the progress of work being done in terms of the number of completed tasks. This means that a task cannot be completed until a certain actor with the necessary skills is assigned in such a way as to satisfy the minimum level of confidence required for the task parameters. Accordingly, if a task is associated with three different parameters, all of them must be satisfied to ensure the successful completion of the task.
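This all-parameters condition can be sketched as follows (a minimal illustration; the parameter names and levels are hypothetical, not taken from the case studies):

```python
def task_complete(output_confidence, min_confidence):
    """Return True only if every associated parameter meets its
    minimum confidence level."""
    return all(output_confidence[p] >= min_confidence[p]
               for p in min_confidence)

# Example: a task with three parameters, one falling short.
min_levels = {"geometry": 2, "expertise": 2, "influence": 1}
outputs = {"geometry": 3, "expertise": 1, "influence": 2}
task_complete(outputs, min_levels)  # False: "expertise" is below its minimum
```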
ABS is a task-forward model, in that the progress of the model depends on the number of tasks being successfully executed. Properties of the tasks (such as the total duration to complete all the tasks) are also identified as key criteria against which the performance of the design project may be evaluated. Representation of tasks in Signposting is typically based on the input and output states for the associated parameters. They represent the minimum level of confidence that is required for the task to be executed (inputs) and the new state of the parameters indicating new (or unchanged) confidence levels after execution of the task (outputs). In ABS, representation of a task in the simulation model can be considered at three stages: preprocessing, processing, and postprocessing.
When an alternate task being executed fails, regardless of the reason, it goes back to the preprocessing stage to check the feasibility of resources and the waiting time. In ABS, the waiting time for a rework task (WTRn) follows the same format (a triangular distribution function) as the waiting time for the original task (WTOn). However, to reflect the impact of information evolvement and learning during the process, WTRn is expected to be less than WTOn. The value of WTRn typically varies depending on the task (n), as in the case of the aircraft engine design (Sec. 5.3). However, where this information was not available, it was set to 80% of WTOn by default, as in the case of the engine oil pipe (Sec. 5.2).
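A minimal sketch of this sampling scheme, using Python's standard `random.triangular` in place of the simulation package's distribution (the 80% factor is the default mentioned above; the limits below are invented):

```python
import random

def waiting_time(low, mode, high, rework=False, rework_factor=0.8):
    """Sample a task waiting time from a triangular distribution.

    For a rework task (WTRn), the sampled waiting time is scaled down
    (default 80% of WTOn) to reflect information evolvement and
    learning during the process."""
    # note: in random.triangular the mode is the third argument
    wt = random.triangular(low, high, mode)
    return wt * rework_factor if rework else wt

waiting_time(2, 4, 8)                # one WTOn sample
waiting_time(2, 4, 8, rework=True)   # WTRn: 80% of a fresh WTOn sample
```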
When the alternate task is ready for execution, the next step is to identify the choice of actor (to answer who should do that) and the task execution strategy (to answer how to do that). At this step, it is assumed that actors with different levels of knowledge on a task commonly perform it differently in terms of duration and performance quality level. Therefore, an underlying model of the task, referred to as the Actor Model, was developed to evaluate multiple choices of actors for a given task and assign the best choice to the task. The Actor Model acts as a task performance meta-model, in the sense that the outcome of a task (in terms of duration and performance quality) depends on the choice of actor and his or her expertise level.
Understanding the best way of doing a task (best execution strategy) is not always straightforward, particularly when the information regarding actors is indeterminate or ill defined. Predicting the best task execution scenario among all possible options can be very challenging, even when the organization structure is assumed to be static. One way to cope with this challenge is to compare all possible scenarios, where the total number of simulation scenarios depends on how well the task and actor parameters are defined and also on the execution strategy.
However, depending on the context, all possible scenarios might not always seem relevant to the managers. In projects such as in the case of aircraft engine design used in this study (Sec. 5.3) with a well-established DP, most of the tasks are handled by using a single actor. In such cases, the challenge facing managers is to identify the minimum level of actor’s expertise that can satisfy the quality requirements of a task. On the other hand, in more flexible DPs, such as designing a completely new product, there might be multiple choices of actors for doing the same job. To enhance usability of the model in handling multiple situations (being industry independent), a flexible actor selection policy is proposed in ABS, called the “task execution strategy,” in order to control the number of simulation scenarios in the model. A description of different task execution strategies is presented in Sec. 4.5, with reference to the Actor Model.
The postprocessing stage is essentially concerned with checking satisfactory completion of the whole task, evaluating the performance of the ABS model, and making reports. As a result of processing tasks by different choices of actors (or by an actor at different levels of expertise), there will be a range of output states at different quality (confidence) levels. In generating output states (low, medium, and high), it was assumed that actors with a higher level of expertise were more likely to perform the task at a higher level of quality. For example, the probability of performing a task at (low, medium, and high) levels for a senior-level actor is set to (10%, 30%, and 60%). This probability for a junior-level and a novice-level actor is equal to (20%, 60%, and 20%) and (60%, 30%, and 10%), respectively.
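Generating an output state then reduces to a weighted draw over the three quality levels, sketched below with the probability tables quoted above (a simplified illustration of the model logic):

```python
import random

# Probabilities from the text: P(low, medium, high) per expertise level.
QUALITY_PROBS = {
    "senior": (0.10, 0.30, 0.60),
    "junior": (0.20, 0.60, 0.20),
    "novice": (0.60, 0.30, 0.10),
}

def sample_output_quality(expertise, rng=random):
    """Draw the output quality level for one task execution."""
    return rng.choices(["low", "medium", "high"],
                       weights=QUALITY_PROBS[expertise], k=1)[0]
```

Over many draws, a senior-level actor delivers "high" outputs roughly six times as often as "low" ones, mirroring the assumed asymmetry between expertise levels.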
If the task fails, the simulation model reruns it based on the adjusted iteration policy (Fig. 2(d)). Note that ABS offers two iteration policies: using the same choice of actor at the same level of expertise, or using a different choice (or the same actor at a different level of expertise). As with the task execution strategy, not every option is applicable in every context. The rationale behind this flexible iteration policy is that, in the early phases of a design project, design teams interact to negotiate a solution. When an actor fails in a task (e.g., a novice designer), there might be further interactions and negotiations to figure out the source of failure, thus helping to perform the task with a higher level of knowledge (e.g., information).
This is followed by assigning the same actor with more expertise, assuming that information has evolved or the actor has learned during the process, or by finding another choice of actor (e.g., a senior designer) from the design team. Adopting a flexible iteration policy (including learning during the process) is a means to capture the indirect impact of interactions and negotiations on task execution. Regardless of the iteration policy, the impact of information evolvement or learning during the process is incorporated into the proposed model by means of a specified percentage of improvement in the duration and cost of the reworked task when it is assigned to the same choice of actor. In the context of this work, the percentage comes from experts (during the workshops) who have sufficient knowledge about the nature of the design tasks.
By completing each task in the process successfully, the model checks the current status of the project in terms of the number of completed tasks (to determine the remaining tasks) and the project duration and cost (to determine the validity of the project). If all the tasks are completed successfully, or the project deadline or budget is overrun, the simulation model terminates. Otherwise, the model goes back to the preprocessing stage to identify the next alternate task, referred to as the "trajectory of the next task" in Fig. 2(d). In such situations, the value of the variable TSi in the precedence matrix is changed from 0 to 1, and a signal is sent back to the preprocessing stage (using a separate variable) to search for the next alternate task.
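The search for the next alternate task can be sketched as a simple precedence check (a minimal illustration; here a `completed` set plays the role of the TSi flags in the precedence matrix, and the task ids are invented):

```python
def next_alternate_tasks(precedence, completed):
    """Find tasks whose predecessors are all complete (TSi = 1) and
    that have not been completed themselves.

    `precedence[n]` lists the predecessor task ids of task n;
    `completed` holds the tasks whose TSi has flipped from 0 to 1."""
    return [n for n, preds in precedence.items()
            if n not in completed and all(p in completed for p in preds)]

# Example: task "C" becomes eligible only after "A" and "B" finish.
prec = {"A": [], "B": ["A"], "C": ["A", "B"]}
next_alternate_tasks(prec, completed={"A"})        # ["B"]
next_alternate_tasks(prec, completed={"A", "B"})   # ["C"]
```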
In addition, by completing each task in the simulation model, the performance variables of the design project are updated. In defining the list of performance variables, there is a concern about the value of information that the model can provide to managers, in particular, what sort of information should be included or disregarded in the model to support managers in making specific decisions. Nevertheless, the project performance was considered at two levels (Sec. 2.1): the individual task level and the overall project level. These are shown in Table 2.
According to the table, the project-level criteria include information on the progress of the design project, the total duration, cost, amount of rework, and increment (in the quality of parameters) to perform the jobs, as well as resource utilization in terms of the number of completed tasks by a specific actor (e.g., novice designer). At the individual task level, there are similar performance criteria relating to the duration (waiting time and processing time), cost, rework, and increment in the quality of a task. These performance criteria consequently give insights on sequencing, scheduling, and resource utilization of the project (conventional process planning issues), and also further information relating to the identification of the critical tasks and actors, multiple combinations of actors’ expertise with tasks and their impact, and the impact of using different strategies for task execution (organizational planning issues).
ABS is an actor-based model, in that permission for task execution is subject to finding an appropriate actor. Previously, the term actor in Signposting, and in a broader sense in activity models, has referred to the audience (i.e., user) of the model or to a resource in task mapping. From this perspective, the way that actors are assigned to tasks and the result of their influence in altering design decisions are missing. Some attempts, such as the recent study of Chen et al., used Bayesian theory to understand how different resource properties can affect design project performance, and the execution of tasks in particular. However, in that study, similar to VDT, the design organization (e.g., choice of actor, expertise levels) is preassigned to the tasks, and therefore its performance cannot affect the quality of the project.
The core functionality of the actor model in ABS is to find and assign the right choice of actor to each task. The model requires two types of information as the input: the complete list of actors involved in the DP and, for each specific task, the potential candidates who can perform the job. This information is provided to the model using a binary task-actor dependency matrix (ACjn in Table 1), where the number 1 implies actor j is associated with the task n and 0 otherwise.
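A minimal sketch of the binary task-actor dependency matrix ACjn and the candidate lookup (the actor and task names are invented for illustration):

```python
# AC[j][n] = 1 means actor j is associated with task n, 0 otherwise.
AC = {
    "designer_1": {"T1": 1, "T2": 0, "T3": 1},
    "designer_2": {"T1": 1, "T2": 1, "T3": 0},
    "cost_eng":   {"T1": 0, "T2": 1, "T3": 1},
}

def candidates(task):
    """All potential actors for a task (AC entries equal to 1)."""
    return [j for j, row in AC.items() if row.get(task) == 1]

candidates("T1")  # ["designer_1", "designer_2"]
```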
As far as this research is concerned, there are two challenges in finding the right choice of actor for a task: What is the best alternative among available options and what is the actor’s most appropriate level of expertise (knowledge on the task) to satisfy the minimum quality requirements? Addressing these challenges is likely to be different with regard to different contexts. When there is one choice of actor for a task, for example, in aircraft engine design with a well-established DP (Sec. 5.3), the challenge is to identify the minimum level of the actor’s expertise that can satisfy the quality requirements of a task. In more flexible DPs, such as designing a completely new product, there might be multiple choices of actors for doing the same job. In such cases, the challenge is to identify the right choice and the right level of expertise in satisfying the task requirements.
When discussing the task execution strategies with practitioners, a number of responses were received: (1) they know who the best choice is, according to their experience, or they do not have any other option at hand for that job; (2) they know who is doing the job but want to know what might happen if somebody else were to do the same job; or (3) they just know somebody who might have the relevant skills to do the job. In all these situations, understanding the minimum level of expertise for performing a task has nevertheless been a serious challenge.
Depending on the research context, the above challenges might be considered in different ways. In ABS, when setting up the simulation model, three types of task execution strategies (Fig. 2(c)) are proposed in order to (1) consider multiple forms of allocating actors to the project tasks, (2) keep the number of simulation scenarios as relevant as possible (Sec. 4.4.2), and (3) not lose sight of the essential information among other extraneous information.
Deterministic assignment of the actor: In this case, the actor is already assigned to the task and this is fixed throughout the simulation, but their level of expertise relating to the task is unknown and should be established before starting the simulation. This setting is more appropriate for large mature DPs where the design organization is less concerned with the composition of the team and more concerned with increasing the performance of the team. From a simulation point of view, this strategy significantly reduces the number of modeling scenarios, but cannot provide much information on what will happen if the task is performed by somebody else.
Comparative analysis of different actors: In this case, there is more than one choice of actor for a task, and it is assumed that the actors can perform the task at different levels of expertise. In other words, both the actor choice and the expertise level are uncertain. Therefore, the model runs the same task using all the different choices and at different levels of expertise. It then compares the output states in terms of the quality confidence level to find the best possible choice. For example, given two choices of actors, three levels of expertise for an actor (novice, junior, and senior), and three levels of quality confidence (low, medium, and high), applying the comparative execution strategy results in 18 different scenarios. Comparative execution can be helpful when building up a design team, for example, in the early phases of a DP, where there is a lack of explicit information on which actor should do which task, and at what level of quality, to improve project outcomes. Following a scenario analysis, if two or more actors produce the same outcome, the model assigns the actor with the higher level of influence. This information comes from the numerical actor influence matrix (Fig. 2(b)). The matrix uses the numbers 1, 2, and 3 to represent the low, medium, and high levels of influence, respectively, between actors. The rationale comes from the reality of design projects, where people at different levels of the organizational hierarchy usually exert different levels of influence and power.
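The scenario count and the influence-based tiebreak can be illustrated as follows (a sketch; the outcome and influence values are hypothetical):

```python
from itertools import product

def comparative_scenarios(actors, expertise_levels, quality_levels):
    """Enumerate all (actor, expertise, quality) simulation scenarios;
    2 actors x 3 expertise levels x 3 quality levels gives the 18
    scenarios mentioned in the text."""
    return list(product(actors, expertise_levels, quality_levels))

def best_actor(outcomes, influence):
    """Pick the actor with the highest-quality outcome; break ties by
    the numerical influence level (1 = low, 2 = medium, 3 = high)."""
    return max(outcomes, key=lambda a: (outcomes[a], influence[a]))

scenarios = comparative_scenarios(
    ["actor_1", "actor_2"],
    ["novice", "junior", "senior"],
    ["low", "medium", "high"])
len(scenarios)  # 18
```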
Probabilistic assignment of actors: This is an expanded version of deterministic assignment, in the sense that the input information is not sufficient to confidently assign an actor to a task, but it can represent the suitability of multiple choices in terms of probability values. The user of the model may not be able to say explicitly, between actor 1 and actor 2, who is the better choice for running task A, but can say which one is more likely to be the better choice. For example, the user can distinguish between the two choices by assigning them probabilities of 60% and 40%, respectively. Similarly, the user can fix a choice of actor and distinguish different levels of expertise by assigning different probability values, provided that the probabilities sum to 1. The default setting of the ABS simulation model in this case is 60% for a senior, 30% for a junior, and 10% for a novice. These values are modifiable before each round of simulation.
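Probabilistic assignment then reduces to a weighted draw over the candidates, sketched below with the probabilities quoted above (illustrative only; the option names are invented):

```python
import random

def assign_probabilistic(weights, rng=random):
    """Sample one option according to the given probabilities,
    which must sum to 1 (e.g., the model's default expertise split
    of 60% senior, 30% junior, 10% novice)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    options = list(weights)
    return rng.choices(options,
                       weights=[weights[o] for o in options], k=1)[0]

# Actor choice: a 60% / 40% split between two candidates.
assign_probabilistic({"actor_1": 0.6, "actor_2": 0.4})
```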
Applications of ABS and Implications
This section discusses the utility of the proposed method in practice. It first presents an overview of the algorithm that was used for ABS simulation. Then, the two case applications are presented to demonstrate the effectiveness of the model in dealing with real-world challenges. The case studies use a range of experiments to show the relevance of model outcomes in supporting dynamic process planning. The section ends with some implications for addressing managerial concerns.
Setting Up the Simulation.
Building on the previous discussion, the Arena software package from Rockwell Automation was selected as the main simulation platform to codify the method. An overview of the ABS simulation algorithm is illustrated in Fig. 4. The simulation model comprises three submodels corresponding to the preprocessing, processing, and postprocessing phases of the tasks. The processing phase itself contains three submodels related to the different task execution strategies. Overall, 26 attributes, 27 variables, and 6 queue types were used to simulate the ABS method.
To provide better functionality for the user, the Arena simulation platform was integrated with an Excel plugin. This has the potential to enable simplicity of reading, scalability of input data, and interactive visualization of outcomes. In doing so, an Excel file is used to transfer the input data (listed in Table 1) into the simulation model (specifically the preprocessing submodel). This includes the information on the initial task precedence, the probabilistic waiting and execution times, the minimum quality levels of parameters, and the human and nonhuman (computational) resources required to accomplish each task. The duration uncertainty of the tasks (both waiting and processing times) is represented in the simulation model using triangular probability distributions (lower limit, mode, and upper limit), noting that other probability distributions are also available.
Another file was created to automatically record the outputs from the postprocessing submodel and export them into multiple Excel sheets, each one of which is responsible for recording a specific type of output (e.g., task-related or actor-related data). This file is continuously updated when the simulation is running and provides information on the performance of the project (Table 2). To provide a better visualization of the outputs, the Excel sheets were integrated with a range of Scatter Plots, Gantt Charts, and Matrices. This information, together with the typical information provided by Arena, was then used for further statistical analysis. The following section presents the result of applying ABS to two real-case DPs.
Case Study 1: Flexible Redesign Process.
The first case study was undertaken in collaboration with a consulting company that provides re-engineering and design services for the automotive sector. During the first author's 5-month stay at the company, the method was applied to the case of an engine oil pipe. The primary findings of this application have been previously reported in Ref.  and are elaborated in the following.
Problem Description. The product had already been designed and hence the main parameters, albeit not their values, were roughly determined. The design team was asked to redesign the product. Six concepts were generated, each one of which was characterized by unique technical performance, resource composition, and overall costs. While being flexible on the time taken to deliver the final solution, the client had a particular emphasis on satisfying the product requirements at a lower cost. It was, therefore, fundamental for the managers in charge to be precise in their cost estimation. Satisfying client needs, which led to the product requirements, was very challenging for the design team, because of product-related changes necessary to stay competitive in the market, and also process-related changes in properties, such as activity durations, number of designers, or availability of the right competencies that could affect the process performance.
In this situation, the ABS method contributed to the concept selection as a process improvement tool to support managers in finding the most appropriate composition of the design team considering the project duration and cost. In particular, rather than offering an optimal design solution, the objective was to predict the process behavior by offering a range of possible scenarios and quantifying their impacts on project outcomes. The situation is displayed in Fig. 5, illustrating the concept selection process from the primary design concept to the final best proposal, based on the modeling and analysis results.
The data required for the simulation of the conceptual design process for the engine oil pipe came from a range of workshops and individual discussions with managers and decision-makers. Access was also granted to relevant documents and there was a chance to visit a number of laboratories and workshops where product design and development were carried out. This close collaboration with practitioners provided the authors with detailed information on the requirements, clients’ expectations, and objectives of the redesign process, which was helpful when simulating and verifying the model.
Structurally, the DP comprised four main phases: requirement analysis, concept design, detail design, and testing. Twenty-two tasks were identified for the redesign process, which could be accomplished in regular time (usually with overtime) or on time (without delay). Eight technical parameters (such as temperature, pressure, and process technology options) were recognized as having a potential impact on the execution of these tasks. In addition, up to four actors (a senior and a junior designer, and a senior and a junior cost engineer) were assigned to perform the redesign process. However, the allocation of actors was subject to the condition that there be a combination of designers and cost engineers: two designers could not work together, and at least one cost engineer had to be involved. As a consequence of this rule, the task execution strategy in the simulation model was set to "deterministic."
Simulation Result. The primary simulation of the case was performed using two actors (one designer and one cost engineer) and based on on-time completion (without delay). To reflect learning during the DP, experts in the company suggested setting the duration of a rework task to 80% of the original duration, at the same cost. Figures 6 and 7 show the performance evaluation of the ABS simulation model at the individual task level (based on a single simulation run) and the overall project level (based on 1000 simulation runs), respectively. The simulation results were refined and verified during workshops with managers and experts over a 2-month period.
Regarding the discussion on sociotechnical complexity that was presented in Sec. 2.1 (and displayed in Fig. 1), the task-level performance criteria (Fig. 6) provide detailed information on the way that design tasks should be carried out, i.e., how much effort is required to accomplish each task in terms of duration (as a Gantt Chart), cost, rework, and performance quality. The parallel bars in Fig. 6 provide a simple and visual way to present different properties of the tasks (cost–benefit trade-off) in order to identify criticalities. For example, the tasks CD03-1 and CD02-1 represent a shorter execution time, but the highest level of rework required to satisfy minimum quality levels. Tracking these tasks back in the process shows that they are related to the documentation and investment analysis of the production process. Some other tasks are more effort-intensive, such as CD01-3 (idea generation) and DD02 (concept testing), but can significantly improve the overall project quality with fewer iterations. This type of trade-off helped managers identify criticalities and mutual impacts between individual task properties.
The project-level performance criteria (Table 2) focused on high-level properties of the design project, such as the total project duration, cost, amount of rework, and increment in quality levels. To verify the ability of the simulation model to capture the effect of uncertainty (e.g., associated with waiting time, processing time, expertise level, etc.) on project properties, 1000 simulation runs were executed. The result is shown graphically in Fig. 7 using Parallel Coordinates plots, including the full view (at the top right corner) and the customized business view. Each column-type dimension in the figure is associated with one of the project properties: total duration, cost, number of reworks, and increment in quality levels.
The customized view in Fig. 7 illustrates the example of the situation (range of scenarios) that was of more interest to managers: indicating the range of project properties when focusing on the delivery of a quality product to the client at a lower cost. From a modeling point of view, delivery of a quality product means achieving the higher confidence levels in quality parameters, which, in turn, depends on the expertise level of actors. Tasks accomplished by more experienced actors (seniors) are expected to deliver a higher confidence level along with an increase in the total cost. Conversely, tasks performed by less experienced actors appear to be cheaper, despite the higher number of rework cycles required to achieve the target quality. Parallel visualization of the simulation outputs, as it is represented in Figs. 6 and 7, helped identify the right composition of the design team as well as work packages.
Sensitivity Analysis. At least three extended meetings were organized in the company to evaluate aspects of the sensitivity analysis of the model. A set of five what-if scenarios were studied, mainly in relation to the impact of change in the probability of successful completion of the tasks, in rework policies, in client needs (change in the quality parameter levels), in product requirements (in terms of task-parameter associations), and in the composition of the design team (the number of actors involved in the process). As an example, the following shows how a change in the composition of the design team can affect the total project duration.
We previously mentioned that the redesign process could be accomplished in regular time or on time. In addition, up to four actors were involved in the process, subject to the condition that there be a combination of designers and cost engineers. Reflecting these rules in the simulation model resulted in four different scenarios for performing the tasks: (S1) two actors with normal (regular) completion, (S2) two actors with on-time completion, (S3) four actors with regular completion, and (S4) four actors with on-time completion. A total of 1000 simulation runs were executed to study the impact of the different design team compositions on the project duration. Figure 8 compares the histogram for each scenario.
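The structure of such a multi-scenario Monte Carlo experiment can be sketched as follows; note that this toy model only mimics the shape of the experiment (22 tasks, 1000 runs) and uses invented triangular parameters, not the case-study data:

```python
import random
import statistics

def simulate_duration(n_tasks, rng, on_time=False):
    """Toy per-run project duration: a sum of triangular task
    durations, with on-time completion trimming the upper tail
    (illustrative only, not the Arena model)."""
    high = 6 if on_time else 10
    return sum(rng.triangular(1, high, 3) for _ in range(n_tasks))

def monte_carlo(n_runs=1000, seed=42, **kwargs):
    """Run one scenario n_runs times with a fixed seed for comparability."""
    rng = random.Random(seed)
    return [simulate_duration(22, rng, **kwargs) for _ in range(n_runs)]

regular = monte_carlo(on_time=False)
on_time = monte_carlo(on_time=True)
statistics.mean(regular) > statistics.mean(on_time)  # regular runs longer on average
```

In practice, the resulting samples per scenario would be plotted as histograms (as in Fig. 8) and compared on both mean and variance.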
From the practitioners' perspective, it was interesting to see how multiple project planning policies related to each other. As expected, on average, normal completion (usually with overtime) gave a higher project duration compared with on-time completion, yet at a lower variance. Moreover, using four actors resulted in a higher project duration than using two actors. This might be due to the expected waiting time for the alternate actors, who were dealing with other in-progress tasks. This sort of sensitivity analysis might be more helpful when other project properties (such as rework and cost) are taken into consideration. It could support managers in uncovering the interplay between the design team and the design tasks, and eventually in investigating the feasibility of multiple design packages in the broader scope of a project portfolio.
Case Study 2: Well-Structured Preliminary Design Process.
The second case study was carried out in collaboration with a world leading company that specialized in power and aviation systems. During the 8-month engagement of the first author in a collaborative project within the company, the ABS method was applied to the case of a preliminary DP relating to fan subsystems for civilian aircraft engines. The general objective of the project was to develop and deliver different design support methods, with each method concentrating on a specific range of challenges that DPs face. The work presented focuses on the modeling and analysis of the impact of actors’ knowledge and expertise on project execution.
All the research studies used the same case of a fan subsystem. In terms of research method, part of the input information in this study was adapted from the outcomes of workshops that had been conducted by two previous PhD candidates, as reported in Refs. [38,39]. However, several individual discussions with an expert in the company and knowledgeable persons in academia were held to convert the primary data into an ABS model and to verify the findings with respect to the company's needs.
Problem Description. The fan subsystem is a performance-driven product whose properties, such as weight, cost, and efficiency, can heavily affect the performance of the whole system. Accordingly, the fan preliminary DP aimed to achieve a mechanically acceptable quality in a given time. Hence, unlike the former case, the time to achieve the expected quality played a significant role. However, a large number of tasks in the project were each handled by a single actor. Therefore, the choice of actor for each task in this case was considered fixed, and the main challenge facing the design organization was to identify the appropriate level of the actor's expertise for each task. In a broader sense, the total project duration became a fundamental issue that could be affected by several properties, such as the quality of actors, the quality of outcomes, and the task execution strategies. Through the application of the ABS method, the aim was to address the above challenge and to compare different task execution strategies to determine the best fit between tasks and actors. The ultimate objective was to support managers and decision-makers in making resource management more dynamic in response to uncertain conditions.
Following the data collection process, the DP was found to comprise three principal phases: concept generation, aero-thermal design, and mechanical design, the last of which includes primary stress analyses, impact analyses, and manufacturing assessment. As a result of interviews and workshops with practitioners (presented in Refs. [38,39]), 52 tasks were identified for modeling, along with 14 associated actors, including designers, engineers, analysts, and managers [38, p. 159].
Simulation Result. The primary simulation of this case was performed based on the "comparative" task execution strategy (Fig. 2(c)), in that each task could be executed at three levels of expertise: novice, junior, and senior. The purpose of this comparison was to ensure that each task was executed at its highest level of quality (in confidence levels). As mentioned before (Sec. 4.4.2), the probability of delivering a low-, medium-, and high-quality task by a novice was taken as (60%, 30%, and 10%), by a junior as (20%, 60%, and 20%), and by a senior as (10%, 30%, and 60%). The impact of learning during the process on task duration, as in the previous study, was set to 80% of the original duration for a rework task. However, due to confidentiality issues, the information regarding project cost was not made available in this study. Instead, owing to its importance to the practitioners, human resource-related information, such as the best actor choice for each task and the overall resource utilization, is presented in more detail in the simulation results. The results of the ABS simulation performance evaluation are presented in Figs. 9 and 10 at the individual task level (based on a single simulation run) and the overall project level (based on 1000 simulation runs), respectively.
The parallel bars in Fig. 9 represent the effort (duration and rework) required to perform each task with the best choice of actor at its highest level of quality while satisfying the minimum confidence levels. The output quality bar in the figure confirms that all tasks were performed at least at a medium level of quality. To achieve the target quality, a few tasks show a considerable amount of rework. Among the five tasks with the highest number of rework cycles, four belong to the mechanical design phase. This was not surprising to the practitioners, since the majority of the tasks in this phase are concerned with evaluating product properties through, for example, stress tests, impact analyses, and manufacturing assessments.
Measuring the actors’ performance in the proposed model was based on the number of tasks assigned to them (Sec. 4.5). Given the comparative task execution strategy as the default scenario, the result of a single-run simulation indicates that about half of the tasks (52%) were performed by a novice actor, while one-fourth of them involved a high degree of rework. The shares for the junior and senior levels of expertise are 21% and 27%, respectively. This result was worth further investigation, since the project manager must trade off the cost of running tasks with less experienced actors, accepting a higher chance of iteration at a lower individual cost, against the cost of running tasks with more experienced actors, accepting a higher waiting time at a higher individual cost.
As with the previous study, 1000 simulation runs were executed to verify the simulation performance by capturing the effect of uncertainty levels on project duration. The vertical dimensions of the Parallel Coordinates map (Fig. 10) show the total duration, the number of reworks, and the novice, junior, and senior levels of expertise as the best choice of actor. The objective of this study was to understand which combinations of the work package minimize the project duration; this is reflected in Fig. 10 by focusing on the lower range of project durations. An additional restriction was applied by focusing on scenarios that used more novice actors to achieve those durations; the 1000 simulation runs showed that such a combination was not very probable. Finding the right compromise in such situations depends on management preferences and on considering the design project in the broader context of the project portfolio.
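The filtering applied to the Fig. 10 results can be sketched as follows. The run generator below is entirely synthetic stand-in data (the real values come from the ABS simulation), and all names and thresholds are illustrative assumptions.

```python
import random

def synthetic_run(rng):
    """Stand-in for one ABS simulation run: more novice assignments tend to
    mean more rework and therefore longer durations (as observed in the text)."""
    novices = rng.randint(10, 40)                      # tasks best done by a novice
    reworks = max(0, int(0.5 * novices + rng.gauss(0, 3)))
    duration = 180 + 5 * reworks + rng.gauss(0, 10)
    return {"duration": duration, "reworks": reworks, "novices": novices}

rng = random.Random(1)
runs = [synthetic_run(rng) for _ in range(1000)]

# First filter: the lower range of project durations (here, the fastest ~10%).
cutoff = sorted(r["duration"] for r in runs)[len(runs) // 10]
fast = [r for r in runs if r["duration"] <= cutoff]

# Additional restriction: fast runs that also relied on many novice actors.
fast_and_novice = [r for r in fast if r["novices"] >= 30]
share = len(fast_and_novice) / len(runs)
```

Because rework (and hence duration) grows with the novice count in this sketch, the doubly filtered set is small, mirroring the observation that fast, novice-heavy outcomes were improbable.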
Sensitivity Analysis. One of the main contributions of this study has been to understand the impact of design organization on project execution. This has been accomplished in this case study by designing a set of scenarios as the result of using different task execution strategies: (1) deterministic assignment: all tasks done by novices, (2) deterministic assignment: all tasks done by juniors, (3) deterministic assignment: all tasks done by seniors, (4) comparative assignment, and (5) probabilistic assignment. Figure 11 compares the histogram distributions of project duration for each scenario, which are based on 1000 Monte-Carlo simulation runs.
It is not surprising that using different execution strategies, and in particular changing the expertise level of the people involved in the process, led to different project behaviors. According to Fig. 11, using only senior actors resulted in the minimum project duration compared with the other scenarios. However, the simulations were based on the assumption that senior actors deliver low-, medium-, and high-quality work with likelihoods of (10%, 30%, and 60%); should a different set of probabilities be used, the results would most likely differ. Nevertheless, the result implies that if the objective of the project was to reduce uncertainty, then scenarios 2 and 3 might have been the better option. Conversely, if the objective was to remove project constraints or shift the target duration, then scenario 4 (comparative assignment) should be considered. For decision-makers, this also implies that when allocating actors to tasks, the way actors communicate to deliver a better design solution (as a result of sharing knowledge) can affect the execution of those tasks; for example, a design task can be run by a novice who is advised by a senior.
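A minimal sketch of this five-scenario Monte-Carlo comparison is shown below, with the project reduced to a chain of identical tasks. The quality probabilities are those given in the text and the task count of 52 comes from the case description; the base duration, the tie-breaking rule for the comparative strategy, and all function names are invented for illustration.

```python
import random

PROBS = {"novice": (0.6, 0.3, 0.1),
         "junior": (0.2, 0.6, 0.2),
         "senior": (0.1, 0.3, 0.6)}
ORDER = ("novice", "junior", "senior")

def task_duration(expertise, base, rng):
    """One task's duration: rework at 80% of the base duration is repeated
    until at least medium quality (index >= 1) is sampled."""
    d = base
    while rng.choices((0, 1, 2), weights=PROBS[expertise])[0] == 0:
        d += 0.8 * base
    return d

def comparative_choice(rng):
    """Sample one quality draw per expertise level and keep the level with
    the highest draw, ties going to the least experienced (cheapest) actor."""
    draws = {e: rng.choices((0, 1, 2), weights=PROBS[e])[0] for e in ORDER}
    best = max(draws.values())
    return next(e for e in ORDER if draws[e] == best)

def project_duration(strategy, n_tasks, base, rng):
    total = 0.0
    for _ in range(n_tasks):
        if strategy == "comparative":
            level = comparative_choice(rng)
        elif strategy == "probabilistic":
            level = rng.choice(ORDER)
        else:                          # deterministic: strategy names the level
            level = strategy
        total += task_duration(level, base, rng)
    return total

rng = random.Random(7)
scenarios = ("novice", "junior", "senior", "comparative", "probabilistic")
histograms = {s: [project_duration(s, n_tasks=52, base=10, rng=rng)
                  for _ in range(1000)]
              for s in scenarios}
means = {s: sum(h) / len(h) for s, h in histograms.items()}
```

Even in this toy version, the all-senior scenario dominates the all-novice one on expected duration (roughly 10.9 vs. 22 time units per 10-unit task), reproducing the qualitative ordering reported for Fig. 11.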
Summary of Case Studies and Implications.
In this section, the utility of the proposed model was illustrated through its application to two real-case studies, which were substantially different in their structure and specific objectives. All of the results summarized above were discussed with managers and decision-makers, who verified the practical usefulness and plausibility of the proposed method. The simulation model for the ABS method has been continuously refined throughout the case studies, based on the feedback received from experts and practitioners. Some of the suggested improvements were applied to the model and discussed further with the practitioners during the sensitivity analysis. The rest remains as future opportunities that are presented in this section.
It is widely accepted that a single model cannot cover all aspects of a DP [3,4]. Through the integration of Arena with Excel in the proposed method, attempts were made to broaden the range of visualization tools that can provide improved decision-making support. While in each case the company was already using a range of tools and techniques to support process improvement, experts confirmed that the usefulness of the proposed model justified its development cost.
In particular, the company of the first case study continued using the ABS method as a predictive tool for at least six months. Due to their familiarity with the visualization tools (Parallel Coordinates, charts, histograms), interpreting the results was not a challenge. By the end of the first case study, the company had suggested expanding the scope of the modeling to include the entire product life cycle, in order to provide a range of PDP leverage points.
Concerning the latter fan case study, the ABS method was used more as a roadmap, identifying who should do what and how the quality of actors would affect the quality of process execution. The DP of the fan subsystem was highly sequential in most parts and did not allow full interaction among the design team members to negotiate a design solution. As a result, when performing a task, an actor had to consider both the outcome of his or her own job (local success) and its global impact on overall system performance (see the discussion in Sec. 2.1). ABS addressed this critical challenge for the company by providing implicit knowledge of the impact of actors’ (local) quality on the project’s (global) quality. The company verified the usefulness of the proposed method in capturing the mutual impact of uncertainties and expressed much interest in improving the model to deliver a better representation of each role in the case study. “This is indeed a never-ending challenge and worth further investigation,” said one systems engineer.
Overall, the proposed method aimed to provide a more realistic model of design projects, one rich enough to provide helpful supporting information for decision-makers. Rather than delivering one optimal solution, the proposed method articulated the landscape of the project, within which managers could explore business opportunities based on possible outcome scenarios. There remain several points of improvement, reported by the practitioners, that provide opportunities for future research.
The main limitation of the proposed model relates to its knowledge-intensive structure. The model can cope when the information is uncertain and tacit, since its mechanics enable a response to any change in the input information. However, when the information is scarce or indeterminate, such as the information on actors’ influences, the results may be imperfect.
The mechanism of communication and interaction in ABS is measured implicitly: the method relies on an actor influence matrix (as input) to capture the interactions between people. In addition, people are likely to interact and influence each other indirectly through the organizational hierarchy. Even when the relevant information is available, as it was in the case studies, it is likely to be represented subjectively or to be inaccessible due to confidentiality issues.
Resource sharing among tasks of multiple projects was not considered in the current version of ABS. In large companies, people usually work on more than one project at a time. Therefore, their availability is subject to considering the design project within a portfolio. From a modeling viewpoint, it is very challenging to find the balance between richness in information and generality of the model in a project portfolio.
Finally, in the current version of ABS, the dependency between parameters was not considered. This was due to the difficulty in explicitly understanding the dependency between product–process–organization parameters. This point was not reported by practitioners, but it is worth looking into in future development attempts.
This paper proposed a simulation-based model to support the management of DPs. It discussed the current needs of managers to understand the interplay between technical systems and design organizations, leading to the requirement that process modeling be viewed from the perspective of sociotechnical systems. Building on the relevant literature, some of the sociotechnical aspects of uncertainty, whose simultaneous occurrence can greatly affect design decisions, were included.
As a response to these challenges, a method referred to as actor-based signposting was proposed, and its efficiency and effectiveness in supporting decision-makers were demonstrated through two real applications. Considering the managerial insights, ABS offers managers an actionable and potentially scalable tool to support a range of planning issues, from traditional scheduling and sequencing to the identification of organizational planning factors that help determine how a DP should be carried out.
To summarize its development and application, ABS can support the following:
Predictive process planning: Thanks to task-forward construction, restructuring tasks in ABS is much easier than with similar dynamic modeling tools (e.g., DSM, Petri-Nets, and ATP). It can be used to support managers in speeding up tasks to achieve a certain level of quality, in making project plans predictable, in achieving an effective task–actor planning system, and in distinguishing between good decisions and better ones;
Dynamic organizational planning: Modeling actors in ABS provides a mechanism to measure the impact of organizational preferences on technical decisions. The result is the assignment of the most appropriate actor to each task in order to, for example, achieve a reasonable number of reworks within an acceptable project length. In addition, it creates an opportunity to step toward organizational design issues, such as identifying the mechanism by which actors should communicate to achieve the highest quality in project outcomes;
Integrated modeling of sociotechnical systems: From this perspective, the DP is more complex and more problematic to model. The proposed method attempted to identify hidden issues of the DP that had previously been neglected while uncovering new challenges for modelers. Nonetheless, finding the right balance between the complexity of the model and its utility in modeling real-life issues remains a challenge;
Support for managers in locally dealing with uncertainty: This is achieved through the stochastic formulation of the problem, i.e., of process variables at both the task and project levels, which allows modeling of various types of uncertainty with respect to project objectives. Consequently, further analysis of uncertainty levels during the simulation makes it possible to assess the overall process behavior through sensitivity analysis.
Overall, the experience of a number of case studies, including those presented here, showed that there should be an effective compromise between the time spent building a model and the usability of its output in the company.
The authors would like to thank Peter Holloway (Rolls-Royce plc) and Daniel Shapiro and Hillario Xin Chen (Cambridge EDC) for their valuable inputs and assistance during the fan subsystem case study. We also gratefully acknowledge all STC-srl employees for their valuable feedback and support during the development and application (including the case study and knowledge transfer) of the methodology. Some material in this manuscript was adapted and substantially extended from the earlier works of Refs. [14,17,41,42].
- n = task index
- PDe =
- PPr = progress of the project (percentage)
- TBu = total budget of project
- TPC = total project cost
- TPD = total project duration
- TPR = total number of rework cycles in a project
- TTC = total number of completed tasks
- TTI = total increment in quality of tasks
- ACj = choice of actor j
- ECOn = execution cost for original task n
- ETOn = execution time for original task n
- ECRn = execution cost for rework task n
- ETRn = execution time for rework task n
- MTCi = minimum confidence level of parameter i
- TCni = output confidence level of parameter i on task n
- TIn = increment in quality of task n
- TRen = number of rework cycles per task n
- TSn = state of a task (binary)
- URj = utilization of human resource (actor) j in process
- WTOn = waiting time for original task n
- WTRn = waiting time for rework task n