Abstract

Concept selection is one of the most important activities in new product development processes in that it greatly influences the direction of subsequent design activities. As a complex multiple-criteria decision-making problem, it often requires iterations before reaching the final decision where each selection is based on previous selection results. Reusing key decision elements ensures decision consistency between iterations and improves decision efficiency. To support this reuse, this article proposes a fuzzy ontology-based decision tool for concept selection. It models the key decision elements and their relations in an ontological way and scores the concepts using weighted fuzzy TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution). By applying the tool to an example, this article demonstrates how the concepts, criteria, weights, and results generated for one decision can be reused in the next iteration.


1 Introduction

At the beginning of a product development process, companies generate multiple product concepts in response to customer needs. How well the chosen concept meets these needs greatly impacts the quality of the final product and its development process [1]. This makes concept selection, as an engineering design decision problem, one of the most important activities in new product development processes [2]. Product development processes are iterative, including exploration, concretization, refinement, and incremental completion, but also the correction of mistakes and rework as assumptions or requirements change [3]. In response, concept selection is also iterative, as concepts are modified or new concepts are generated. New criteria could also be introduced due to changing requirements [4,5] and designers becoming aware of new contextual factors [6]. In each iteration, one or more concepts are selected for further investigation or development [7,8]. It is important to maintain consistency in the criteria and constraints as far as possible and to keep the decision process transparent.

The literature on concept selection focuses largely on single decisions by developing mathematical models for scoring concepts with given criteria, for example, the works by Ayağ [9], Hayat et al. [10], Gironimo and Grazioso [11], Jing et al. [12], Qi et al. [13], and Song et al. [14]. It neglects the iterative nature of concept selection and misses the potential to reuse decision chunks to reduce the effort in setting up decision-making problems.

This article proposes a fuzzy ontology-based decision tool for concept selection, aiming to answer two questions:

  • RQ1: How to support the reuse of decision elements for concept selection in the design process?

  • RQ2: How to facilitate the reuse of decision elements across selection iterations in the design process?

The proposed tool allows designers to choose which criteria, concepts, associated values, and calculation method they intend to reuse from previous iterations. It also supports modifying these elements or introducing new ones. This maintains consistency in decisions between iteration cycles, offers transparency, and reduces the effort of using a decision-making tool. Section 2 explains the principle behind the tool using an example from the literature on designing a golfing machine. Section 3 presents the tool and develops the example further. Section 4 discusses the benefits of the tool for concept selection in engineering design. The article is concluded in Sec. 5.

2 The Scheme of the Proposed Tool

Figure 1 presents the framework of the tool. The ontology model describes the elements responsible for articulating concept selection decisions. The computing engine implements the algorithms for scoring the concepts. The engine is fully automatic but allows interaction if the user wants to modify the data.

Fig. 1
The framework of the proposed tool

2.1 The Fuzzy Ontology Model.

To allow reuse, the knowledge needs to be captured and represented in an understandable and interoperable manner. This can be done using ontologies, which are explicit specifications of a shared conceptualization, describing features, attributes, and restrictions [15,16]. The ontology models vary depending on the definitions of domain problems. Concept selection has been formalized ontologically in engineering design, for example, by Shah et al. [17], Gosnell and Miller [18], Ranjan et al. [19], and Siddharth et al. [20]. These works explicitly define measurement elements and outcomes such as quality, novelty, and creativity and propose equations to calculate the scores for each concept. They provide a useful starting point for designers; however, they do not reflect the need to reuse these decision elements across different decision cycles, nor are they presented in a way that lends itself to use in computer tools.

This article develops an ontology model for concept selection, which defines the decision elements, their relations, and the properties of elements. It uses the formal terms of a generalized ontology and is implemented in the standard Web Ontology Language OWL. This captures information for concept selection at the computational level that can be easily processed by computers and implemented in a tool.

  • Classes: Collection of instances that share common properties and provide conceptual description of the domain knowledge scope.

  • Individuals: Specific instances within a class, representing a particular object in the domain.

  • Data properties: Attributes or characteristics of individuals.

  • Object properties: Relationships or connections between classes or individuals.

For concept selection, concepts are scored by combining the weights of criteria and the performance values. The one with the top score is considered the best choice. To represent the selection process, six key decision elements are extracted as classes, and the ontology model is designed as shown in Fig. 2.

  1. Concept: The concept of an alternative solution that meets the problem definition and fulfills the specifications.

  2. Criterion: A specification or requirement that a concept should meet.

  3. Performance: How well a concept satisfies the evaluation criterion under consideration.

  4. Preference: The judgment from an expert on the importance of a criterion.

  5. Ranking: The calculated results including the score and the rank of a concept.

  6. Weight: The calculated priorities of a criterion.

Fig. 2
The ontology model for concept selection

In the ontology model, DecisionKnowledge links all the classes, while Result gathers the calculated results. The semantic relationships between classes are captured by object properties. The classes are connected to data via data properties. The definitions of the properties are presented in the  Appendix. In each iteration, designers can select existing instances under these classes or create new ones and then retrigger the computing engine.
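To make the structure concrete, the classes and their links can be mirrored in plain code. The following Python sketch is illustrative only: the attribute names are simplified stand-ins for the OWL object and data properties named above (hasCriteria, owningPreference, pfsUnderCri, etc.), and the real model lives in the OWL file.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory mirror of the ontology classes; attribute
# names stand in for the OWL object/data properties.

@dataclass
class Criterion:
    name: str
    expectation: str = "benefit"      # criExpectation: "benefit" or "cost"

@dataclass
class Preference:
    criterion: "Criterion"            # owningPreference target
    judgement: str                    # preJudgement, a linguistic term

@dataclass
class Concept:
    name: str

@dataclass
class Performance:
    concept: "Concept"                # owningPerformance target
    criterion: "Criterion"            # pfsUnderCri target
    judgement: str                    # pfsJudgement, a linguistic term

@dataclass
class DecisionKnowledge:
    name: str
    concepts: list = field(default_factory=list)      # hasConcepts
    criteria: list = field(default_factory=list)      # hasCriteria
    preferences: list = field(default_factory=list)
    performances: list = field(default_factory=list)

# Reuse across iterations: a new DecisionKnowledge instance links to the
# same Criterion objects rather than copying them.
drives_well = Criterion("drives_well")
iter1 = DecisionKnowledge("SltGolfMachine_v1", criteria=[drives_well])
iter2 = DecisionKnowledge("SltGolfMachine_v2", criteria=list(iter1.criteria))
assert iter2.criteria[0] is drives_well   # the very same instance is reused
```

The final assertion captures the reuse principle: the second iteration references the existing Criterion instance instead of duplicating it, which is what keeps the decisions consistent across cycles.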

During the selection, there are two types of data: objective values, such as price, and subjective judgments, such as "easy to manufacture." Linguistic terms (e.g., high, medium, and low) describe how concepts perform under subjective criteria. To address this uncertainty, the ontology model incorporates triangular fuzzy numbers (TFNs), a special type of fuzzy set. A classical fuzzy set consists of two components: a set of elements x and an associated membership function that assigns each element a value between 0 and 1 as the degree to which it belongs to the set. For a TFN, the membership function is defined as shown in Fig. 3(a) and is usually expressed as a triple (l, m, h). Introducing fuzzy sets into ontologies gives rise to fuzzy ontology [21]. Although there is no standard for fuzzy ontology representation, fuzzy OWL 2 proposed by Bobillo and Straccia [22] is a convenient extension to the standard Web Ontology Language OWL. Figure 3(b) shows an example of expressing a TFN. In this way, the proposed tool works well with OWL editors such as Protégé.2
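The triangular membership function of Fig. 3(a) is straightforward to compute. A minimal sketch (the function name is ours):

```python
def tfn_membership(x, l, m, h):
    """Degree to which x belongs to the triangular fuzzy number (l, m, h)."""
    if x <= l or x >= h:
        # outside the support; also handles degenerate TFNs such as (1, 1, 2)
        return 1.0 if x == m else 0.0
    if x <= m:
        return (x - l) / (m - l)      # rising edge from l to m
    return (h - x) / (h - m)          # falling edge from m to h
```

For the TFN (4, 5, 6), for instance, `tfn_membership(4.5, 4, 5, 6)` returns 0.5, and the peak value m has membership 1.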

Fig. 3
TFN: (a) membership function and (b) an example expression in fuzzy OWL 2

Wrapping up decision elements into classes has been suggested by Liu and Stewart [23] to support the reuse of knowledge in decision analysis. However, their object-oriented methodology only considers aggregation and inheritance, which are not enough to capture the relations between the decision elements in concept selection, for example, the relation between the class of design concept and the class of performance.

2.2 The Computing Engine.

To fill the values of weight and ranking instances in the ontologies, the computing engine parses the data from the associated Concept, Criteria, Performance, and Preference instances, calculates the criteria weights, and scores the concepts. Weighted fuzzy TOPSIS is used to provide a practical and straightforward way of calculation. TOPSIS is a compromise method proposed by Hwang and Yoon [24]. Its principle is that the chosen alternative should have the shortest distance to the positive ideal solution (PIS) and the farthest distance from the negative ideal solution (NIS). The working process is shown in Fig. 4 and explained with an example of designing a remote-controlled golfing machine from Kosky et al. [25]. This example is further developed in Sec. 3. Three concepts (Original, Cannon, and Robogolfer) are assessed against four criteria (Drives well, Putts well, Loader is robust, and Easy to transport).

Fig. 4
The working steps of the engine

Phase I: Calculating the weights of the selection criteria of a DecisionKnowledge instance as follows:

  • Extract criteria and preference values. All the Criterion instances are extracted by tracing the object property hasCriteria from a DecisionKnowledge instance. Their importance judgments are stored under the data property preJudgement of Preference, which are linked to Criterion via object property owningPreference. Let C = {c1, c2, …, ct} be the set of Criterion instances and P = {p1, p2, …, pt} be the set of their corresponding preferences. t is the number of the Criterion instances. Each pi = (li, mi, hi) stands for a linguistic term expressed by a TFN. Table 1 lists the mappings between linguistic terms and TFNs for importance used in this work.

  • Normalize the fuzzy numbers. To limit the values within the unit interval, the fuzzy numbers are normalized to obtain the fuzzy weights for each Criterion instance by Eq. (1) [26,27].
    $$fw_i = (lw_i,\, mw_i,\, hw_i) = \left(\frac{l_i}{l_i + \sum_{k=1,\,k\neq i}^{t} h_k},\ \frac{m_i}{\sum_{k=1}^{t} m_k},\ \frac{h_i}{h_i + \sum_{k=1,\,k\neq i}^{t} l_k}\right) \tag{1}$$

Table 1

Judgment scale for importance

Importance definition   TFN       Importance definition   TFN
Extremely high (EH)     (8,9,9)   Very high (VH)          (7,8,9)
High (H)                (6,7,8)   Medium high (MH)        (5,6,7)
Medium (M)              (4,5,6)   Medium low (ML)         (3,4,5)
Low (L)                 (2,3,4)   Very low (VL)           (1,2,3)
Extremely low (EL)      (1,1,2)
For example, let the TFNs for criteria Drives well, Putts well, Loader is robust, and Easy to transport be (4,5,6), (3,4,5), (4,5,6), and (1,1,2), respectively. Using Eq. (1), the fuzzy weight of Drives well is calculated as follows:
$$fw_{\text{Drives well}} = \left(\frac{4}{4+5+6+2},\ \frac{5}{5+4+5+1},\ \frac{6}{6+3+4+1}\right) = (0.2353,\ 0.3333,\ 0.4286)$$
  • Defuzzify the normalized fuzzy numbers. The fuzzy weights are first defuzzified into crisp values cwi and then normalized into crisp weights wi, allowing an intuitive comparison of the importance ranking of the criteria. The crisp weights are obtained with the centroid method in Eq. (2).
    $$cw_i = \frac{lw_i + 2\,mw_i + hw_i}{4}, \qquad w_i = \frac{cw_i}{\sum_{k=1}^{t} cw_k} \tag{2}$$

With the example, the defuzzified weight of Drives well is calculated as cwDrives_well = (0.2353 + 2 × 0.3333 + 0.4286)/4 = 0.3326. Its crisp weight wDrives_well = 0.3326/(0.3326 + 0.2667 + 0.3326 + 0.0857) = 0.3269, where 0.2667, 0.3326, and 0.0857 are the defuzzified weights of the other three criteria.

  • The fuzzy and the crisp weights correspond to the data properties wgtFuzzyValue and wgtCrispValue of a Weight instance. They are attached to a Criterion instance via the object property belongingToCriWgt between Criterion and Weight.
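The Phase I steps above can be sketched in a few lines of Python. The snippet below is a minimal sketch (the dictionary keys are ours); the preference TFNs follow Tables 1 and 3, and the computed values reproduce the worked figures (0.2353, 0.3333, 0.4286) and 0.3269.

```python
# Sketch of Phase I (Eqs. (1) and (2)) for the golfing-machine example.
# Preference TFNs from Tables 1 and 3: M, ML, M, and EL for the four criteria.
prefs = {
    "Drives well":       (4, 5, 6),
    "Putts well":        (3, 4, 5),
    "Loader is robust":  (4, 5, 6),
    "Easy to transport": (1, 1, 2),
}
tfns = list(prefs.values())

def fuzzy_weight(i):
    """Eq. (1): normalize the i-th preference TFN against the others."""
    l_i, m_i, h_i = tfns[i]
    sum_h_others = sum(h for k, (_, _, h) in enumerate(tfns) if k != i)
    sum_m_all = sum(m for _, m, _ in tfns)
    sum_l_others = sum(l for k, (l, _, _) in enumerate(tfns) if k != i)
    return (l_i / (l_i + sum_h_others), m_i / sum_m_all, h_i / (h_i + sum_l_others))

fuzzy_weights = [fuzzy_weight(i) for i in range(len(tfns))]

# Eq. (2): centroid defuzzification followed by normalization to crisp weights.
crisp = [(l + 2 * m + h) / 4 for l, m, h in fuzzy_weights]
weights = [c / sum(crisp) for c in crisp]

print(dict(zip(prefs, (round(w, 4) for w in weights))))
```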

Phase II: Scoring the concepts. To deal with the fuzzy datatype in the ontologies, fuzzy TOPSIS is utilized, which replaces the crisp values in traditional TOPSIS with fuzzy numbers for calculation [28]. The steps are as follows:

  • Extract concepts and their performance. All the concepts are extracted following the object property hasConcepts from a DecisionKnowledge instance. Their performance under each criterion can be looked up via the object properties owningPerformance and pfsUnderCri. Let A = {a1, a2, …, an} be the set of Concept instances, where n is the number of the Concept instances. Their performance values are organized as a matrix in the next step.

  • Establish the performance matrix. The matrix is denoted by E = [eij]n×t, where eij is the Performance instance of Concept instance i under Criterion instance j. Each eij = (lij, mij, uij) stands for a linguistic term expressed by a TFN. Table 2 shows the mappings between linguistic terms and TFNs for performance.

  • Normalize the matrix. The performance matrix E is normalized by Eq. (3), where rij is the normalized fuzzy performance value. B stands for the benefit criteria, for which a higher value is desired (e.g., "easy to manufacture"), whereas C stands for the cost criteria, for which a lower value is desired (e.g., price). This attribute of a criterion is defined as the data property criExpectation in the ontology.
    $$r_{ij} = \begin{cases} \left(\dfrac{l_{ij}}{u_j^{+}},\ \dfrac{m_{ij}}{u_j^{+}},\ \dfrac{u_{ij}}{u_j^{+}}\right), & j \in B \\[1ex] \left(\dfrac{l_j^{-}}{u_{ij}},\ \dfrac{l_j^{-}}{m_{ij}},\ \dfrac{l_j^{-}}{l_{ij}}\right), & j \in C \end{cases} \tag{3}$$
    where $u_j^{+} = \max_i u_{ij}$ or a predefined maximum boundary value if $j \in B$, and $l_j^{-} = \min_i l_{ij}$ or a predefined minimum boundary value if $j \in C$. Take the three concepts Original, Cannon, and Robogolfer, for example. Let their performance under Drives well be (6,7,8), (7,8,8), and (6,7,8), respectively. The normalized entry of concept Original against criterion Drives well is rOriginal-Drives_well = (6/8, 7/8, 8/8) because this is a benefit criterion and 8 is the maximum upper value.
  • Construct weighted normalized matrix. The fuzzy weights of the Criterion instances are incorporated into the normalized performance matrix. Let W = (fw1, fw2, …, fwt) be the weight vector. vij is the weighted normalized value obtained by Eq. (4).
    $$v_{ij} = fw_j \times r_{ij} \tag{4}$$

Table 2

Judgment scale for performance

Linguistic expression   TFN for benefit indicator   TFN for cost indicator
Extremely high (EH)     (7,8,8)                     (0,0,1)
Very high (VH)          (6,7,8)                     (0,1,2)
High (H)                (5,6,7)                     (1,2,3)
Medium high (MH)        (4,5,6)                     (2,3,4)
Medium (M)              (3,4,5)                     (3,4,5)
Medium low (ML)         (2,3,4)                     (4,5,6)
Low (L)                 (1,2,3)                     (5,6,7)
Very low (VL)           (0,1,2)                     (6,7,8)
Extremely low (EL)      (0,0,1)                     (7,8,8)

The weighted normalized entry of Original against Drives well is vOriginal-Drives_well = (0.2353, 0.3333, 0.4286) × (6/8, 7/8, 8/8) = (0.1765, 0.2917, 0.4286).

  • Determine the positive and negative ideal solutions (PIS and NIS). PIS and NIS describe the best and the worst concepts, working as references to judge the candidate concepts. They are represented by the two vectors that contain, respectively, the best and the worst performance under the criteria. They correspond to the largest and the smallest ideal values in the weighted normalized matrix as shown in Eq. (5) or are the predefined desired level and worst level, respectively.
    $$\begin{aligned} PIS &= (v_1^{+}, v_2^{+}, \ldots, v_t^{+}), \quad v_j^{+} = \max_i v_{ij} \\ NIS &= (v_1^{-}, v_2^{-}, \ldots, v_t^{-}), \quad v_j^{-} = \min_i v_{ij} \end{aligned} \qquad i=1,2,\ldots,n;\ j=1,2,\ldots,t \tag{5}$$

In the example, after normalization, all the values are bounded by (0, 0, 0) and (1, 1, 1), and thus, the two boundaries combined with the weights of Drives well, Putts well, Loader is robust, and Easy to transport can be used as PIS and NIS, i.e., PIS = ((0.2353,0.3333,0.4286), (0.1765,0.2667,0.3571), (0.2353,0.3333,0.4286), (0.0556,0.0667,0.1538)), and NIS = ((0,0,0), (0,0,0), (0,0,0), (0,0,0)).

  • Calculate the distances to the PIS and NIS. For each Concept instance, Eq. (6) calculates the distances to the PIS and NIS, denoted by di+ and di, respectively.
    $$\begin{aligned} d_i^{+} &= \sum_{j=1}^{t} d(v_{ij}, v_j^{+}), \quad i=1,2,\ldots,n \\ d_i^{-} &= \sum_{j=1}^{t} d(v_{ij}, v_j^{-}), \quad i=1,2,\ldots,n \\ d(v_1, v_2) &= \sqrt{\frac{1}{3}\left[(l_{v_1}-l_{v_2})^2 + (m_{v_1}-m_{v_2})^2 + (h_{v_1}-h_{v_2})^2\right]} \end{aligned} \tag{6}$$

The distance to the PIS of Original is calculated as follows:
$$\begin{aligned} d_{\text{Original}}^{+} = {} & \sqrt{\tfrac{1}{3}\left[(0.2353-0.1765)^2+(0.3333-0.2917)^2+(0.4286-0.4286)^2\right]} \\ & + \sqrt{\tfrac{1}{3}\left[(0.1765-0.0442)^2+(0.2667-0.1)^2+(0.3571-0.1786)^2\right]} \\ & + \sqrt{\tfrac{1}{3}\left[(0.2353-0.1176)^2+(0.3333-0.2083)^2+(0.4286-0.3214)^2\right]} \\ & + \sqrt{\tfrac{1}{3}\left[(0.0556-0.0208)^2+(0.0667-0.0333)^2+(0.1538-0.0962)^2\right]} \\ = {} & 0.3622 \end{aligned}$$
  • Compute the final scores. The two distances di+ and di are aggregated to generate a final value by Eq. (7). The Concept instances are ranked in a descending order according to the scores.
    $$s_i = \frac{d_i^{-}}{d_i^{+} + d_i^{-}} \tag{7}$$

The final score of Original is $s_{\text{Original}} = 0.7283/(0.3622 + 0.7283) = 0.6679$, where 0.7283 is its distance to the NIS.

The score and the rank correspond to the data properties score and rank of a Ranking instance. They are further connected to a Concept instance via the object property belongingToCptRnk between Concept and Ranking.
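Phase II can likewise be sketched end to end. The Python snippet below recomputes the weights for self-containment and reproduces the worked figures for Original (d+ ≈ 0.3622, score ≈ 0.6679) as well as the overall ranking; the exact scores of the other concepts may differ slightly from those reported by the tool depending on implementation choices such as the PIS/NIS boundaries used.

```python
from math import sqrt

# Sketch of Phase II (Eqs. (3)-(7)) on the golfing-machine example.
# All four criteria are benefit criteria; TFNs follow Tables 1-3.
pref_tfns = [(4, 5, 6), (3, 4, 5), (4, 5, 6), (1, 1, 2)]   # criteria importance
perf = {                                                    # concept performance
    "Cannon":     [(7, 8, 8), (2, 3, 4), (4, 5, 6), (7, 8, 8)],
    "Original":   [(6, 7, 8), (2, 3, 4), (4, 5, 6), (3, 4, 5)],
    "Robogolfer": [(6, 7, 8), (7, 8, 8), (7, 8, 8), (3, 4, 5)],
}
t = len(pref_tfns)

def fuzzy_weight(j):
    """Eq. (1): fuzzy weight of criterion j (Phase I, repeated for self-containment)."""
    l_j, m_j, h_j = pref_tfns[j]
    return (l_j / (l_j + sum(h for k, (_, _, h) in enumerate(pref_tfns) if k != j)),
            m_j / sum(m for _, m, _ in pref_tfns),
            h_j / (h_j + sum(l for k, (l, _, _) in enumerate(pref_tfns) if k != j)))

fw = [fuzzy_weight(j) for j in range(t)]

# Eq. (3): benefit-criterion normalization by the maximum upper value, then
# Eq. (4): weighted normalized values v_ij = fw_j x r_ij (elementwise).
u_max = [max(p[j][2] for p in perf.values()) for j in range(t)]
v = {a: [tuple(fw[j][k] * p[j][k] / u_max[j] for k in range(3)) for j in range(t)]
     for a, p in perf.items()}

# Eq. (5) with extreme boundaries: PIS_j = fw_j x (1,1,1), NIS_j = (0,0,0).
pis, nis = fw, [(0.0, 0.0, 0.0)] * t

def dist(x, y):
    """Eq. (6): vertex distance between two TFNs."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / 3)

scores = {}
for a in perf:
    d_plus = sum(dist(v[a][j], pis[j]) for j in range(t))
    d_minus = sum(dist(v[a][j], nis[j]) for j in range(t))
    scores[a] = d_minus / (d_plus + d_minus)   # Eq. (7)

print({a: round(s, 4) for a, s in scores.items()})
```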

The calculation so far reflects one decision maker. Multiple decision makers can be considered by carrying out the aforementioned calculation independently for each and then aggregating their scores. Alternatively, their judgments can be aggregated first before a single calculation process is carried out. The arithmetic or geometric mean can be used for the aggregation, as in Eq. (8), where q is the number of decision makers, and st and (lt, mt, ht) are the crisp and fuzzy judgment values, respectively, of decision maker t.
$$\begin{aligned} \text{Arithmetic mean:} \quad & \frac{1}{q}\sum_{t=1}^{q} s_t \ \text{(crisp)}, \qquad \left(\frac{1}{q}\sum_{t=1}^{q} l_t,\ \frac{1}{q}\sum_{t=1}^{q} m_t,\ \frac{1}{q}\sum_{t=1}^{q} h_t\right) \ \text{(fuzzy)} \\ \text{Geometric mean:} \quad & \left(\prod_{t=1}^{q} s_t\right)^{1/q} \ \text{(crisp)}, \qquad \left(\left(\prod_{t=1}^{q} l_t\right)^{1/q},\ \left(\prod_{t=1}^{q} m_t\right)^{1/q},\ \left(\prod_{t=1}^{q} h_t\right)^{1/q}\right) \ \text{(fuzzy)} \end{aligned} \tag{8}$$
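Equation (8) can be sketched directly; the three fuzzy judgments below are hypothetical:

```python
from math import prod

# Sketch of Eq. (8): aggregating the judgments of q decision makers.
# The three judgment TFNs below are hypothetical.
judgments = [(4, 5, 6), (3, 4, 5), (5, 6, 7)]

def arithmetic_mean(tfns):
    """Componentwise arithmetic mean of a list of TFNs."""
    return tuple(sum(c) / len(tfns) for c in zip(*tfns))

def geometric_mean(tfns):
    """Componentwise geometric mean of a list of TFNs."""
    return tuple(prod(c) ** (1 / len(tfns)) for c in zip(*tfns))

print(arithmetic_mean(judgments))   # (4.0, 5.0, 6.0)
print(geometric_mean(judgments))
```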

2.3 Reuse Steps With the Tool.

The tool supports the reuse of a particular concept selection problem in three scenarios as illustrated in Fig. 5.

Fig. 5
Reuse scenarios and steps

Brand new selection problem. The designers start with creating a new DecisionKnowledge instance and then Concept instances. When defining the criteria, the designers could first look up in the instance repository to see whether existing Criterion instances meet their requirements, which could be linked to this new problem via object property hasCriteria. Otherwise, new Criterion instances need to be populated. Preference and Performance instances are then created by recording the linguistic judgments. All the instances are linked via the properties as defined in the ontology (see Fig. 2). To compare the concepts, the designers can invoke the engine, which automatically generates the weights of criteria and the scores and rankings of the concepts (i.e., Weights and Ranking instances).

Changes in a particular problem. When there are changes to the concepts, the criteria, or the judgments on the performance or the weights in a problem, the designers first look for the DecisionKnowledge instance corresponding to the problem and edit them by adding or deleting instances or modifying the values. Once the problem is revised, the engine is invoked for comparison.

New iteration on a problem. Most parts of a selection problem could stay the same in the following iteration. The designers create a new DecisionKnowledge instance, find and link the instances involved in the previous iteration, and fill in the gaps.

Both the change and the new iteration scenarios reuse existing decision chunks; however, a change edits the entire instance set of a selection in place, whereas a new iteration creates a new DecisionKnowledge instance, which helps track the changes and compare the results among iterations.

3 Application of the Tool

This section illustrates the reusability of the decision chunks using the example published by Kosky et al. [25]. The example is adapted to cover the three scenarios.

3.1 Concept Selection for Remote-Controlled Golfing Machine.

The RC portable device must play nine holes of golf at a local golf course with the fewest possible number of strokes. Its functions include driving, chipping, putting, loading balls, and transporting balls. Three concepts as shown in Fig. 6 have been generated that fulfill the design requirements. A comparison will be carried out to select the best concept based on the data in Table 3. Four subjective criteria translated from a customer requirement list are used for evaluation: drives well, putts well, ball loader is robust, and is easy to transport.

Fig. 6
Concept drawing of (a) the Cannon, (b) the Original, and (c) Robogolfer from Kosky et al. [25] (Figures 25.6–25.8).
Table 3

Data for concept selection, adapted from Kosky et al. [25]

                Drives well      Putts well             Ball loader is robust   Easy to transport
Weight          Important        Medium low important   Important               Extremely low important
Cannon          Extremely high   Medium low             Medium high             Extremely high
Original        Very high        Medium low             Medium high             Medium
Robogolfer      Very high        Extremely high         Extremely high          Medium

Phase 0: Creating the knowledge instances. The knowledge instances are created according to the fuzzy ontology model. The three concepts, four criteria, and their subjective judgments are modeled first as outlined in Fig. 7 and saved as an OWL file. The designers can use either Protégé or our software tool, CSelector.3

Fig. 7
The ontology instances and relations for the example

Phase I: Calculating the weights. The tool extracts the instances from the OWL file, classifies the values as shown in Fig. 8, and calculates the weights using Eqs. (1) and (2). The results are highlighted in the red box.

Fig. 8
Weights of the criteria

The properties of the criteria and the concepts can be viewed and edited using the tool. Any changes will be written back to the OWL file for further iteration.

Phase II: Scoring the concepts. Using Eqs. (3)–(7), the final results are generated as shown in Fig. 9. Robogolfer has the highest score (i.e., 0.8786), followed by Cannon (0.7255) and Original (0.6679).

Fig. 9
Final ranking of the three concepts

3.2 Introducing New Criteria.

A new criterion “easy to manufacture” is introduced, which affects development time. The three concepts perform differently against this criterion. Because the selection in the previous iteration has been recorded in the form of ontology instances, in this iteration, the designers only need to add a new Criterion instance easy_to_manufacture with its two data properties criDescription and criExpectation and then link it to a Preference instance. In the previous example, a Preference instance M that describes “important” has been created for criteria drives_well and loader_is_robust. Thus, the designers can reuse it for the object property owningPreference from easy_to_manufacture. Adding a new criterion requires new judgments on the performance of the concepts, so three new Performance instances, pfsC_etm, pfsO_etm, and pfsR_etm, and their properties are created according to the ontology model.

With the added data, the tool updates the results. Figure 10 shows that the criteria weights have been redistributed and the ranking has moved Original to the second place. This makes sense because of the good performance of Original under this newly introduced important criterion.

Fig. 10
Results of introducing a new criterion

3.3 Introducing New Alternative Concept.

In the previous selection, Robogolfer has the highest score; however, manufacturing the robot arm of the Robogolfer is very complicated. A new concept OriginalUpdate is introduced by replacing the putter of Original with Robogolfer's linear spring putter. This leads to a new iteration of the selection. All five criteria and the three concepts (as well as their judgments) are passed to this iteration. The designers only need to create a new Concept instance, add the associated Performance instances, and link them to the DecisionKnowledge instance. The tool receives the updated ontology file and generates the ranking results, where OriginalUpdate is ranked top (see Fig. 11). The criteria weights and the order of Robogolfer, Original, and Cannon still stay the same because no changes have been made to their judgments.

Fig. 11
Results of introducing a new concept

4 Discussion

4.1 Identification of Reusable Decision Instances.

The proposed ontology model describes the key decision elements in concept selection, illustrating that the criteria, concepts, judgments, and results generated for one iteration can be reused in another. The designers can further look up particular reusable instances via the instance of class DecisionKnowledge, check the properties, and then decide whether to reuse them. This provides transparency between iteration loops. As illustrated in the golfing machine example (see Fig. 12), the designers trace the four criteria and the three concepts at the beginning of the iteration that introduces new criteria, and these are further reused in the iteration that introduces a new concept. Afterwards, the designers apply the calculation to compare concepts using the existing and modified criteria and weights.

Fig. 12
Checking reusable instances

4.2 Reuse Across Decision Problems.

The decision models can also be applied to support different problems with overlapping criteria, as all the instances in different concept selection problems are connected via the semantic relationships defined in the ontology model. This helps designers set up a new concept selection problem by following the semantic relations between the instances. For example, when designing a golf ball collector where three criteria stand_well, light_weight, and easy_to_transport are considered for three concepts HandPusher, HandPicker, and RoboPicker, other criteria might also be suitable. To explore more available criteria, a new selection problem is created in Protégé according to the proposed template and stored in the ontology instance repository, named SltBallCollector. All the ontology instances form a knowledge graph for selection as shown in Fig. 13. With the graph, the designers are guided to the RC golfing machine selection via the criterion easy_to_transport, which is common to both selection problems. By looking at the other criteria connected to the same concepts, the designers could be prompted to include easy_to_manufacture. This example also reflects the reusability of the proposed selection template and of a single decision element (i.e., Criterion). As more selection problem instances are added, more reusable information becomes available.
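Assuming the instance repository were flattened into a simple problem-to-criteria mapping, the lookup described above could be sketched as follows (problem and criterion names follow this section; the mapping itself is a stand-in for the hasCriteria relations in the OWL knowledge graph):

```python
# Sketch of cross-problem criterion lookup over a flattened repository.
problems = {
    "SltGolfMachine":   ["drives_well", "putts_well", "loader_is_robust",
                         "easy_to_transport", "easy_to_manufacture"],
    "SltBallCollector": ["stand_well", "light_weight", "easy_to_transport"],
}

def suggest_criteria(problem):
    """Suggest criteria from problems sharing at least one criterion."""
    own = set(problems[problem])
    suggestions = set()
    for other, criteria in problems.items():
        if other != problem and own & set(criteria):   # a shared criterion links them
            suggestions |= set(criteria) - own
    return sorted(suggestions)

print(suggest_criteria("SltBallCollector"))
```

Here the shared criterion easy_to_transport links the ball collector to the golfing machine problem, so easy_to_manufacture appears among the suggestions.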

Fig. 13
The knowledge graph for concept selection problem

4.3 Adaptation of the Proposed Ontology.

The proposed ontology was designed for concept selection, so the data properties of a concept record the target product, function, and design principle. The ontology includes common decision elements relevant to selection problems. It can be modified or extended by changing the definitions of the elements and their properties, adding new elements, or removing existing ones. For example, if a design requirement probability of jamming is identified, it can be considered as a new criterion for assessing the concepts for the RC golfing machine. The value of a concept against this criterion is a percentage rather than a subjective judgment. To adapt the ontology, the datatype of the data property pfsJudgement of Performance is changed from "fuzzy number" to "float."

People may also use different terms in their problems. For example, when evaluating the novelty of designs, Shah et al. [17] use the terms "idea" and "attribute." In this case, Concept and Criterion can be replaced by Idea and Attribute, as illustrated in Fig. 14(a). Preference is omitted because the weights of attributes are predefined instead of being calculated. Instances can then be created according to the ontology. Figure 14(b) illustrates some of the instances related to one idea Entry #1 (highlighted in purple), including the instance NoveltyCalculation for class DecisionKnowledge, four attribute instances, their weights, and performance.

Fig. 14
Adapting the ontology to novelty assessment: (a) the model and (b) example instances

5 Conclusion and Future Work

Existing concept selection methods are mainly based on multiple-criteria decision-making (MCDM) methods like the analytic hierarchy process (AHP) or the analytic network process (ANP). Although concepts can be re-evaluated with these methods, only the calculation method is reused: the designers have to reset the problem and reorganize the criteria and the concepts. In this work, the proposed ontology wraps up six decision elements into classes, enabling the tool to model concept selection in a reusable manner and allowing the designers to identify the reusable decision instances. This answers RQ1: how to support the reuse of decision elements for concept selection in the design process. Across iterative selection cycles, the tool supports designers in selecting from the existing criteria, concepts, and values according to their particular scenarios. It maintains consistency and enhances decision transparency. This answers RQ2: how to facilitate the reuse of decision elements across selection iterations in the design process. This reusability is particularly beneficial when dealing with similar decision-making scenarios or when modifications to existing decision criteria are required. With the implemented calculation methods, the tool further allows for easy modification and expansion of decision criteria and concepts. The flexibility in reusing and customizing the decision elements and the transparency of the ranking process could also encourage designers to use the tool, in that a more detailed and straightforward selection process could gain more trust from the decision makers [29].

The illustrative example shows the applicability of the proposed tool. To further verify its effectiveness, experiments in real-world decision-making scenarios will be conducted. Feedback from designers will be gathered in terms of usefulness, ease of use, and the tool's impact on decision-making outcomes. We intend to recruit 15 third- or fourth-year mechanical design students. They will first be given the proposed tool to evaluate their designs in the three scenarios in Sec. 4 (stage 1) and then be provided with an MCDM method for the same evaluation (stage 2). Their user experience and the time consumed in both stages will be recorded and compared. For better validation, we would also like to conduct a case study in companies to gather expert designers' opinions of the tool.

This work takes TOPSIS as the fundamental scoring method. However, the introduction or removal of alternatives can change the order of preference of the alternatives, a phenomenon known as rank reversal in decision-making. It occurs not only in TOPSIS but also in other MCDM methods such as AHP and ELECTRE. Using extreme values for normalization and for representing the positive and negative ideal solutions (PIS and NIS) is an effective remedy for rank reversal in TOPSIS [30,31]. This particularly suits scenarios where all the criteria share the same range of values. For instance, the same scale is used for the judgments against the criteria in our example, so the lower and upper boundaries of that scale can be used directly. Otherwise, the decision makers need to predefine appropriate boundaries based on additional information [32] or an information expansion algorithm [33]. An interesting extension regarding this problem would be to introduce a loop into the calculation procedure in which the user is asked whether they want to change the PIS and NIS. This would make decisions across iterations less comparable, but visualizations showing the differentiation in priorities could be added to complement the results.
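To make the fixed-boundary remedy concrete, here is a minimal crisp (non-fuzzy) sketch, assuming benefit criteria that all share a common 0-10 rating scale; the function name and scale are illustrative, not the tool's actual implementation:

```python
import math

def topsis_fixed_bounds(matrix, weights, lo=0.0, hi=10.0):
    """Crisp TOPSIS using fixed scale boundaries as PIS/NIS instead of the
    observed per-column extremes. Because normalization never depends on the
    other alternatives, adding or removing an alternative cannot change the
    remaining scores. Assumes benefit criteria on a shared [lo, hi] scale."""
    scores = []
    for row in matrix:
        # Normalize each rating against the fixed bounds, then weight it.
        v = [w * (x - lo) / (hi - lo) for x, w in zip(row, weights)]
        # With fixed bounds, the weighted PIS is the weight vector itself
        # and the weighted NIS is the zero vector.
        d_pos = math.sqrt(sum((w - vi) ** 2 for vi, w in zip(v, weights)))
        d_neg = math.sqrt(sum(vi ** 2 for vi in v))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness to PIS
    return scores

scores = topsis_fixed_bounds([[8, 6], [5, 9]], weights=[0.6, 0.4])
# Introducing a third alternative leaves the first two scores unchanged.
scores3 = topsis_fixed_bounds([[8, 6], [5, 9], [2, 3]], weights=[0.6, 0.4])
```

With min-max normalization over the observed column extremes instead, the third row would shift the normalized values of the first two and could reverse their ranks.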

Many MCDM methods can be used to build concept selection models, such as AHP, simple additive weighting (SAW), TOPSIS, and ELECTRE. AHP stands out because of its capability to calculate both criteria weights and alternative scores. While pairwise comparison forces users to reflect on the criteria in detail, comparing every pair of factors or alternatives is effort intensive: its computational complexity is O(n²), where n is the number of factors, whereas that of most other methods is O(n). SAW, the most intuitive and straightforward method, is more appropriate for scenarios where the values under different criteria are directly comparable. ELECTRE is better suited to eliminating unqualified alternatives. TOPSIS can incorporate both real data and judgments into the calculation but cannot derive criteria weights. Choosing among these methods is challenging; researchers suggest examining their characteristics. For example, Wątróbski et al. [34] characterize 56 MCDM methods along nine dimensions including preferences, uncertainty, and desired outcome, and Cinelli et al. [35] guide the choice among more than 200 MCDM methods according to a comprehensive set of characteristics. Although focusing on general MCDM problems, these works provide insights for selecting a method for concept selection.
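The gap in elicitation effort between pairwise comparison and direct rating can be illustrated with a small sketch (function names are illustrative):

```python
def ahp_comparisons(n):
    """Pairwise judgments an AHP user must supply for n factors: n(n-1)/2."""
    return n * (n - 1) // 2

def direct_ratings(n):
    """Direct judgments (as in SAW or TOPSIS rating) for n factors: n."""
    return n

# The burden diverges quickly: O(n^2) versus O(n).
for n in (5, 10, 20):
    print(f"{n} factors: {ahp_comparisons(n)} pairwise vs {direct_ratings(n)} direct")
```

At 20 factors a decision maker faces 190 pairwise judgments versus 20 direct ratings, which is why AHP is typically reserved for modest numbers of criteria.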

This work assumes that the criteria and their values have been well formulated. However, customer requirements, a main source of selection criteria, are complex and interconnected. It would be important and interesting to elicit and gather customer requirements, together with customer preferences, from public data such as online product comments, and then to translate them into applicable criteria. Integrating such capabilities would help evolve the tool into a more powerful platform.

Footnotes

2. Stanford University, 2013, "Protégé 4.3 Release," Stanford University, Stanford, CA, https://protegewiki.stanford.edu/wiki/P4_3_Release_Announcement

3. The software tool can be obtained upon reasonable request.

Acknowledgment

The authors would like to thank the anonymous reviewers and the editors for the valuable comments that helped them in improving the quality of this paper.

Funding Data

  • The National Natural Science Foundation of China (Grant No. 62002031).

  • Scientific Research Start-up Fund of Shantou University (Grant No. NTF21042).

  • National Key Research and Development Program of China (Grant No. 2021YFB1714400).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

Appendix

Table 4

Object properties of the ontology

hasConcepts: Links a DecisionKnowledge to a set of Concepts.
hasCriteria: Links a DecisionKnowledge to a set of Criteria.
hasPerformance: Links a DecisionKnowledge to a set of Performances.
hasPreference: Links a DecisionKnowledge to a set of Preferences.
hasResults: Links a DecisionKnowledge to a Result.
hasWeights: Links a Result to a set of Weights.
hasRankings: Links a Result to a set of Rankings.
owningPerformance: Links a Concept to a Performance.
pfsUnderCri: Links a Performance to a Criterion.
owningPreference: Links a Criterion to a Preference.
belongingToCptRnk: Links a Ranking to a Concept.
belongingToCriWgt: Links a Weight to a Criterion.
Table 5

Data properties of the ontology

cptTargetProduct: The name of the product a concept is targeted at. (String)
cptFunction: The description of the function of the product. (String)
cptPrinciple: The design principle of a concept. (String)
criDescription: The description of what aspect a criterion evaluates. (String)
criExpectation: The attribute of a criterion: cost (a lower value is expected, i.e., the lower the better) or benefit (a higher value is expected, i.e., the higher the better). (String)
criUnit: The unit of a criterion's measurement. (String)
preJudgement: The expert's judgment on the importance of a criterion. (Fuzzy number)
pfsJudgement: The expert's judgment on the performance of a concept under a criterion. (Fuzzy number)
rnkRank: The ranking of a concept over all the other compared concepts. (Integer)
rnkScore: The overall score of a concept, combining its performance under all the criteria with the criteria weights. (Float)
wgtFuzzyValue: The fuzzy weight of a criterion. (Fuzzy number)
wgtCrispValue: The crisp weight of a criterion. (Float)

References

1. Prabhu, R., Leguarda, R. L., Miller, S. R., Simpson, T. W., and Meisel, N. A., 2021, "Favoring Complexity: A Mixed Methods Exploration of Factors That Influence Concept Selection When Designing for Additive Manufacturing," ASME J. Mech. Des., 143(10), p. 102001.
2. Scott, M. J., and Antonsson, E. K., 1999, "Arrow's Theorem and Engineering Design Decision Making," Res. Eng. Des., 11(4), pp. 218–228.
3. Wynn, D. C., and Eckert, C. M., 2017, "Perspectives on Iteration in Design and Development," Res. Eng. Des., 28(2), pp. 153–184.
4. Al Handawi, K., Andersson, P., Panarotto, M., Isaksson, O., and Kokkolaras, M., 2021, "Scalable Set-Based Design Optimization and Remanufacturing for Meeting Changing Requirements," ASME J. Mech. Des., 143(2), p. 021702.
5. Guo, X., Huang, Z., Liu, Y., Zhao, W., and Yu, Z., 2023, "Harnessing Multi-Domain Knowledge for User-Centric Product Conceptual Design," ASME J. Comput. Inf. Sci. Eng., 23(6), p. 060807.
6. Burleson, G., Herrera, S. V. S., Toyama, K., and Sienko, K. H., 2023, "Incorporating Contextual Factors Into Engineering Design Processes: An Analysis of Novice Practice," ASME J. Mech. Des., 145(2), p. 021401.
7. Pahl, G., Beitz, W., Feldhusen, J., and Grote, K.-H., 2007, Engineering Design: A Systematic Approach, 3rd ed., Springer, London.
8. Ulrich, K. T., and Eppinger, S. D., 2011, Product Design and Development, 5th ed., McGraw-Hill, New York.
9. Ayağ, Z., 2016, "An Integrated Approach to Concept Evaluation in a New Product Development," J. Intell. Manuf., 27(5), pp. 991–1005.
10. Hayat, K., Ali, M. I., Karaaslan, F., Cao, B.-Y., and Shah, M. H., 2019, "Design Concept Evaluation Using Soft Sets Based on Acceptable and Satisfactory Levels: An Integrated TOPSIS and Shannon Entropy," Soft Comput., 24(3), pp. 2229–2263.
11. Gironimo, G. D., and Grazioso, S., 2022, "Concept Selection for the Preliminary DTT Remote Maintenance Strategy," Fusion Eng. Des., 180, p. 113161.
12. Jing, L., He, S., Ma, J., Xie, J., Zhou, H., Gao, F., and Jiang, S., 2021, "Conceptual Design Evaluation Considering the Ambiguity Semantic Variables Fusion With Conflict Beliefs: An Integrated Dempster-Shafer Evidence Theory and Intuitionistic Fuzzy-VIKOR," Adv. Eng. Inform., 50, p. 101426.
13. Qi, J., Hu, J., Huang, H., and Peng, Y., 2022, "New Customer-Oriented Design Concept Evaluation by Using Improved Z-Number-Based Multi-Criteria Decision-Making Method," Adv. Eng. Inform., 53, p. 101683.
14. Song, W., Niu, Z., and Zheng, P., 2021, "Design Concept Evaluation of Smart Product-Service Systems Considering Sustainability: An Integrated Method," Comput. Ind. Eng., 159, p. 107485.
15. Gruber, T. R., 1993, "A Translation Approach to Portable Ontology Specifications," Knowl. Acquisition, 5(2), pp. 199–220.
16. Borst, W. N., 1997, Construction of Engineering Ontologies for Knowledge Sharing and Reuse, Centre for Telematics and Information Technology (CTIT), The Netherlands.
17. Shah, J. J., Smith, S. M., and Vargas-Hernandez, N., 2003, "Metrics for Measuring Ideation Effectiveness," Des. Stud., 24(2), pp. 111–134.
18. Gosnell, C. A., and Miller, S. R., 2016, "But Is It Creative? Delineating the Impact of Expertise and Concept Ratings on Creative Concept Selection," ASME J. Mech. Des., 138(2), p. 021101.
19. Ranjan, B. S. C., Siddharth, L., and Chakrabarti, A., 2018, "A Systematic Approach to Assessing Novelty, Requirement Satisfaction, and Creativity," Artif. Intell. Eng. Des. Anal. Manuf., 32(4), pp. 390–414.
20. Siddharth, L., Madhusudanan, N., and Chakrabarti, A., 2020, "Toward Automatically Assessing the Novelty of Engineering Design Solutions," ASME J. Comput. Inf. Sci. Eng., 20(1), p. 011001.
21. Huitzil, I., Bobillo, F., Gómez-Romero, J., and Straccia, U., 2020, "Fudge: Fuzzy Ontology Building With Consensuated Fuzzy Datatypes," Fuzzy Sets Syst., 401, pp. 91–112.
22. Bobillo, F., and Straccia, U., 2011, "Fuzzy Ontology Representation Using OWL 2," Int. J. Approximate Reasoning, 52(7), pp. 1073–1094.
23. Liu, D., and Stewart, T. J., 2004, "Integrated Object-Oriented Framework for MCDM and DSS Modelling," Decis. Support Syst., 38(3), pp. 421–434.
24. Hwang, C. L., and Yoon, K., 1981, Multiple Attribute Decision Making, Methods and Applications (Lecture Notes in Economics and Mathematical Systems), Springer-Verlag, New York.
25. Kosky, P., Balmer, R., Keat, W., and Wise, G., 2021, "Chapter 25 – Design Step 3: Evaluation of Alternatives and Selection of a Concept," Exploring Engineering, 5th ed., P. Kosky, R. Balmer, W. Keat, and G. Wise, eds., Academic Press, Cambridge, MA, pp. 523–539.
26. Wang, Y.-M., and Elhag, T. M. S., 2006, "On the Normalization of Interval and Fuzzy Weights," Fuzzy Sets Syst., 157(18), pp. 2456–2471.
27. Calabrese, A., Costa, R., and Menichini, T., 2013, "Using Fuzzy AHP to Manage Intellectual Capital Assets: An Application to the ICT Service Industry," Expert Syst. Appl., 40(9), pp. 3747–3755.
28. Liu, Y., Eckert, C., Yannou-Le Bris, G., and Petit, G., "A Fuzzy Decision Tool to Evaluate the Sustainable Performance of Suppliers in an Agrifood Value Chain," Comput. Ind. Eng., 127, pp. 196–212.
29. Zheng, X., Ritter, S. C., and Miller, S. R., 2018, "How Concept Selection Tools Impact the Development of Creative Ideas in Engineering Design Education," ASME J. Mech. Des., 140(5), p. 052002.
30. García-Cascales, M. S., and Lamata, M. T., 2012, "On Rank Reversal and TOPSIS Method," Math. Comput. Modell., 56(5), pp. 123–132.
31. Aires, R. F. D. F., and Ferreira, L., 2019, "A New Approach to Avoid Rank Reversal Cases in the TOPSIS Method," Comput. Ind. Eng., 132, pp. 84–97.
32. Yang, W., 2020, "Ingenious Solution for the Rank Reversal Problem of TOPSIS Method," Math. Comput. Modell., 2020, p. 9676518.
33. Yang, B., Zhao, J., and Zhao, H., 2022, "A Robust Method for Avoiding Rank Reversal in the TOPSIS," Comput. Ind. Eng., 174, p. 108776.
34. Wątróbski, J., Jankowski, J., Ziemba, P., Karczmarczyk, A., and Zioło, M., 2019, "Generalised Framework for Multi-Criteria Method Selection," Omega, 86, pp. 107–124.
35. Cinelli, M., Kadziński, M., Miebs, G., Gonzalez, M., and Słowiński, R., 2022, "Recommending Multiple Criteria Decision Analysis Methods With a New Taxonomy-Based Decision Support System," Eur. J. Oper. Res., 302(2), pp. 633–651.