Abstract

Existing literature on information sharing in contests has established that sharing contest-specific information influences contestant behaviors and, thereby, the outcomes of a contest. However, in the context of engineering design contests, there is a gap in knowledge about how contest-specific information, such as competitors’ historical performance, influences designers’ actions and the resulting design outcomes. To address this gap, the objective of this study is to quantify the influence of information about competitors’ past performance on designers’ beliefs about the outcomes of a contest, which influence their design decisions and the resulting design outcomes. We focus on a single-stage design competition where an objective figure of merit is available to the contestants for assessing the performance of their designs. Our approach includes (i) developing a behavioral model of sequential decision making that accounts for information about competitors’ historical performance and (ii) using the model in conjunction with a human-subject experiment in which participants make design decisions given controlled strong or weak performance records of past competitors. Our results indicate that participants expend greater effort when they know that past competitors had a strong performance record than when the record was weak. Moreover, we quantify the cognitive underpinnings of such informational influence via our model parameters. Based on the parametric inferences about participants’ cognition, we suggest that contest designers are better off not providing historical performance records if past contest outcomes do not match the expectations they have set for a given design contest.

1 Introduction

1.1 The Role of Information in Design-Under-Competition.

Engineering design-under-competition can be viewed as a contest where the designers are the contestants [1–3]. Designers compete as individuals or in teams to solve design problems with certain design objectives. Their reward (e.g., a prize or the profit from product sale) depends on how well they achieve the design objectives as compared to their competition. Examples include product design competitions in the market [4,5] and organized competitions such as R&D competitions and innovation contests [6,7].

Existing literature in contest theory has established that the design of a contest influences contestant behaviors and, thereby, the outcomes of a contest [8–11]. Several aspects of contest design affect contestant behavior, such as the design of incentives [12–14], contest stages [15–17], and decisions about what and how much information to share with the contestants [18]. Much of the existing work on contest design has focused on incentive design and the design of multistage contests. However, much less attention has been given to other aspects such as how to frame and partition the problem [19], how to characterize contestants [20], what contest-specific information to present to the contestants [18], and its influence on cognition [21,22]. Such aspects become increasingly important when studying engineering design competition scenarios.

The design literature has established that designers’ decisions are influenced by the information that is presented to them [23–25], and how it is presented [26–28], which in turn affects the design outcomes. In the context of design-under-competition, since the designers are the contestants, sharing various types of contest-specific information, such as knowledge about the sponsors of the contest, the reputation of the contest, and the competitors in the contest [6,29], also affects their cognition, behaviors, and outcomes. Consider, for example, the publicly available data on a crowdsourcing platform called GrabCAD [30,31]. GrabCAD hosts crowdsourced engineering design competitions via sponsor organizations such as NASA and GE. The publicly available GrabCAD data included information about past contests, such as the contest sponsors, the past winning solutions, the associated winners, and the overall contestants [30]. The availability of such information has the potential to impact contestants’ beliefs about the outcomes of the current contest. Similarly, information sharing in competitive contracting influences design decision making, affecting outcomes, design costs, and sustainability [32].

There is extensive literature in contest theory on information sharing in contests [33–35]. It is established that contests whose past competitors have had “strong” performance records are considered more competitive than contests whose past competitors have had “weak” performances [36,37]. However, in the context of engineering design, there is a lack of understanding of the cognitive processes that underpin such behaviors and contest outcomes. Since design cognition greatly influences design decisions, understanding cognition in design-under-competition can help contestants make better decisions as well as improve design outcomes. To address these gaps, our objective in this paper is to quantify the influence of information about competitors’ past performance on designers’ beliefs about the outcomes of a contest, which influence their design decisions, and the resulting design outcomes.

1.2 Scope of this Study.

There are several types of contests studied in the existing literature, broadly categorized into (1) naturally occurring contests due to disputes or conflicts, such as market competitions, lobbying, wars, and court trials, and (2) competitions organized by contest sponsors to achieve some goal, such as public procurement, R&D competitions, innovation contests, scholarships, and sports [38]. In the context of engineering design, both naturally occurring contests, such as product design competitions between firms [4,5], and organized contests, such as crowdsourcing contests, exist [6,7]. Each type of contest has nuances that may influence designer behaviors differently, which creates the need for several distinct models of design behaviors under competition. Moreover, engineering-design-specific characteristics such as the type of design problem [39], the type of design process leveraged [40,41], whether there are individual designers or design teams [42], design cognition [43,44], and the knowledge and expertise of the designers also influence the design outcomes [23] and could thus influence the outcomes of the contests.

Furthermore, various classes of design problems can be utilized for a design competition. Consider the class of design optimization problems [45], which typically focus on physics-based metrics of performance, such as the weight or strength of a material, that can be evaluated using objective measures. On the other hand, design creativity problems [46] cannot be evaluated a priori or by the contestant. Thus, historical information about competitors’ past performance is one of several factors that influence design behaviors and contest outcomes. Other factors, such as the type of design problem, can also affect the availability or relevance of historical information for design behaviors and contest outcomes. For example, historical information about objective metrics is typically available in product design competitions between firms via past product versions. However, subjective evaluations such as a product’s creativity or novelty may not be available, and such information may not be relevant to another design contest.

Clearly, there is a broad range of engineering design contest types and problem types. Thus, for this study, we focus on a specific scenario by selecting a specific type of contest, the design problem, the design process, and the decisions made by the designers. Specifically, we focus only on organized contests where availability of information such as past performance of competitors can be controlled by the contest organizers along with other contest design aspects such as rewards, stages, participant pool, and costs. For example, consider a periodic innovation contest organized by NASA for the design of waste recycling systems in space. In such a scenario, information about past winning designs may influence contestants’ design thinking and contest organizers need to be able to make an informed decision about the impact of sharing past performance information.

Assumptions: To formalize such a scenario, we first assume that the contest is nondynamic and nonstrategic. This assumption implies that a designer does not get to observe the design decisions being made by their competitors in the current contest. Consequently, their decisions are not in direct response to the decisions being made by their competitors, and thus, they are not strategic from a game-theoretic standpoint. Second, we assume that the only signals players receive about the competition are via the past performance records of previous competitors. This implies that we do not consider the effect of learning about competitors in a given contest or updating one’s state of knowledge about the opponent. These two assumptions are made to isolate the effect of the past performance history of the contest on a designer’s decisions. Third, we focus on a single-stage single-prize contest. Since the incentives and contest stages are also factors that influence contestant behaviors, they are controlled for in this study. Fourth, we focus on a class of design problems where contestants have an objective performance measure that they can use to evaluate their designs. Contestants can generate new designs within a potentially large but bounded design space and can evaluate the design performance at an a priori known cost. Resource, cost, and time constraints prevent designers from exhaustively searching the design space. Therefore, they must efficiently explore the design space while minimizing the incurred cost of design and maximizing the performance. Fifth, the engineering design process is assumed to be a cognitive sequential information acquisition and decision-making (SIADM) process [23] (further details of the specific assumptions are provided in Sec. 3).

1.3 Approach and Rationale.

Our approach to address the research objective consists of two steps. First, we modeled the influence of the past performance record of competitors on a designer’s cognition and information acquisition decisions. We extended our previous framework for modeling a sequential information acquisition activity in an engineering design process [23] by considering a design contest. The framework enabled us to instantiate the cognitive model for this study. Second, we designed and executed a behavioral experiment where participants were given information about past competitors’ strong or weak historical performance records.

Since cognition is not directly observable, there is a need for computational models that quantify the influence of information such as past performance records on designers’ cognition and decision-making behaviors. Such models can aid in developing a theory of mind about the designers [47,48]. Theory of mind is the human ability to infer others’ mental models, preferences, and intent [49]. While humans can infer cognition via the theory of mind, there is a lack of computational tools that can do so. As design cognition strongly influences behaviors and outcomes, inferring contestants’ cognition can aid contest organizers in making better decisions while designing such contests to influence design behavior change [50,51]. This can result in better predictions about the outcomes of design contests as well as achievement of desired contest outcomes. Such models can also help competitors in a market make informed decisions.

The paper is structured as follows. We first discuss the details of the experiment in Sec. 2. Then, we describe the model in Sec. 3. It is based on the assumptions that individuals strive to maximize their expected payoff and use the Bayesian approach to update their cognitive state of knowledge based on new information. We utilized the experimental data to estimate the parameters in the model and to test hypotheses about the influence of past performance records on design cognition, behavior, and outcomes. The results are discussed in Sec. 4. Finally, we discuss the implications of this study and the validity of the modeling assumptions in Sec. 5. It is to be noted that references to a contestant, designer, individual, and participant are, respectively, made in the context of a contest, the design scenario, the computational model, and the behavioral experiment.

2 Experimental Study

In this section, we describe the experimental study. Specifically, we discuss the problem statement shown to the participants, our design of experiment (DOE) rationale, and the formulation and operationalization of the hypotheses. Throughout this section, the participants in the behavioral experiment are considered the contestants of a design contest.

2.1 The Track Design Game.

Participants were told that they would participate in a series of contests organized by a firm interested in designing roller coasters. In every contest, they were required to design a track. They were informed that they were competing against an opponent while solving the track design problem described in Sec. 2.1.1. The contestant who achieved a higher value of the design objective for a given contest won the corresponding prize amount for that contest.

In reality, the “opponent” was a modeled agent endowed with a past performance record. The modeled agent had either a strong or a weak past performance record. Moreover, the participants either were or were not given information about this performance record. We designed the competitor as a modeled agent to achieve the experimental control needed to quantify the influence of historical information about competitors’ performance on a participant’s design behaviors and outcomes. The modeled agent was also designed to be consistent with its past performance while competing against a participant in a given contest. Neither the participant nor the modeled agent could observe the design decisions of the other. Further details about the design of the agent and the controlled factors are provided in Sec. 2.3.

2.1.1 The Task.

We utilized the track design problem statement from our previous study, which was designed to be representative of a design search problem [23]. The task was to design a roller coaster track where the objective E(x) (= f(x)) of the designer was to “maximize enjoyment experienced by the rider of the track.” To achieve the objective, a participant needed to design a circular valley segment of the track with an appropriate width w (= x) as shown in Fig. 1. The participants were not provided an explicit mathematical form of the “enjoyment function” E(w). The rationale was that in real design scenarios, design objectives are a combination of qualitative and quantitative factors that seldom have a mathematical form explicitly known to the designers. What the designers may know is the influence of various design parameters, such as the width w of the track, on the design outcome, that is, the “enjoyment.” Thus, the participants were informed that a small valley width would make the ride uncomfortable due to high g-forces and that a wide valley has a large radius of curvature, that is, a “flat” track. Both cases result in reduced enjoyment, which implies that there is an optimal width w for which the enjoyment for the rider is maximized.

Fig. 1
The actual interface of the track design game used by the participants in the experiment. The interface reflects the moment after a participant executes a “try” and submits a width value of 489, for which the enjoyment value of 26.274 is shown to the participant.
We designed the objective function E(w) to satisfy requirements such as concavity, nonnegativity, parameterization, and asymmetry in order to control for factors, such as incentivization, intuition, guessing, and problem difficulty, that could interfere with the experimental results. Considering these characteristics, we modeled the enjoyment function as a log-normal function. The enjoyment E(w) of the track is defined as
(1)
The maximum value of the enjoyment function occurs at the width value wmax. We modeled wmax as a function of the track height H and a factor f such that wmax = fH. The corresponding maximum enjoyment value Emax is
(2)

The function is normalized to have a maximum value dependent on the height of the track. We did so because, intuitively, a “taller” ride should have a higher maximum possible enjoyment. To reduce the effect of learning about the optimal width value wmax as a function of height H, we introduced a factor f that was uniformly sampled for every new contest (refer to Sec. 2.3) from the range [0.6, 0.9]. In the experiment, the height values H were uniformly sampled for every new contest from the range of 600 to 1000 units. Thus, Emax values range between 70 and 170. The Emax range was carefully chosen to ensure that participants did not develop misconceptions about the maximum achievable enjoyment value. For example, participants may believe that the maximum achievable enjoyment value for a track is 100 due to the special significance of this number as a full score. By randomizing the objective function parameters, we reduced the influence of such preconceived notions.
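To make the contest-generation procedure concrete, the following sketch samples the randomized parameters described above. Since Eqs. (1) and (2) are not reproduced here, the enjoyment function below is a hypothetical log-normal-shaped stand-in with its peak at wmax; its shape parameter s and normalization are assumptions for illustration only, not the paper’s exact form.

    import numpy as np

    def sample_contest(rng):
        """Sample the randomized parameters of one contest (Sec. 2.1.1)."""
        H = rng.uniform(600, 1000)   # track height, uniform on [600, 1000] units
        f = rng.uniform(0.6, 0.9)    # factor relating height to the optimal width
        return H, f, f * H           # w_max = f * H is where enjoyment peaks

    # Hypothetical stand-in for Eq. (1): a log-normal-shaped curve that attains
    # its maximum E_max at w = w_max; the true normalization is given by Eq. (2).
    def enjoyment(w, w_max, E_max, s=0.3):
        return E_max * np.exp(-0.5 * (np.log(w / w_max) / s) ** 2)

    rng = np.random.default_rng(0)
    H, f, w_max = sample_contest(rng)
    print(enjoyment(w_max, w_max, E_max=120.0))  # -> 120.0, the peak value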

Participants were expected to iteratively search for width values w of the track that maximize the enjoyment experienced by the rider. A try is defined as the submission of one w value. For each try, participants incurred a cost, and they were shown the corresponding enjoyment value, that is, the value of the objective function. For example, Fig. 1 shows that a participant has tried eight times, at a cost of 200 cents. Further details on the design of the incentive structure are provided in Sec. 2.3. A table and a graph of the search data were also provided to the participants to reduce the cognitive load of having to remember their search history. The participants were also provided with an initial height H of the track and were informed that the circular valley has a constant depth of 50 units. Participants were explicitly provided the feasible design space in this study, and the information appeared as “Try values for width greater than X” as shown in Fig. 1. We did so for experimental control, to reduce the influence of problem-specific information on a participant’s design behaviors and outcomes. In other words, there is reduced variance in participants’ knowledge about the design space. (Refer to our previous work [23], on which we also build the model in Sec. 3, where we consider the influence of participants’ knowledge about the design space.)

2.2 Contest-Specific Information: Past Design Ratings.

The information about competitors’ past performance record was termed the “design ratings” given by the firm to the design solutions generated by the competitors in the past. A design rating is the firm’s assessment of the goodness of the competitors’ past design solutions. The rating was given on a Bad-Average-Fair-Good-Great scale, where “Bad” is the worst possible rating and “Great” is the best possible rating. If information about the competition history was provided, then participants were shown a histogram of the design ratings of the best design solutions submitted by the competitors in the past 10–15 contests, as shown at the top-right in Fig. 1. Such information was intended for the participants to develop judgment about the competitiveness of the contest via past performance records. Moreover, to control for the effect of design fixation, we did not provide information about the design artifact. Instead, the design ratings provided qualitative information about the design solutions without explicitly revealing the past designs. The participants also did not observe any real-time actions or decisions made by an opponent in the currently played contest.

To generate past performance data, we utilized a quantitative measure of design ratings and created a modeled agent’s performance distribution. From such a distribution, performance data were sampled and then converted to the qualitative scale seen by the participants. We note that the performance distribution of the competitors is static, implying that it is not updated across contests.

For design search problems, the true design quality achievement (TDQ) can theoretically be quantified as the fraction of the maximum achievable objective value that a contestant attains:

TDQ = max_{1 ≤ i ≤ T} E(w_i) / E_max    (3)

In reality, assessments of design quality in the context of design search problems are not trivial. The firms organizing design contests do not themselves know the maximum achievable design objective value for a problem. However, we assumed that the contest organizers (in this case, the firm) are capable of making an accurate assessment of the true design quality of the design solutions generated by the competitors. We term this assessment by the firm of the true design quality achieved by an individual the “quantified design rating.” For example, a quantified design rating of 90% for a track design problem with a maximum enjoyment value of 120 implies that a player achieved an enjoyment value of 108.

For the competitors’ past performance data, quantified design ratings were sampled from a Gaussian distribution with mean design rating μopp and standard deviation σopp = 3%. As discussed earlier, such ratings are purely theoretical. In order to realistically reflect the past assessments by the firm of the design solutions generated by the competition, each quantitatively sampled rating r was categorized into the qualitative scale through a mapping scheme such that r ≥ 95% is “Great,” 90% ≤ r < 95% is “Good,” 85% ≤ r < 90% is “Fair,” 80% ≤ r < 85% is “Average,” and r < 80% is “Bad.” It is to be noted that the participants were not aware of the quantitative design ratings or the mapping scheme. Such metrics were developed for internal analysis by the authors for the various experimental control scenarios described in Sec. 2.3.
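The sampling and mapping logic above can be summarized in a short sketch. Ratings are expressed here as fractions of TDQ, an assumption consistent with the percentage scale above; the function names are ours.

    import numpy as np

    def sample_ratings(mu_opp, n, sigma_opp=0.03, rng=None):
        """Draw n quantified design ratings from N(mu_opp, sigma_opp^2) (Sec. 2.2)."""
        rng = rng or np.random.default_rng()
        return rng.normal(mu_opp, sigma_opp, size=n)

    def to_qualitative(r):
        """Map a quantified rating to the Bad-Average-Fair-Good-Great scale."""
        if r >= 0.95:
            return "Great"
        if r >= 0.90:
            return "Good"
        if r >= 0.85:
            return "Fair"
        if r >= 0.80:
            return "Average"
        return "Bad"

    # A 'strong' history uses mu_opp in [0.95, 0.99]; a 'weak' one uses [0.80, 0.84].
    history = [to_qualitative(r) for r in sample_ratings(mu_opp=0.97, n=12)]
    print(history)  # e.g., mostly 'Great' with an occasional 'Good'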

The quantitative distribution utilized to generate past performance design ratings was also used to sample the competitor’s (modeled agent’s) enjoyment value for a given contest. The evaluated performance value of the modeled agent was then utilized to decide whether a participant won or lost a given contest. When a participant decided to stop their search in a contest, their performance as well as their competitor’s performance was shown, and the winner was displayed for the contest.

2.3 Experiment Design.

The experiment was a within-subjects study in which each participant experienced three levels of information about their competitors: a “strong” past history, a “weak” past history, or no information. We did so because we wanted to infer each individual’s change in behavior based on the past performance information of a contest. This was important for our model, as discussed in Sec. 3, because the model parameters quantify individual-specific behavioral aspects, and we do not aggregate information at a group or population level in the model.

The experiment involved a total of 36 participants. These participants were undergraduate and graduate students at Purdue University, predominantly from mechanical engineering backgrounds. There were 14 females and 22 males. The experiment was divided into two parts, namely, with information (WI) of a “strong” or a “weak” past history, and without information (WOI) of the past history of the competitors. As the names suggest, in the WI part the information about the competitors’ past performance was provided, and in the WOI part it was not. The WOI part had 10 contests and the WI part had 20 contests. Overall, every participant played a total of 30 contests. The experiment was executed with the two possible orders (WI-WOI and WOI-WI) of the two parts. Each participant experienced only one of the two orders. Such ordering of the tasks was done to eliminate order effects [52].

A mean design rating μopp in the range of [80–84]% was utilized for generating a “weak” competition history, and a range of [95–99]% for generating a “strong” competition history. These ranges were chosen based on observations of the past performances of human subjects in our design search problems and their achievement of true design quality (TDQ). It might seem that, in reality, a design rating in the range of [80–84]% is a “strong” performance. However, for such a search problem, we observed in our pilot experiment that human subjects are able to achieve this quality (TDQ) in 2–3 tries on average. For such low effort, we consider this range of TDQ achievement to be a “weak” performance.

For the WI part, we randomized a total of 10 contests with strong and 10 contests with weak past performance information about the competitors, for a total of 20 contests. Thus, overall, every participant played ten contests each with strong past performance information, weak past performance information, and no information about past performance. We minimized the effect of contest repetition on learning about the solution space by randomizing the parameters of the objective function. Thus, every contest a participant played was randomly generated such that it was not correlated with previously generated solutions in a different contest (refer to Sec. 2.1.1). Moreover, we randomized strong and weak performance information within the WI part to reduce the anchoring effects of successively presenting weak (or strong) competition histories; such successive presentation might also create an apparent belief of a high (or low) probability of winning, further compounding the anchoring bias with the gambler’s fallacy. With 36 participants, we collected data from 1080 contests, or 360 for each of the three conditions. This sample size of 360 per condition is greater than the minimum of 64 suggested by an a priori power analysis conducted using G*Power [53], assuming a small effect size (0.2) of past performance information on design behaviors and 95% power for repeated-measures ANOVA hypothesis tests in a within-subjects study with a low correlation (0.2) among estimated parameters from the same individual. To reiterate our objective, we want to study the influence of information about the past performance of competitors on an individual’s design behaviors. By designing experimental treatments in which strong, weak, or no information about past performance is provided, we can generate controlled data sets of participants’ decision-making behaviors under the various treatments.
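One participant’s 30-contest schedule can be generated as in the sketch below, which interleaves the strong and weak WI contests at random and counterbalances the order of the two parts. The function and labels are illustrative, not the authors’ code.

    import random

    def contest_schedule(order="WI-WOI", seed=0):
        """Build one participant's 30-contest schedule (Sec. 2.3)."""
        rng = random.Random(seed)
        wi = ["strong"] * 10 + ["weak"] * 10
        rng.shuffle(wi)              # randomize strong/weak order to reduce anchoring
        woi = ["none"] * 10          # WOI part: no competitor history shown
        return wi + woi if order == "WI-WOI" else woi + wi

    print(contest_schedule("WOI-WI"))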

The incentive structure for the experiment was designed as follows. The prize for winning a contest was set at $7. The cost of a try was set at $0.25. The prize-to-cost ratio was deliberately high to reduce the influence of a high cost of experimentation on design behaviors. For the payments, the net gain or loss from three of the 30 contests, chosen at random, was paid out. This was done in order to minimize wealth effects [54]. The theoretical maximum net gain was calculated to be $20.25. This calculation considers the best-case scenario in which the participant tries once, costing them $0.25, and wins the contest, giving a maximum gain of $7 − $0.25 = $6.75 per contest, and $6.75 × 3 = $20.25. Moreover, participants were given a show-up fee of $5. Theoretically, participants could therefore earn a maximum of $25.25 for a session that lasted approximately 75 min.

2.4 Metrics Utilized for Hypothesis Formulation and Testing.

We summarize the dependent and independent variables utilized in this study in Table 1. These variables are used to test the experimental hypotheses discussed in Sec. 2.5. Moreover, there are several control variables, such as the cost of the effort C and the prize of the contest π, which are held constant in the study. For further details regarding experimental control, refer to Sec. 2.3.

Table 1

The variables of the behavioral experiment, their type, quantification, and description

Variable | Variable type | Quantification | Description
Competitors’ past performance record | Independent variable | A Gaussian distribution with mean design rating μopp and standard deviation σopp = 3% | A mean design rating μopp between 95% and 99% is considered a strong past performance, and μopp between 80% and 85% a weak past performance. Refer to Sec. 2.2 for further details.
Belief about the competitors’ performance in a contest (cognition) | Dependent variable | Belief is quantified as a Gaussian distribution with parameters [μb, σb] | Belief about an event is defined as a probability distribution over the outcomes of the event. Refer to the discussion of the model parameters in Sec. 3 for details on the belief about the competitors’ performance.
Effort (behavior) | Dependent variable | Number of tries T | The number of iterations in a sequential design process is considered as the effort.
Performance (outcome) | Dependent variable | Maximum enjoyment value E | Performance is measured using the maximum enjoyment value achieved by a participant in a contest, normalized according to Eq. (3).

We describe the qualitative and quantitative design ratings in detail in Sec. 2.2. The strong and weak past performance records were created by utilizing a modeled agent with a Gaussian performance distribution with parameters [μopp, σopp]. It is to be noted that an individual’s belief about the historical competition was also modeled as a Gaussian distribution but with parameters [μb, σb]. The parameters [μopp, σopp] served as independent variables to vary the competitors’ past performance record, while [μb, σb] were the dependent variables estimated as model parameters using the experimental data. Refer to the Appendix, where we illustrate how we leveraged the experimental data with our developed model. In the context of a design search problem, we refer to the outcome of an individual’s decision-making behavior, that is, their decision to stop acquiring information, as their effort. The individual’s effort was measured as their number of tries T in a design search problem.

2.5 Hypotheses Operationalization.

We list all the hypotheses and their corresponding operationalization in Table 2. We recall the discussion in Sec. 1 and reiterate that competition-specific information influences design outcomes (H1), designer behaviors (H2), and cognition (H3). Hypothesis H3 is formulated based on our modeling consideration that information about competitors’ past performance influences a contestant’s belief about their competitors’ performance. Such a belief represents the hidden mental state of the designer, that is, their cognition. In the following, we discuss our hypothesis formulation.

Table 2

Hypotheses and their corresponding operationalization based on the influence of competitors’ past performance on an individual’s efforts, performance, and beliefs

H1 – Outcome-specific hypothesis: Competitors’ past performance information influences a contestant’s performance in a design contest.
H1.1*: The maximum enjoyment value achieved by a participant in a contest is higher when they are told that competitors had a strong past performance record (μopp between 95% and 99%) than when they are told that competitors had a weak past performance record (μopp between 80% and 85%).
H1.2*: The maximum enjoyment value achieved by a participant in a contest is higher when no information is given about the competitors than when they are told that competitors had a weak past performance record (μopp between 80% and 85%).
H1.3*: The maximum enjoyment value achieved by a participant in a contest is lower when no information is given about the competitors than when they are told that competitors had a strong past performance record (μopp between 95% and 99%).

H2 – Behavior-specific hypothesis: Competitors’ past performance information influences a contestant’s efforts in a design contest.
H2.1*: The number of tries (T) by a participant is higher when they are told that competitors had a strong past performance record (μopp between 95% and 99%) than when they are told that competitors had a weak past performance record (μopp between 80% and 85%).
H2.2*: The number of tries (T) by a participant is higher when no information is given about the competitors than when they are told that competitors had a weak past performance record (μopp between 80% and 85%).
H2.3*: The number of tries (T) by a participant is lower when no information is given about the competitors than when they are told that their competitors had a strong past performance record (μopp between 95% and 99%).

H3 – Cognition-specific hypothesis: Competitors’ past performance information influences a contestant’s belief about the competitors’ achievement of the design objective value in a contest.
H3.1*: The μb value estimated for a participant when they are told that their competitors had a strong past performance record (μopp between 95% and 99%) is higher than the μb value estimated when they are told that their competitors had a weak past performance record (μopp between 80% and 85%).
H3.2*: The difference between the μb value estimated for a participant when they do not know that their competitors had a strong past performance record (μopp between 95% and 99%) and the μb value estimated when they do not know that their competitors had a weak past performance record (μopp between 80% and 85%) is zero.

We formulated H1 to investigate the influence of competition history on design contest outcomes. In the context of a design contest, we consider the quality of the design solution as the design performance. In the experiment, the maximum enjoyment value achieved by a participant is considered representative of their design performance. We based H1.1* and H1.3* on existing literature in sports, where competition against competitors with strong past performance results in better player performance than competition against those with weak past performance [55,56]. Conversely, we operationalized hypothesis H1.2* on the expectation that participants would behave conservatively when the competition history was unknown, which would result in higher performance than when the competitors were known to have a weak past performance record.

We formulated H2 to further investigate the behavioral implications of sharing competition-specific information. The formulation of H2 is based on existing literature on information sharing in contests, which shows significant over-expenditure of effort (compared with theoretical predictions) when a strong competition history is known [18,57,58]. This resulted in three operationalized hypotheses, namely, H2.1*, H2.2*, and H2.3*.

H3 was formulated to investigate the cognitive implications of sharing past performance information. Our SIADM-C model, as discussed in Sec. 3.2.2, quantifies the cognitive influence of past performance information on a participant’s SIADM behaviors via a belief (probability distribution) about the competitors’ design performance. We use the model parameters to operationalize H3 as H3.1* and H3.2*. We note that while H1 and H2 are tested using only experimental data, testing H3 requires model-based inferences. This is because testing H3 requires data on an individual’s cognition about the competitiveness of the contest, which is not readily available from experimental observations alone. Thus, a cognitive model of an individual’s decision making is required. Furthermore, such a model provides explainability of decision-making behaviors: testing H3 can provide insight into why an individual made the observed decisions, whereas testing H1 and H2 provides insights only into the contributing factors that influence design outcomes.

3 A Descriptive Model of Sequential Information Acquisition and Decision-Making Process Under Competition

In this section, we abstract the design contest scenario utilized in the experiment and describe our specific modeling choices for such a scenario, including the contest type, the problem type, and the individual’s type, in line with our experiment (Sec. 3.1). Then, we formulate a cognitive model of sequential information acquisition and decision making. We describe our modeling choices to represent a player’s state of knowledge, the decision-making process for a design optimization scenario, and the causal influences of providing past performance information on a player’s decision-making process (Sec. 3.2). To do so, we leverage Bayesian causal modeling, which enables us to inversely infer the modeled parameters given data (Appendix).

3.1 The Design Contest Scenario.

In order to model a design contest scenario, we abstracted the class of design problems, the activities of the designer as a contestant, and the type of contest in line with our experiment. In the following, we make contest-specific, problem-specific, and individual-specific modeling choices by considering a single-stage single-prize contest for engineering design problems, a parsimonious contest scenario that serves as a starting point for computational modeling of design-under-competition.

3.1.1 Contest-Specific Modeling Considerations.

In this study, we have modeled the design contest by assuming that the contestant is competing in a nondynamic and nonstrategic contest. We made this assumption for the following reasons. First, a dynamic game would imply sequential actions between players, which typically is not the case for design competitions where individuals or teams work independently toward design objectives. While the decisions made by the designers are sequential in the context of design iterations, they are not sequential in terms of turn taking. Thus, our contest is nondynamic. Second, the term “strategy” has a specific significance in the game theory literature, where it refers to the response of a player to an opponent’s actions. In the context of engineering design under competition, designers may not have information about the design decisions being made by their competitors. Thus, from a game-theoretic standpoint, a design contest may not be “strategic” if information about competitors’ responses is unavailable. Third, we wanted to control for the effect of participants learning about each other in real time, which also influences contestant behaviors, since the focus of this study is to understand the influence of competition history.

Moreover, the information about the past performance records of contests typically comprises the best past design solutions generated by the winners. Such information influences a contestant’s belief about the quality of the best competing solutions that may be generated in a contest and, by extension, their belief about the best competitor in the “crowd” or contestant population. Such information influences a contestant’s design decisions. There are several other contest-specific factors that influence participant behaviors and contest outcomes, such as the rewards, the reward structure, the number of stages of the competition, as well as the reputation of the contest organizers [12–14,59]. For the purpose of this study, such factors are assumed to be held constant in our modeling choices, in consistency with the design of our experiment.

3.1.2 Problem-Specific Modeling Considerations.

A class of design problems was considered where designers are required to optimize a given design objective with a clear figure of merit. We assume that the figure of merit is an objective quantity that measures the performance of a system, as opposed to a subjective quantity such as the novelty or creativity of the outcome. For example, crowdsourcing contests organized by NASA provide a clear figure of merit, such as the weight of a given design artifact, that needs to be minimized. In such scenarios, designers typically utilize an engineering design process where they perform information acquisition activities, such as executing simulation models and experiments. In such activities, designers make decisions about what new information to acquire and when to stop acquiring information. Such information acquisition decisions heavily influence design outcomes and, consequently, the success of a design contest.

We considered a design scenario where a designer has a design x that affects the design performance f(x). The designer’s objective is to achieve the best design outcome. The designer does not explicitly know the mathematical relationship between the design variables and the design outcome, i.e., the function f(x). However, they may know the qualitative relationship between the design x and the design outcome f(x) due to factors such as their domain knowledge. In such a scenario, a designer needs to acquire information about the impact of design x on the design outcome f(x). Such information can be acquired by running (physical or computational) experiments, which incur a certain cost. Moreover, the information can be acquired sequentially or in parallel. In Sec. 3.1.3, we make modeling choices about how an individual acquires information.

We assumed that the designers are aware of the feasible design space and the qualitative relationship between the design variables and the design outcome. We made such an assumption to control for the influence of domain knowledge on designer cognition, behaviors, and outcomes in our experiment. We also aligned the modeled scenario with our experiment by ensuring that the function f(x) is unknown to the participants. Further details are provided about the design of the experiment in Sec. 2.

3.1.3 Individual-Specific Modeling Considerations.

An individual’s information acquisition process can be broadly categorized into sequential or parallel processes [60]. An information acquisition process is sequential when information is acquired in steps, and in each step, the acquired information is used to update prior knowledge, resulting in a new state of knowledge at the end of that step. Hence, the information acquired in a sequential process affects subsequent information acquisition decisions. For example, when a designer decides what next experiment to conduct based on the result of previous experiments, the process is sequential. In parallel processes, all acquired information is analyzed at the end of the process [60]. For example, the information acquisition process is parallel when a designer executes a preplanned set of experiments and analyzes the results of the entire set at the end. Within the context of engineering systems design, we recognize that both sequential and parallel information acquisition processes exist. However, in this study, we focused on modeling a single designer as a decision-maker who sequentially acquires information to search for an optimal design solution.

In our previous work [23], we modeled an individual’s sequential information acquisition and decision-making (SIADM) behavior. The SIADM framework consists of three main activities: acquiring information, processing information, and making decisions about where to search and when to stop the search. These activities are repeated over a sequence of steps, t = 1, …, T. Any sequential information acquisition activity in the design process can be represented using this framework.

In this study, we leverage the SIADM framework to illustrate the influence of contest-specific information on an individual’s SIADM process. Thus, we call the model developed in this study the sequential information acquisition and decision-making under competition (SIADM-C) model; we summarize the underlying SIADM model from our previous work in Sec. 3.2.3. We assumed that the cost associated with acquiring information is independent of the information that is acquired. That is, the value of the “next x” to choose and the experiment cost did not influence each other. Moreover, we assumed that the decision to choose x is a problem-specific decision that is not influenced by contest-specific information, such as an opponent’s historical performance record. The decision to stop, on the other hand, is influenced by an opponent’s historical performance record. For example, a contestant may decide to stop at the very beginning of (i.e., not participate in) a contest if they know that their opponent’s history is “very strong.” The decision to stop the search influences the total cost incurred for the search problem, that is, the greater the number of experiments, the higher the cost. We controlled for the variability of the experimental costs by assuming that the cost associated with each information acquisition step is constant.

3.2 Information Acquired at Each Step.

At each decision-making step, t = 1, 2, …, T, the participant chose a design Xt and received information about the value of the objective function to be maximized:

Y_t = f(X_t)    (4)

They also decided whether to stop or not at time-step t, St = 1 or 0.
We assumed that an individual begins the SIADM-C process at step t = 0 with some initial information history H0, which includes a single design X0 and the associated performance Y0 = f(X0). At t = 0, they were given a choice to enter the contest or not (S0 = 1 or 0), which is a special case of the stopping decision they consider at any other time-step. Thus,

S_t ∈ {0, 1},  t = 0, 1, …, T    (5)

The information history Ht that the individual has by the end of step t is

H_t = {(X_0, Y_0), (X_1, Y_1), …, (X_t, Y_t)}    (6)

The best performance (quality) Qt of the individual at time-step t is given by

Q_t = max_{1 ≤ i ≤ t} Y_i    (7)
It is to be noted that the initial information history H0 at time-step t = 0 is not considered to calculate Qt as participants did not expend any effort for the given information. In other words, if participants did not enter the contest, their best quality is null.
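As a minimal illustration of Eqs. (4)–(7), the sketch below tracks the information history and the best quality, with the convention that the freely given (X0, Y0) is excluded from Qt. The class and names are ours, not from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class History:
        """Information history of the SIADM-C process, Eqs. (4)-(7)."""
        x0: float                                 # freely given initial design X_0
        y0: float                                 # and its performance Y_0 = f(X_0)
        xs: list = field(default_factory=list)    # designs X_1, ..., X_t
        ys: list = field(default_factory=list)    # performances Y_1, ..., Y_t

        def record(self, x, y):
            """Append (X_t, Y_t) after one more try."""
            self.xs.append(x)
            self.ys.append(y)

        @property
        def best_quality(self):
            """Q_t = max of Y_1..Y_t; None if the contest was never entered."""
            return max(self.ys) if self.ys else None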

3.2.1 The Type of an Individual.

We define the type θ of an individual such that it (i) fully specifies their prior state of knowledge about the opponent, the design objective, and how they are represented in the model, (ii) influences how they updated their state of knowledge after observing Ht, (iii) influences how they decided to acquire information at each time-step, and (iv) influences how they decided to stop. In what follows, we have made specific modeling choices, trying to be parsimonious (to keep the number of model parameters as small as possible), while taking into account some of the cognitive limits of humans. Such a definition of an individual’s type enables us to incorporate model parameters that encapsulate our beliefs of the behavioral characteristics of individuals such as (i)–(iv) discussed above. Since the type θ of an individual encapsulates our beliefs about an individual, we leverage experimental data as evidence to infer the type θ of an individual via model parameters. For further details on the inference of the type parameters, refer to the  Appendix.

We utilized our previous work [23] to model (i) an individual’s state of knowledge about the objective function, (ii) how they decided to choose the “next x,” and (iii) how they updated their state of knowledge about the objective function. It is to be noted that these activities are problem-specific. However, an individual’s state of knowledge about the opponent’s history and their decision to stop are a part of their contest-specific decision-making, and its modeling is an extension to our previous work.

3.2.2 Modeling an Individual’s State of Knowledge.

We have modeled the influence of providing information about the historical performance record R of an opponent as follows. By observing past information, an individual develops a “belief” about the best solution B that the opponent is capable of generating in a contest. We modeled the belief about the best solution B as a sample from a Gaussian distribution with mean best performance μb and standard deviation σb:

B ~ N(μ_b, σ_b²)    (8)

where μb and σb are hyperparameters that are a part of the individual’s type θ. Thus, μb and σb are individual-specific as well as cognitive in nature, as they abstract an individual’s mental state about their opponents in the form of belief parameters.

3.2.3 Summary of Our Previous Work.

As in our previous work [23], we modeled an individual’s belief about the objective function as a Gaussian process (GP):

f(x) ~ GP(m(x), c(x, x′))    (9)

where m and c are the mean and covariance functions.
We utilized a concave mean function m(x) to model the prior belief about the objective function, given by

m(x) = a x² + b x + c,  a < 0    (10)

where x ∈ ℝ takes values in the range [350, 1000]. The mean function m(x) was generated from a general quadratic ax² + bx + c by setting the values of the parameters a, b, and c such that the domain of x is consistent with the domain of the design parameter given to the participants in the experiment. Qualitatively, this is equivalent to participants considering a parabolic objective function that has a maximum in the range [350, 1000]. We assume this because participants were informed in the experiment that the objective function has a single interior maximum, and the interface allowed the participants to explore width values in the range of [350, 1000].
The covariance function c(x, x′) defines the Gaussian process’s behavior between any two points x and x′. Consistent with our previous work, we assumed that individuals use a squared exponential covariance function

c(x, x′) = s² exp(−(x − x′)² / (2ℓ²))    (11)

with unspecified signal strength s > 0 and length scale ℓ > 0, i.e., they assign flat priors. We have also assumed that individuals identified the best signal strength s and length scale ℓ by maximizing the likelihood of the data, i.e., by solving

(s, ℓ) = argmax_{s, ℓ} N(Y_{1:t} | m(X_{1:t}), c(X_{1:t}) + λ I_t)    (12)

where N(⋅ | μ, Σ) denotes the PDF of the multivariate normal distribution with mean μ and covariance Σ. Here, we have introduced the notation X_{1:t} = (X_1, …, X_t) and Y_{1:t} = (Y_1, …, Y_t) for the collection of all observed designs and the corresponding performances up to step t. Furthermore, we use m(X_{1:t}) = (m(X_1), …, m(X_t)) for the mean function evaluated at all designs, and c(X_{1:t}) = {c(X_i, X_j)} is the covariance matrix of the designs. Finally, the matrix I_t is the t × t identity matrix, and λ is a fixed parameter (we use λ = 10⁻⁶) added to the diagonal to ensure numerical stability.
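A compact sketch of this knowledge model follows: the squared exponential kernel, maximum-likelihood selection of (s, ℓ) as in Eq. (12), and the posterior of Eqs. (14) and (15) below. The quadratic mean coefficients and the optimizer’s starting point are illustrative assumptions, not the paper’s values.

    import numpy as np
    from scipy.optimize import minimize

    def mean_fn(x):
        # Illustrative concave quadratic prior mean with a maximum inside
        # [350, 1000]; the paper's actual a, b, c are not reproduced here.
        return -1e-3 * (x - 675.0) ** 2 + 100.0

    def sq_exp(x1, x2, s, ell):
        """Squared exponential covariance c(x, x') = s^2 exp(-(x-x')^2/(2 ell^2))."""
        d = x1[:, None] - x2[None, :]
        return s ** 2 * np.exp(-0.5 * (d / ell) ** 2)

    def neg_log_lik(log_params, x, y, lam=1e-6):
        """Negative log marginal likelihood of Eq. (12), optimized in log space."""
        s, ell = np.exp(log_params)
        K = sq_exp(x, x, s, ell) + lam * np.eye(len(x))
        r = y - mean_fn(x)
        _, logdet = np.linalg.slogdet(K)
        return 0.5 * (r @ np.linalg.solve(K, r) + logdet + len(x) * np.log(2 * np.pi))

    def fit_hyperparams(x, y):
        res = minimize(neg_log_lik, x0=np.log([10.0, 100.0]), args=(x, y))
        return np.exp(res.x)  # maximum-likelihood (s, ell)

    def posterior(x_star, x, y, s, ell, lam=1e-6):
        """Posterior mean and variance at x_star, Eqs. (14) and (15)."""
        K = sq_exp(x, x, s, ell) + lam * np.eye(len(x))
        k = sq_exp(x_star, x, s, ell)          # cross covariance c(x*, X_1:t)
        alpha = np.linalg.solve(K, y - mean_fn(x))
        mu = mean_fn(x_star) + k @ alpha
        var = s ** 2 - np.einsum("ij,ji->i", k, np.linalg.solve(K, k.T))
        return mu, var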
The posterior state of knowledge of the individual about f(x) is also a GP:

f(x) | H_t ~ GP(m_t(x), c_t(x, x′))    (13)

where mt and ct are the posterior mean and covariance functions of the GP [61] when it is conditioned on Ht. Specifically, the posterior mean is given by

m_t(x) = m(x) + c(x, X_{1:t}) (c(X_{1:t}) + λ I_t)⁻¹ (Y_{1:t} − m(X_{1:t}))    (14)

where the row vector c(x, X_{1:t}) = (c(x, X_1), …, c(x, X_t)) is the cross covariance between the test point x and the designs observed so far, X_{1:t}. The posterior covariance is

c_t(x, x′) = c(x, x′) − c(x, X_{1:t}) (c(X_{1:t}) + λ I_t)⁻¹ c(x′, X_{1:t})^T    (15)

where A^T is the transpose of a matrix A.
We used maximization of expected improvement (EI) to model how humans make search decisions. EI is defined as the improvement in design performance at x over the current best quality Qt, integrated over the possible values of f(x). The mathematical definition of EI is

EI_t(x) = E[ max(f(x) − Q_t, 0) | H_t ]    (16)

Borji and Itti [62] show that maximization of expected improvement is indicative of how humans make search decisions. Thus, we modeled the point the participant chose next by

X_{t+1} = argmax_x EI_t(x) + σ Z_t    (17)

where the Zt are independent standard normal random variables, and σ > 0 sets the level of an individual’s deviation from EI maximization, that is, the modeled decision to “choose x.”
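Under the Gaussian posterior, EI has the standard closed form, and the noisy choice rule of Eq. (17) can be sketched as below. The grid-based maximization and the clipping to the feasible width range are our assumptions for illustration.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, var, q_t):
        """Closed-form EI of a Gaussian predictive density over the current best q_t."""
        sd = np.sqrt(var)
        z = (mu - q_t) / sd
        return (mu - q_t) * norm.cdf(z) + sd * norm.pdf(z)

    def choose_next(x_grid, mu, var, q_t, sigma_dev, rng):
        """Noisy EI maximization, Eq. (17): the EI-optimal point plus sigma * Z_t."""
        x_star = x_grid[np.argmax(expected_improvement(mu, var, q_t))]
        return float(np.clip(x_star + sigma_dev * rng.standard_normal(), 350, 1000))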

3.2.4 Modeling How Individuals Cognitively Made Stopping Decisions.

We assumed that the individual is rational in the sense that they tried to maximize their payoff Π in the contest. The stopping payoff Πt, that is, the payoff a participant would receive if they were to stop at time-step t, is given by

Π_t = π 1_{[B,∞)}(Q_t) − K t    (18)

where π is the prize, 1_{[B,∞)}(Q_t) is an indicator function whose value is unity if the best quality Qt at time-step t is at least the opponent’s best quality B, and K is the assumed constant cost associated with each time-step t.

With the specification of the contest-specific parameters such as prize and cost, the reader is now in a position to visualize the plate diagram of the SIADM-C model and the influence of various model parameters on the information acquired by the individual as illustrated in Fig. 2.

Fig. 2
Graphical illustration of the sequential information acquisition and decision making under competition (SIADM-C) model at step t of the process. The participant observes an opponent’s past performance ratings (R). R is qualitative and takes the discrete values Bad, Average, Fair, Good, and Great. Quantities such as the contestant’s belief about an opponent’s quality B and the objective function parameters ℓ and s are inferred by the individual. The parameters μb, σb, and σ are a part of an individual’s type θ. Based on the inferred parameters, the information Ht about the function up to step t, the cost K of each try, and the prize π, the individual decides whether or not to stop (St).
We modeled a contestant’s stopping decision as follows. If the expected marginal improvement in their payoff from step t to (t + 1) is negative, then they are more likely to stop. The expected marginal improvement in the payoff ΔΠt is given as
ΔΠt = E[Πt+1 | Ht] − E[Πt | Ht]   (19)
The conditioning on the history at time t indicates that the individual constructed ΔΠt after having observed Ht. In the language of probability theory, we say that the stochastic process ΔΠt is filtered by the history Ht. In other words, ΔΠt is known by time t. Note that the payoff at time t, Πt, is not completely determined from the history Ht up to that point because the performance of the opponent, B, has not yet been observed.
We now proceed to calculate ΔΠt. We have for the first term
E[Πt | Ht] = πP(Qt ≥ B | Ht) − Kt   (20)
where P(Qt ≥ B | Ht) is the probability that the individual assigned to winning at step t. It is given by
P(Qt ≥ B | Ht) = Φ((Qt − μb)/σb)   (21)
where Φ is the cumulative distribution function of the standard normal. Note the dependence of the right-hand side on the best performance Qt which, at time t, is completely determined by the history Ht. For the other term defining ΔΠt, we have
E[Πt+1 | Ht] = πP(Qt+1 ≥ B | Ht) − K(t + 1)   (22)
where P[Qt+1 ≥ B | Ht] is the probability that the individual assigned to winning at step (t + 1). This is given by
P(Qt+1 ≥ B | Ht) = ∫ Φ((max{Qt, y} − μb)/σb) N(y | mt(Xt+1), ct(Xt+1, Xt+1)) dy   (23)
where N(·|μ, σ²) denotes the probability density function of a normal distribution with mean μ and variance σ². Note that the integration in the last step is over the point predictive probability density of the GP at Xt+1, with mean given by Eq. (14) and variance given by Eq. (15), representing the individual's knowledge about f(x). Furthermore, the next point to choose, Xt+1, is completely determined from the history at time t, Ht; see Eq. (17). The integral is evaluated via Monte Carlo integration using 10,000 random samples from the point predictive probability density.
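The computation of ΔΠt via Eqs. (19)–(23) can be sketched as follows, with the opponent's quality B believed to be normal with mean μb and standard deviation σb, and the integral in Eq. (23) replaced by the Monte Carlo average described above; the function signature and default seed are our own:

```python
# Sketch of Eqs. (19)-(23): expected marginal improvement in payoff, with the
# win probability at step t+1 estimated by Monte Carlo over the GP point
# predictive density N(m_next, sd_next^2) at X_{t+1}.
import numpy as np
from scipy.stats import norm

def delta_payoff(q_best, mu_b, sigma_b, m_next, sd_next, prize, cost,
                 n_mc=10_000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    p_win_t = norm.cdf((q_best - mu_b) / sigma_b)           # Eq. (21)
    y = rng.normal(m_next, sd_next, size=n_mc)              # f(X_{t+1}) samples
    q_next = np.maximum(q_best, y)                          # Q_{t+1} = max(Q_t, y)
    p_win_t1 = norm.cdf((q_next - mu_b) / sigma_b).mean()   # Eq. (23), MC estimate
    # Eq. (19): the -K(t+1) and -Kt terms leave the marginal cost of one step.
    return prize * (p_win_t1 - p_win_t) - cost
```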
Having fully specified the expected marginal improvement in payoff, ΔΠt, we modeled the individual's decision to stop. Our premise was that the probability of stopping increases as ΔΠt decreases. To reflect this, we modeled the stochastic process St as follows:
St | Ht ∼ Bernoulli(pt)   (24)
and the stopping probability is given by
pt = P(St = 1 | Ht) = 1/(1 + exp(αΔΠt + β))   (25)
where α and β are type parameters to be determined.
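A sketch of Eqs. (24)–(25) under the logistic parametrization shown above (the exact functional form of the sigmoid is our reading of the text):

```python
# Sketch of Eqs. (24)-(25): for alpha > 0 the stopping probability decreases
# in delta_pi, so stopping becomes more likely as the expected marginal
# improvement in payoff shrinks.
import numpy as np

def stop_probability(delta_pi, alpha, beta):
    """Eq. (25): P(S_t = 1 | H_t)."""
    return 1.0 / (1.0 + np.exp(alpha * delta_pi + beta))

def sample_stop(delta_pi, alpha, beta, rng):
    """Eq. (24): S_t ~ Bernoulli(stop_probability)."""
    return rng.random() < stop_probability(delta_pi, alpha, beta)
```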

Given all of the above modeling assumptions and equations, we can infer the model parameters from behavioral data. The details of this inverse inference are provided in the Appendix.

4 Results and Discussion

We utilized the data collected from the experiment described in Sec. 2 to infer the model parameters θ. Based on these parameters and the experimental data, we tested hypotheses H1.1* to H3.2* and present the results in this section. We discuss the implications of each hypothesis test result.

4.1 Hypotheses Testing: Influence of Past Performance Information.

To test the operationalized hypotheses H1 and H2, we conducted a single-factor repeated-measures ANOVA across the three treatments: (1) participants knew that their competitors had a strong performance record, (2) participants knew that their competitors had a weak performance record, and (3) participants did not know the competition history. For H3, we conducted a single-factor repeated-measures ANOVA to compare the estimated belief parameters μb of the participants across contests where they knew that their competitors had a strong past performance record and contests where they knew the record was weak. In all the analyses, we refer to the null hypothesis of a particular hypothesis as the statement that the hypothesized relationship is absent.
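For readers who wish to reproduce this style of analysis, the following hedged sketch (not the authors' script) runs a one-way repeated-measures ANOVA with statsmodels on hypothetical long-format data, one row per subject-treatment cell:

```python
# Sketch of the repeated-measures ANOVA described above, on toy data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "treatment": ["strong", "weak", "none"] * 3,
    "tries":     [5.2, 4.1, 4.8, 6.0, 4.5, 5.1, 4.9, 3.8, 4.6],  # hypothetical toy values
})
print(AnovaRM(df, depvar="tries", subject="subject", within=["treatment"]).fit())
```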

4.1.1 Influence on Performance.

To test H1.1* to H1.3*, we compared the average normalized maximum enjoyment value E achieved by the participants across the three treatments introduced above. There was no significant effect of past performance information on current contest performance at the p < 0.05 level across the three treatments [F(2, 718) = 0.42, p = 0.62].

The mean μ and standard deviation σ of the normalized average Enjoyment value achieved by the participants are (μEG = 95.27%, σEG = 10.49) when they knew the competition history was Good, (μEB = 95.71%, σEB = 4.27) when they knew the competition history was Bad, and (μEN = 96.11%, σEN = 5.73) when they had No information. We note the high standard deviation σEG when the participants knew that their competitors had a strong past performance record. We believe that this high variance in performance is due to the influence of the information about the "goodness" of the competition, which led a few participants to choose not to participate, resulting in a zero design rating, whereas others expended greater effort to improve their existing performance. On average across individuals, the mean performance did not vary significantly; however, the information did produce a spread (high variance) in design quality.

Figure 3(a) illustrates the histogram of the normalized maximum performance achieved by the participants across the three treatments. We note that participants were frequently able to achieve the best performance for the experimental SIADM tasks. Their ability to do so depends on the nature of the design problem, such as the task complexity, which was controlled in the experiment. However, we observe that participants achieved the best performance in a greater number of contests when they knew that their competitors had a strong past performance record, as compared to when the competitors had a weak past performance record or when they had no information about the competition history.

Fig. 3
Histograms for H1 and H2: (a) histogram of performance across treatments and (b) histogram of efforts across treatments

4.1.2 Influence on Efforts.

To test H2.1* to H2.3*, we compared the average number of tries T made by the participants across the three treatments introduced above. We found a significant effect of past performance information on the number of tries at the p < 0.05 level across the three treatments [F(2, 718) = 29.13, p < 0.00001].

The mean μ and standard deviation σ of the average number of Tries of the participants are (μTG = 5.17, σTG = 2.00) when they knew the competition history was Good, (μTB = 4.15, σTB = 1.8) when they knew the competition history was Bad, and (μTN = 4.77, σTN = 2.12) when they had No information. Given that the H2 test was statistically significant, we conducted a post hoc test to determine where the differences came from. We selected the Tukey post hoc test, which compares the dependent variable (in this case, efforts) in each treatment against every other treatment. The results are shown in Table 3; a sketch of such a comparison follows the table.

Table 3

Summary of the post hoc Tukey test for H2

Hypothesis   Treatment A   Treatment B   Mean diff.   Std. error   T-value   p-Value   Effect size
H2.1*        Weak          Strong        −1.019       0.1861       −5.48     0.0010    −0.41
H2.2*        Weak          No info       −0.619       0.1861       −3.33     0.0025    −0.25
H2.3*        Strong        No info       0.400        0.1861       2.15      0.0805    0.16
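A hedged sketch of such a Tukey comparison (not the authors' script), on synthetic effort data drawn to match the reported treatment means and standard deviations; the per-treatment sample size is our assumption:

```python
# Sketch of the post hoc Tukey HSD test in Table 3, on synthetic data whose
# means and standard deviations match the reported values.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
strong = rng.normal(5.17, 2.00, 240)   # sample sizes are our assumption
weak   = rng.normal(4.15, 1.80, 240)
none   = rng.normal(4.77, 2.12, 240)

tries  = np.concatenate([strong, weak, none])
labels = ["strong"] * 240 + ["weak"] * 240 + ["none"] * 240
print(pairwise_tukeyhsd(tries, labels, alpha=0.05).summary())
```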

The results for H2.1* indicate that the participants indeed tried a higher number of times when they knew that their competitors had a strong past performance record as compared to a weak one (p = 0.001). Therefore, knowledge about competitors' past performance did influence a participant's efforts. We also reject the null for H2.2* (p < 0.01), which implies that the participants expended higher effort when no information was provided about their competitors as compared to when they knew that their competitors had a weak performance record. However, we failed to reject the null for H2.3* (p > 0.05), i.e., the hypothesis that the participants expended higher effort when they knew that the competitors had a strong past performance record as compared to when no information about their competitors was provided. In other words, the difference in effort between contests where the competitors were known to have a strong performance record and contests where the competitors were unknown is not statistically significant. Figure 3(b) illustrates the histogram of efforts of the participants across contests in the three treatments.

The results for H2.2* and H2.3* indicate that individuals behaved as if they were competing against competitors with strong past performance records when making strategic decisions against unknown competitors. While a total lack of information about the competitors is theoretically possible, in reality, information about past contests, and by extension about the best past performances, is typically available. Our results suggest that if such information is available and the participants enter the contest, then they will expend higher effort when they know that past competitors had a strong performance record.

4.1.3 Influence on Cognition.

The hypothesis test results for H3.1* indicate that the participants indeed believed that their competitors' performance was better when the competitors had a strong past performance record as compared to a weak one [F(1, 359) = 4.41, p < 0.05]. This implies that the modeled parameters are sensitive to the information provided to the participants about their competitors' past performances. To further test the sensitivity of the modeled parameters, we tested H3.2*. We compared the estimated belief parameters μb of the participants across contests where their competitors had a strong past performance record and a weak past performance record but where the participants did not know the competitors' performance record. The hypothesis test results for H3.2* indicate that there is no statistically significant difference between the estimated belief parameters μb in the two scenarios [F(1, 179) = 0.01, p = 0.92]. This further supports the claim that the modeled parameters are influenced by the information provided to the participants about their competitors' past performance record.

The mean μ and standard deviation σ of the average estimated Belief parameters of the participants about their competitors' performance are (μBWIG = 90.51%, σBWIG = 17.62) With Information that the competition history is Good, (μBWIB = 87.70%, σBWIB = 16.68) With Information that the competition history is Bad, (μBWOIG = 88.59%, σBWOIG = 16.63) WithOut Information when the competition history is Good, and (μBWOIB = 88.77%, σBWOIB = 18.55) WithOut Information when the competition history is Bad.

4.2 Hypothesis Tests: Discussion.

The hypothesis test results from H1.1* to H1.3* and H2.1* to H2.3* indicate that information about the past performance record influences a participant's decision to stop a sequential search process. However, such information did not affect the mean performance outcomes. The variance of the performance, however, was high when the participants knew that the competition history was good. We observed that some participants quit before trying when the competition history was good, resulting in a performance value of 0. Such an observation points toward the need to investigate the influence of competition-history-specific information on contestant dropout rates. This can inform contest organizers' predictions of the quality and number of solutions generated from a contest.

If we consider a causal chain in which the independent variable (competitors' past performance information) influenced participant efforts, then the efforts should have influenced performance. However, we find that the average maximum performance across participants was not significantly influenced by competition history information despite a significant difference in efforts. We believe that this result is due to the stochastic nature of the SIADM problem used in the behavioral experiment as well as the relative ease with which participants could achieve the maximum performance. While on average there is a positive relationship between effort and performance for an SIADM task, we pooled the data across subjects and contests. The variance of search strategies across subjects and the normalization of performance across different contests may also have influenced the observed effort–performance relationship. Existing literature on open innovation has investigated contest scenarios where variance dominates the quality of solutions [63]. Our results highlight the importance of understanding the nature of design problems that can influence effort–performance relationships, which are typically assumed to follow some deterministic positive relationship in the existing literature on contest theory.

The test results from H3.1* and H3.2* indicate that we can quantify the impact of competition-specific information on an individual's cognition. We do so by representing an individual's belief about the competition through the parameter μb, such that the higher the μb parameter, the stronger the belief about competitors' performance in a given contest. Such parameters can be utilized by contest designers to incorporate the influence of participant beliefs about the competitiveness of a contest based on its participants and to predict the corresponding influence on their design behaviors and contest outcomes. In our previous work [64], we illustrate how cognitive parameters and models can be utilized to simulate SIADM behaviors. The model parameters in this paper contribute to the much-needed quantification of cognition for the design of contests in engineering design scenarios. Moreover, the results indicate that contest designers are better off not providing information to the participants about past contests if the corresponding winning design solutions do not meet the standards (low past performance) defined by the organizers for the given contest. However, regulations may prevent contest organizers from withholding such information from the participants. In such scenarios, contest designers need to account for the influence of such information on participant behaviors and contest outcomes. Further research is required to understand how to catalyze participant motivation toward generating higher design quality given that participants have knowledge of the historical information.

4.3 Observations.

We analyzed the cognitive belief parameters (μBWIG, μBWIB) estimated for every individual when they had information about strong- and weak-performing past competitors. We did so to categorize every individual's sensitivity to the information provided to them about their competitors. Individuals whose average belief about competitors with a strong past performance record was higher than their belief about competitors with a weak record are labeled as sensitive to the given information, as estimated by the model, while the others are termed insensitive. Our model estimates 22 out of 36 individuals, approximately 60% of the participants, as sensitive to the provided information about their competitors based on the beliefs estimated from the experimental data. Such categorization of individuals can inform the decisions of contest designers, such as the need to conservatively frame the problem if a majority of the participants are insensitive to particular types of information. To do so, further model validation is required by incorporating other informational factors that influence participant behaviors as well as by developing confidence that the model predictions are representative of people's behavior.
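The categorization itself reduces to a per-individual comparison of the two posterior belief means, as in this sketch (array and function names are ours):

```python
# Sketch of the sensitivity categorization: an individual is "sensitive" if
# their mean belief under the strong-history treatment exceeds that under
# the weak-history treatment.
import numpy as np

def count_sensitive(mu_belief_strong, mu_belief_weak):
    """Inputs: per-individual posterior means of mu_b under each treatment."""
    sensitive = np.asarray(mu_belief_strong) > np.asarray(mu_belief_weak)
    return int(sensitive.sum()), float(sensitive.mean())
```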

5 Closing Remarks

We model a strategic SIADM process and make specific modeling choices for the three activities discussed in Sec. 3.1. Specifically, we assume a two-player contest where individuals maximize the improvement in their payoff, decide to stop when they do not expect an improvement in their payoff, and follow a myopic one-step look-ahead strategy for design search. Based on these assumptions, we study the influence of past performance records on SIADM outcomes and behaviors.

This study contributes diagnostic evidence of the factors that influence designer behaviors under competition by virtue of experimental control over the information that is available to the designers about their competitors. Our results indicate that competitors' past performance information influences a contestant's efforts. Such a result is consistent with the existing literature in contest theory, where it is established that contests with competitors who have "strong" past performance records are more competitive than contests where competitors' past performances have been "weak" [36,37]. Moreover, we find that such an influence on efforts does not translate equivalently into an influence on performance, as the nature of the design problem affects the relationship between efforts and performance. This finding is also consistent with the existing literature in contest theory, where Loch et al. [65] discuss problem types and their influence on contest outcomes. For engineering design contests, a characterization of engineering problem types with varying problem complexities and their impact on designers' competitive behaviors needs further investigation. We did not, however, find existing literature on the computational quantification of a participant's beliefs about their competitors.

Our SIADM-C model quantifies the causal influence of competition-specific information on a contestant's belief about their competitors, their stopping decisions for information acquisition, and thereby the design outcomes. Such a quantification is, to the best of our knowledge, the first attempt to computationally incorporate cognitive, unobservable, and individual-specific factors in engineering design competition scenarios. Based on the inference of the modeled parameters for contestants' beliefs about their competitors, we suggest that contest designers are better off not providing historical performance records if past design qualities do not match the expectations set up by the sponsors for a given design contest. Thus, the primary contribution of this study is the understanding and quantification of how competition-specific information influences design contest outcomes, behaviors, and cognition.

We acknowledge that, in reality, design competitions may involve participants learning about competitors' decisions in the current contest rather than just in the past, and contests may have more than one prize. Similarly, there are different problem types that influence contest outcomes [65,66]. Our choice to focus on a specific class of design contest scenarios is deliberate because such contests are a natural starting point [59] in the larger space of design contest types. These extensions are potential avenues for future work.

Acknowledgment

The authors gratefully acknowledge financial support from the National Science Foundation through NSF CMMI (Grant 1662230).

Conflict of Interest

There are no conflicts of interest.

Inferring an Individual’s Type From Experimental Observations

The goal of this section is to describe how one can infer the type θ of an individual given a set of experimental history observations
ht = ((x1, y1, s1), …, (xt, yt, st))   (A1)
We proceeded in a Bayesian way, which required the specification of a prior p(θ) for θ and a likelihood p(ht|θ) for ht given θ. The posterior state of knowledge about the type θ is simply given by Bayes' rule
p(θ | ht) ∝ p(ht | θ)p(θ)   (A2)
and we characterized it approximately via sampling. We now describe each of these steps in detail.

Following the discussion of the previous section, we associated the type with the vector of parameters θ = (μb, σb, σ, α, β), all of which have already been defined. From a Bayesian perspective, we described our prior state of knowledge about θ by assigning a probability density function to them, i.e., θ became a random vector modeling our epistemic uncertainty about the actual type. However, to highlight the distinction between θ and the random variables we defined in the previous section, we did not capitalize θ. Specifically, the random variables, Xt, Yt, St, are associated with the subject’s behavior, whereas θ is associated with our beliefs about the statistics of Xt, Yt, St.

Having no reason to believe otherwise, we assumed that all components are a priori independent, i.e., the prior probability density function (PDF) factorizes as
p(θ) = p(μb)p(σb)p(σ)p(α)p(β)   (A3)
where σb², σ², α, and β are each assigned an uninformative Jeffreys prior, i.e., p(σλ) ∝ 1/σλ for each such parameter σλ, and
p(μb) = U(μb | 0, 100)   (A4)
The range of the uniform distribution was chosen based on the design of the experiment. Note that here we have silently introduced a convenient notational convention, namely p(v), which is the PDF of the related random variable evaluated at a given point v.
The second ingredient required for Bayesian inference of the type is the likelihood of the data ht conditioned on θ. This was implicitly defined in the previous section. We have
p(ht | θ) = ∏r=1…t p(xr, yr, sr | hr−1, θ)   (A5)
since the model is Markovian. For each term within the product, we have
p(xr, yr, sr | hr−1, θ) = p(xr | hr−1, θ)p(yr | xr, hr−1, θ)p(sr | xr, yr, hr−1, θ)   (A6)
where we simplified using the definition of hr, and the fact that, according to our model, the next design point is fully determined by the previous history, the next observed performance fully determined by the design, and the stopping decision fully determined by all design–performance pairs observed thus far.
We note that while an individual's belief about the design performance Y depends on their type θ, the inference about an individual's type does not depend on the value of the design performance yr, which is beyond the participant's control. From a decision-making perspective, a participant decides which x to choose and decides whether to stop, sr. However, they do not decide the design performance. Thus, the middle term is constant with respect to θ and is dropped from Eq. (A6). The first term in Eq. (A6) is
p(xr | hr−1, θ) ∝ N(xr | argmaxx EIr−1(x), σ²), xr ∈ [xmin, xmax]   (A7)
where the range of x is based on the design range available to the participants in the experiment. From Eq. (25), we get that the last term is
p(sr | xr, yr, hr−1, θ) = pr^sr (1 − pr)^(1−sr), with pr = 1/(1 + exp(αδπr(μb, σb) + β))   (A8)
where δπr(μb, σb) is the realization of ΔΠr of Eq. (19) when Xr = xr, Yr = yr, and Hr−1 = hr−1, and for μb and σb as in the conditioning θ.

We sampled from the posterior using the No-U-Turn Sampler (NUTS) [67], a self-tuning variant of Hamiltonian Monte Carlo [68], from the PyMC3 [69] Python module. We ran the MCMC chain for 4000 iterations with a burn-in period of 500 samples, which were discarded. Equation (A2) is used to estimate the researcher's posterior over θ for an individual given that individual's search data.
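As a much-simplified illustration of this setup (the full likelihood couples Eqs. (A5)–(A8); here only the stopping term is modeled, with per-step ΔΠ realizations treated as precomputed data and flat improper priors standing in for the Jeffreys priors):

```python
# Simplified PyMC3 sketch of the type inference: Bernoulli stopping decisions
# with the logistic form of Eq. (25); delta_pi values are hypothetical.
import numpy as np
import pymc3 as pm

delta_pi = np.array([0.8, 0.5, 0.1, -0.2])   # hypothetical realized increments
stopped  = np.array([0, 0, 0, 1])            # observed stopping decisions s_r

with pm.Model():
    alpha = pm.HalfFlat("alpha")   # stand-ins for the Jeffreys priors of Eq. (A3)
    beta  = pm.Flat("beta")
    p_stop = pm.math.sigmoid(-(alpha * delta_pi + beta))   # Eq. (25), assumed form
    pm.Bernoulli("s", p=p_stop, observed=stopped)
    trace = pm.sample(draws=4000, tune=500, chains=1)      # NUTS by default
```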

References

1. Sha, Z., Kannan, K. N., and Panchal, J. H., 2015, “Behavioral Experimentation and Game Theory in Engineering Systems Design,” ASME J. Mech. Des., 137(5), p. 051405.
2. Panchal, J. H., Sha, Z., and Kannan, K. N., 2017, “Understanding Design Decisions Under Competition Using Games With Information Acquisition and a Behavioral Experiment,” ASME J. Mech. Des., 139(9), p. 091402.
3. Bayrak, A. E., and Sha, Z., 2021, “Integrating Sequence Learning and Game Theory to Predict Design Decisions Under Competition,” ASME J. Mech. Des., 143(5), p. 051401.
4. Che, Y.-K., 1993, “Design Competition Through Multidimensional Auctions,” RAND J. Econ., 24(4), pp. 668–680.
5. Shiau, C.-S. N., and Michalek, J. J., 2009, “Optimal Product Design Under Price Competition,” ASME J. Mech. Des., 131(7), p. 071003.
6. Panchal, J. H., 2015, “Using Crowds in Engineering Design—Towards a Holistic Framework,” Proceedings of the 20th International Conference on Engineering Design, Milan, Italy, July 27–30, The Design Society, pp. 041–050.
7. Shergadwala, M., Forbes, H., Schaefer, D., and Panchal, J. H., 2020, “Challenges and Research Directions in Crowdsourcing for Engineering Design: An Interview Study With Industry Professionals,” IEEE Trans. Eng. Manage., pp. 1–13.
8. Dixit, A., 1987, “Strategic Behavior in Contests,” Am. Econ. Rev., 77(5), pp. 891–898.
9. Deck, C., and Sheremeta, R. M., 2012, “Fight or Flight? Defending Against Sequential Attacks in the Game of Siege,” J. Conflict Resol., 56(6), pp. 1069–1088.
10. Mago, S. D., and Sheremeta, R. M., 2017, “Multi-Battle Contests: An Experimental Study,” South. Econ. J., 84(2), pp. 407–425.
11. Gelder, A., 2014, “From Custer to Thermopylae: Last Stand Behavior in Multi-Stage Contests,” Games Econ. Behav., 87, pp. 442–466.
12. Nalebuff, B. J., and Stiglitz, J. E., 1983, “Prizes and Incentives: Towards a General Theory of Compensation and Competition,” Bell J. Econ., 14(1), pp. 21–43.
13. O’Keeffe, M., Viscusi, W. K., and Zeckhauser, R. J., 1984, “Economic Contests: Comparative Reward Schemes,” J. Labor Econ., 2(1), pp. 27–56.
14. Moldovanu, B., and Sela, A., 2001, “The Optimal Allocation of Prizes in Contests,” Am. Econ. Rev., 91(3), pp. 542–558.
15. Sheremeta, R. M., 2010, “Experimental Comparison of Multi-Stage and One-Stage Contests,” Games Econ. Behav., 68(2), pp. 731–747.
16. Parco, J. E., Rapoport, A., and Amaldoss, W., 2005, “Two-Stage Contests With Budget Constraints: An Experimental Study,” J. Math. Psychol., 49(4), pp. 320–338.
17. Schmitt, P., Shupp, R., Swope, K., and Cadigan, J., 2004, “Multi-Period Rent-Seeking Contests With Carryover: Theory and Experimental Evidence,” Econ. Governance, 5(3), pp. 187–211.
18. Mago, S. D., Samak, A. C., and Sheremeta, R. M., 2016, “Facing Your Opponents: Social Identification and Information Feedback in Contests,” J. Conflict Resol., 60(3), pp. 459–481.
19. Vrolijk, A., and Szajnfarber, Z., 2015, “When Policy Structures Technology: Balancing Upfront Decomposition and In-Process Coordination in Europe's Decentralized Space Technology Ecosystem,” Acta Astronautica, 106, pp. 33–46.
20. Szajnfarber, Z., Zhang, L., Mukherjee, S., Crusan, J., Hennig, A., and Vrolijk, A., 2020, “Who Is in the Crowd? Characterizing the Capabilities of Prize Competition Competitors,” IEEE Trans. Eng. Manage., pp. 1–15.
21. Simon, H. A., 1979, “Information Processing Models of Cognition,” Annu. Rev. Psychol., 30(1), pp. 363–396.
22. Kreuzbauer, R., and Malter, A. J., 2005, “Embodied Cognition and New Product Design: Changing Product Form to Influence Brand Categorization,” J. Product Innov. Manage., 22(2), pp. 165–176.
23. Shergadwala, M., Bilionis, I., Kannan, K. N., and Panchal, J. H., 2018, “Quantifying the Impact of Domain Knowledge and Problem Framing on Sequential Decisions in Engineering Design,” ASME J. Mech. Des., 140(10), p. 101402.
24. Cash, P., and Gonçalves, M., 2017, “Information-Triggered Co-Evolution: A Combined Process Perspective,” Analysing Design Thinking: Studies of Cross-Cultural Co-Creation, B. T. Christensen, L. J. Ball, and K. Halskov, eds., CRC Press, London, UK, pp. 501–520.
25. Gao, S., and Kvan, T., 2004, “An Analysis of Problem Framing in Multiple Settings,” Design Computing and Cognition ’04, J. S. Gero, ed., Springer, The Netherlands, pp. 117–134.
26. Schön, D. A., 1987, Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions, Jossey-Bass, San Francisco, CA.
27. Schön, D. A., 1984, “Problems, Frames and Perspectives on Designing,” Des. Stud., 5(3), pp. 132–136.
28. Cardoso, C., Badke-Schaub, P., and Eris, O., 2016, “Inflection Moments in Design Discourse: How Questions Drive Problem Framing During Idea Generation,” Des. Stud., 46, pp. 59–78.
29. Zheng, H., Li, D., and Hou, W., 2011, “Task Design, Motivation, and Participation in Crowdsourcing Contests,” Int. J. Electron. Commer., 15(4), pp. 57–88.
30. Chaudhari, A. M., Sha, Z., and Panchal, J. H., 2018, “Analyzing Participant Behaviors in Design Crowdsourcing Contests Using Causal Inference on Field Data,” ASME J. Mech. Des., 140(9), p. 091401.
31. GrabCAD, 2019, https://grabcad.com/challenges/finished, Accessed September 10, 2019.
32. Aydinliyim, T., and Murthy, N. N., 2016, “Managing Engineering Design for Competitive Sourcing in Closed-Loop Supply Chains,” Decis. Sci., 47(2), pp. 257–297.
33. Milgrom, P., and Roberts, J., 1986, “Relying on the Information of Interested Parties,” RAND J. Econ., 17(1), pp. 18–32.
34. Toma, C., and Butera, F., 2015, “Cooperation Versus Competition Effects on Information Sharing and Use in Group Decision-Making,” Soc. Pers. Psychol. Compass, 9(9), pp. 455–467.
35. Li, J., Sikora, R., Shaw, M. J., and Tan, G. W., 2006, “A Strategic Analysis of Inter Organizational Information Sharing,” Decis. Support Syst., 42(1), pp. 251–266.
36. Folgado, H., Duarte, R., Fernandes, O., and Sampaio, J., 2014, “Competing With Lower Level Opponents Decreases Intra-Team Movement Synchronization and Time-Motion Demands During Pre-Season Soccer Matches,” PLoS One, 9(5), p. e97145.
37. Epstein, J. A., and Harackiewicz, J. M., 1992, “Winning Is Not Enough: The Effects of Competition and Achievement Orientation on Intrinsic Interest,” Pers. Soc. Psychol. Bull., 18(2), pp. 128–138.
38. Corchón, L. C., 2007, “The Theory of Contests: A Survey,” Rev. Econ. Des., 11(2), pp. 69–100.
39. Dorst, K., 2004, “On the Problem of Design Problems—Problem Solving and Design Expertise,” J. Des. Res., 4(2), pp. 185–196.
40. Whitney, D. E., 1990, “Designing the Design Process,” Res. Eng. Des., 2(1), pp. 3–13.
41. Roozenburg, N. F., and Cross, N., 1991, “Models of the Design Process: Integrating Across the Disciplines,” Des. Stud., 12(4), pp. 215–220.
42. Hoogveld, A. W., Paas, F., and Jochems, W. M., 2003, “Application of an Instructional Systems Design Approach by Teachers in Higher Education: Individual Versus Team Design,” Teach. Teach. Educ., 19(6), pp. 581–590.
43. Cross, N., 2001, “Design Cognition: Results From Protocol and Other Empirical Studies of Design Activity,” Design Knowing and Learning: Cognition in Design Education, C. Eastman, W. Newstetter, and M. McCracken, eds., Elsevier, Oxford, UK, pp. 79–103.
44. Lu, C.-C., 2015, “The Relationship Between Student Design Cognition Types and Creative Design Outcomes,” Des. Stud., 36, pp. 59–76.
45. Papalambros, P. Y., and Wilde, D. J., 2000, Principles of Optimal Design: Modeling and Computation, Cambridge University Press, Cambridge, UK.
46. Sarkar, P., and Chakrabarti, A., 2011, “Assessing Design Creativity,” Des. Stud., 32(4), pp. 348–383.
47. Shergadwala, M. N., and El-Nasr, M. S., 2021, “Esports Agents With a Theory of Mind: Towards Better Engagement, Education, and Engineering,” arXiv preprint arXiv:2103.04940.
48. Shergadwala, M. N., Teng, Z., and El-Nasr, M. S., 2021, “Can We Infer Player Behavior Tendencies From a Player’s Decision-Making Data? Integrating Theory of Mind to Player Modeling,” Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Virtual Conference, Oct. 11–15, Vol. 17, pp. 195–202.
49. Premack, D., and Woodruff, G., 1978, “Does the Chimpanzee Have a Theory of Mind?,” Behav. Brain Sci., 1(4), pp. 515–526.
50. Cash, P. J., Hartlev, C. G., and Durazo, C. B., 2017, “Behavioural Design: A Process for Integrating Behaviour Change and Design,” Des. Stud., 48, pp. 96–128.
51. Wendel, S., 2020, Designing for Behavior Change: Applying Psychology and Behavioral Economics, O’Reilly Media, Sebastopol, CA.
52. Shaughnessy, J. J., and Zechmeister, E. B., 1985, Research Methods in Psychology, Alfred A. Knopf, New York, NY.
53. Faul, F., Erdfelder, E., Buchner, A., and Lang, A.-G., 2009, “Statistical Power Analyses Using G*Power 3.1: Tests for Correlation and Regression Analyses,” Behav. Res. Methods, 41(4), pp. 1149–1160.
54. Eatwell, J., Milgate, M., and Newman, P., 1987, The New Palgrave: A Dictionary of Economics, Macmillan, London.
55. Konings, M. J., Schoenmakers, P. P., Walker, A. J., and Hettinga, F. J., 2016, “The Behavior of an Opponent Alters Pacing Decisions in 4-km Cycling Time Trials,” Physiol. Behav., 158, pp. 1–5.
56. Hettinga, F., Konings, M., and Pepping, G.-J., 2017, “The Science of Racing Against Opponents: Affordance Competition and the Regulation of Exercise Intensity in Head-to-Head Competition,” Front. Physiol., 8, p. 118.
57. Sheremeta, R., 2013, “Overbidding and Heterogeneous Behavior in Contest Experiments,” J. Econ. Surv., 27(3), pp. 491–514.
58. Fallucchi, F., Renner, E., and Sefton, M., 2013, “Information Feedback and Contest Structure in Rent-Seeking Games,” Eur. Econ. Rev., 64, pp. 223–240.
59. Von Neumann, J., Morgenstern, O., and Kuhn, H. W., 2007, Theory of Games and Economic Behavior (Commemorative Edition), Princeton University Press, Princeton, NJ.
60. Loch, C. H., Terwiesch, C., and Thomke, S., 2001, “Parallel and Sequential Testing of Design Alternatives,” Manage. Sci., 47(5), pp. 663–678.
61. Rasmussen, C. E., and Williams, C. K., 2006, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA.
62. Borji, A., and Itti, L., 2013, “Bayesian Optimization Explains Human Active Search,” Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, eds., Curran Associates, Inc., pp. 55–63.
63. Szajnfarber, Z., and Vrolijk, A., 2018, “A Facilitated Expert-Based Approach to Architecting ‘Openable’ Complex Systems,” Syst. Eng., 21(1), pp. 47–58.
64. Shergadwala, M., Bilionis, I., and Panchal, J. H., 2018, “Students As Sequential Decision-Makers: Quantifying the Impact of Problem Knowledge and Process Deviation on the Achievement of Their Design Problem Objective,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 51784, American Society of Mechanical Engineers, Paper No. V003T04A011.
65. Loch, C. H., DeMeyer, A., and Pich, M., 2011, Managing the Unknown: A New Approach to Managing High Uncertainty and Risk in Projects, John Wiley & Sons, Hoboken, NJ.
66. Terwiesch, C., and Xu, Y., 2008, “Innovation Contests, Open Innovation, and Multiagent Problem Solving,” Manage. Sci., 54(9), pp. 1529–1543.
67. Hoffman, M. D., and Gelman, A., 2014, “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo,” J. Mach. Learn. Res., 15(1), pp. 1593–1623.
68. Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D., 1987, “Hybrid Monte Carlo,” Phys. Lett. B, 195(2), pp. 216–222.
69. Salvatier, J., Wiecki, T. V., and Fonnesbeck, C., 2016, “Probabilistic Programming in Python Using PyMC3,” PeerJ Comput. Sci., 2, p. e55.