The complexity of many large-scale systems is outpacing our ability to effectively design, analyze, and manage such systems. Projects such as the F-35 Joint Strike Fighter, the Boeing Dreamliner, the Mars Science Lab, Boston’s Big Dig, and the U.S. Navy’s Independence warship have all been well over budget and behind schedule. While there may be a number of contributing factors, the enormous complexity of the designed systems is certainly a culprit. Large enterprises appear to be embarking on the design of such systems without a fundamental understanding of some critical principles of complex systems. These principles are emerging in the design research community and clearly illustrate that there are some elegant and simple principles that can be used to better understand, predict, and design large-scale complex systems. In this article, a number of these principles are presented in an effort to highlight the emerging research in the science of designing complex systems. An assertion is made that simplicity and complexity can and should co-exist and that if simple and elegant principles are ignored, disastrous consequences may await.
Whenever one embarks upon making sense of a topic, there are fundamental assumptions one has to make with respect to the navigation process. When the topic is “the design of complex systems,” these assumptions become even more critical. The concept of “design” alone is fraught with multiple interpretations. The idea of “complex systems” likewise is overwhelmingly complicated since it bleeds into many disciplines, sciences, and industries. As a result, when someone discusses the design of complex systems, it is almost as if one is discussing the strategic development of systems that no one understands yet using design strategies that no one can agree on.
Regardless of how one views complex systems and their design, this article is not meant to clarify the notion of complex systems or even their design. There will be no call for unity around a definition, metric, framework, or method. Rather, the hope is that there will be a scholarly recognition that complexity and simplicity not only can co-exist in the design of large-scale systems, but that they must co-exist for us (a community of engineering design thinkers, researchers, educators, and practitioners) to stand a chance at meeting the ever-pressing global challenges and needs.
As a result, there is no plan to debate definitions of complex systems or discuss how many components it takes to reach a state of complexity (see Ref. for an excellent exploration of complexity measures). Indeed, one person’s complex may be someone else’s simple. And in this state of acceptance, we can still find commonality of mission. For the most part, what we are currently doing as a broad community is not working. There are a number of ongoing debates and dialogues concerning the reasons for our lack of progress in designing complex systems. While this article may add to the dialogue, it is rather meant to support progress toward an enlightened understanding of past successes and failures.
Although we are not debating definitions of complex systems, it is important to establish some context. First, the focus is on engineered systems. Therefore, complex engineered systems are those characterized by dynamic intrasystem couplings, such that the collective system behavior is not merely a sum of the individual parts. The existence of these significant, diverse, and often unknown couplings creates an interdependence among the system components, given that subsystems are often irreducibly entwined. While concepts and ideas are borrowed from other disciplines to make certain points, the primary focus is on systems being strategically engineered.
Second, the design of such systems provides an added shade of context, providing further focus and clarity. Because a large number of people and organizations are intimately involved in the design of complex systems, there are typically unknown unknowns and, as a result, the system’s behavior is many times uncertain and emergent. While the analysis of such systems is critical, the emphasis here is on the innovative development and synthesis of these systems. Lastly, this manuscript is not meant to be a comprehensive review of related work. Indeed, this would require significantly more space than is allotted.
Given this context, the fundamental premise to be postulated and supported is as follows:
Complexity and simplicity should co-exist in the design of large-scale engineering systems. Their elegant interplay will better allow engineers and managers to design, develop, and manage such systems.
To explore this premise, a framework that sheds light on complexity and simplicity in design is helpful. With such a framework, we can position the current and future state of design research to strategically impact the design of the increasingly complex systems that shape our societies. As a starting point, consider the three classes of problems facing science as noted by Warren Weaver when he was director of natural sciences at the Rockefeller Foundation:
Simple problems: problems with a small number of variables so that they can be analyzed completely and with certainty.
Complex problems marked by disorganization: problems with random ensembles and highly fragile behavior.
Complex problems marked by organization: problems where relatively simple insights and robust behavior arise.
With design emerging as its own field of science [6–9], including Ph.D. programs in Design Science at the University of Sydney and the University of Michigan, this classification seems quite applicable for gauging and directing its progress toward scientific rigor. What is apparent from Weaver’s classification is that he delineated simple and complex problems and noted that both can be present simultaneously within a scientific field.
Alderson and Doyle develop a taxonomy for complex systems based on Weaver’s classification which distinguishes dimensions of complexity based on model size (small or large) and resulting behavior (robust or fragile). Indeed, the taxonomy can also be applied to design, and we can conclude that the field of design consists of some simple exercises, such as the design of a paperclip, and complex exercises, such as the design of a colony on the moon. However, the rhetorical use of the terms simple and complex in this context is not the point here. Instead, the focus is directed to the “complex problems marked by organization” category. Here, simplicity and complexity collide and ideally produce well-understood systems, marked by robust performance and elegant function-to-form realization.
Although Weaver’s classification and Alderson and Doyle’s taxonomy are very useful to frame the discussion of complex systems in general, when considering the design of such systems, perhaps a better framing includes the type of questions we are asking and the kind of answers that result. This classification is meant to view design problems with a wide lens. As the problems are scrutinized with finer lenses, further classifications can be used to delineate problems based on the types of uncertainty present, the size or level of complexity of the problems as measured by various metrics, or the type of analytical tools necessary. Indeed, other classifications can complement the one presented here, and hopefully they will as the community converges on an emergent science of design.
Shown in Table 1, this classification, inspired by the work of Alderson and Doyle, stresses the importance of the type of questions being asked and the kind of answers being produced. When simple answers are found to simple questions, what results is merely simplicity. Simple questions are easy to describe, and they can be studied using experiments or models requiring minimal interpretation. Their simple answers can be verified with short proofs, and any accompanying experiments have reproducible outcomes that are insensitive to small changes in model parameters and assumptions. This discussion is not meant to disparage simple questions with simple answers; indeed, we need to understand simple questions and their simple answers before being able to ask complex questions (e.g., understanding an inclined plane model before being able to study a more complex system with moving elements under friction).
When simple questions are answered with complex answers, what typically results is a set of profound insights regarding emergent complex system behavior. The emergence of complex behavior from simple models is referred to as “chaotic complexity.” While this intersection of complexity and simplicity many times uncovers important insights, this behavior is not typically observed in engineered systems. As a result, although this quadrant points to important insights in complex systems, the focus here is not on this quadrant and its properties.
When complex questions are asked and met with complex answers, what results is “irreducible complexity.” That is, when systems described by experiments and/or models that have many parts that interrelate in nonsimple manners produce delicate, nonresilient insights, the answers are equally as complex as the models. Take, for example, the complex question of developing (or engineering) a nationwide dynamic planning strategy for counterinsurgency (COIN) in Afghanistan. Correspondent Richard Engel reported the equally complex answer, shown in Fig. 1, as developed by the Office of the Joint Chiefs of Staff. Noting the irreducible complexity demonstrated in this diagram, General Stanley McChrystal, the U.S. and NATO force commander, remarked, “When we understand that slide, we will have won the war.” When operating in an environment marked by irreducible complexity, it is very difficult to design effective systems to address the challenges at hand.
The Roman, Mayan, and Chacoan civilizations all may have experienced the consequences of trying to design and implement solutions in an environment of irreducible complexity. Joseph Tainter uses network theory, energy economics, and complexity theory to surmise that these civilizations collapsed because of their inability to adequately handle the increasingly complex questions they were facing, socially, geopolitically, economically, and agriculturally. Could we as a community of design engineers be facing a similar challenge with the increasingly complex systems we are developing? The evidence certainly says we may be.
The defense industry provides a multitude of evidence on the impact of increasingly complex systems. For instance, the U.S. Government Accountability Office (GAO) analyzed major defense acquisition programs and found that research and development costs are on average 42% higher than originally estimated and that the average delay in delivering initial capability to the field is 22 months. An independent report on the British Defense Department estimated that on average defense programs have overrun their schedules by 80%, leading to overall cost increases of 40%. Similarly, in the United States a survey of selected defense projects by the Government Accountability Office found programs in 2009 overran their schedules by an average of 21 months and their budgets by 25%. Also, the F-35 has experienced significant budget and time overages due partially to the order of magnitude increase in subsystems and coordination challenges relative to the F-16. The F-16 has 15 subsystems and on the order of 10^3 interfaces, while the F-35 has 130 subsystems and on the order of 10^5 interfaces, and this increase in complexity has been shown to be the most significant predictor in the growth of fixed wing aircraft costs. It must be noted that complexity may not be the only reason for these schedule and cost overruns in defense acquisition. Indeed, the government procurement process may incentivize firms to underestimate their schedule and cost estimates. However, these cost and budget overruns are also observed in complex system development at NASA and in private industry, including Boeing and Airbus.
While this relationship between a complex answer to a complex question may be more common in biology, physics, and chemistry, it should be more the exception in engineering design. Indeed, as Simon proposed, we should strive for “nearly decomposable systems,” which are in the short run characterized by an approximate independence of the subsystems and in the long run by a dependent aggregation of the behavior of the other components. When we are able to achieve some measure of decomposability, then we can begin to uncover simple answers to complex questions, resulting in elegant complexity.
This marriage of complexity and simplicity has been noted in a number of other fields. Gribbin notes that some influential insights in science have come because “chaos and complexity obey simple laws” and that a “deep simplicity” lies at the foundation of all nature. Also, Charles Perrow recently developed a framework to model the transformation of firms in the new global economy. This framework integrates elements of complexity and simplicity by analyzing whether the interactions within a firm and between a firm and its suppliers and customers are complex or linear, and determining whether these interactions are tightly or loosely coupled. Perrow postulates that most firms will ultimately desire to linearize (i.e., simplify) their market interactions while keeping the couplings tight; this simplistic understanding of a very complex marketplace would eventually result in monopolistic power. Returning to the example of the complex network in Fig. 1, Eric Berlow recently developed a causal loop diagram (CLD) of the chart and then identified guiding features that significantly simplify the insights to the complex question of what the Afghan counterinsurgency strategy should be.
So while the framework in Table 1 provides a general categorization of design problems from a perspective of complexity, the assertion here is that the upper right quadrant—complex problems with simple answers—is where the science of design can make significant advancements. It is essential that as a research community we strive to develop simple answers to complex problems, resulting in elegant complexity that can transform our ability to design the complex systems necessary to solve the global technical, environmental, and social problems that are upon us. The principles that emerge from this pursuit will also facilitate the continued emergence of a design science. For many researchers, this mindset has already been directly or indirectly guiding their recent research. In Sec. 3, some of the current research positioned in the elegant complexity quadrant is presented.
Elegant Complexity in Design Research
In this section, a diverse set of research is presented with an emphasis on highlighting how elegant complexity marks the foundational essence of the work. This includes research on the early stages of a design process where behavior of complex systems is simulated using much simpler models (Sec. 3.1), research on the later stages of a design process where optimal configurations and layouts for complex systems are determined (Sec. 3.2), and research aimed at addressing global, sociotechnical, and usage issues in the context of complex systems (Sec. 3.3). While certainly not comprehensive across all of complex systems design, the work reviewed here was selected because of its relationship to the emerging field of design science. Notably, the work discussed here focuses on the development and application of design principles to a particular element of complexity in systems design. In addition, the selection was informed by researchers involved in the recent National Science Foundation (NSF) Design Workshop Series [26,27], the NSF Future of Multidisciplinary Design Optimization: Advancing the Design of Complex Systems Workshop, and the NSF/NASA Large-Scale Complex Engineered Systems Workshop.
Simulation of Complex Systems in Early Design.
Modeling and understanding the behavior of complex systems early in a design process presents a significant challenge. There has been some very effective research into this area, with a number of groups developing capabilities to capture various elements of complex system behavior to support decisions early in a design process. At the foundation is the following complex question:
How can the behavior of complex engineering systems be simulated early in a design process?
Larson and his research team ask this question and develop answers based on a simple hypothesis: that reliable, uncomplicated component models can be composed into a trusted system model. Their motivation to ask and answer this question is a recognition that much of system design in industry is carried out by reusing previous models and components. However, they recognize that when building reliable multidisciplinary system models from simpler component models, designers are limited in their ability to predict system behavior because of the diversity of component behaviors and interactions. As a result, they develop requirements for compositional system modeling from published solutions, and establish a mathematical definition for component models from existing discipline-specific models conforming to these requirements. They then combine these component models into compositional system models using a set of graph theoretic constructs, as shown in Fig. 2.
These graphs and their associated mathematical notations are then applied to a solar powered unmanned aerial vehicle (UAV), illustrated in Fig. 3. This model demonstrates the emerging complexity of the system model: the system’s propulsion model alone has 13 subsystems with 22 connections between them. They then show that the system model’s computed solution very closely approximates the actual UAV behavior. As a result, a readily available population of applicable discipline-specific models from previous systems can become component models to predict a new system’s complex behavior. This simple approach to handling the complexity of systems can be used for systems involving any combination of subsystem models represented as differential-algebraic equations, discrete event system specifications, and/or computer computation.
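As a loose illustration of this compositional idea (not the authors' actual formalism — the component names, the port-based wiring, and the evaluation scheme below are all invented for this sketch), simple component models can be connected by a directed graph and evaluated in dependency order:

```python
from collections import defaultdict, deque

class ComponentModel:
    """A reusable, discipline-specific model with named input and output ports."""
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

def compose_and_evaluate(components, connections, external_inputs):
    """Wire component outputs to downstream inputs and evaluate the
    composed system model in dependency (topological) order.

    connections: {(src_name, out_port): (dst_name, in_port)}
    external_inputs: {(comp_name, in_port): value}
    """
    by_name = {c.name: c for c in components}
    deps = defaultdict(set)
    for (src, _), (dst, _) in connections.items():
        deps[dst].add(src)
    indeg = {c.name: len(deps[c.name]) for c in components}
    queue = deque(n for n, d in indeg.items() if d == 0)
    values = dict(external_inputs)  # (component, port) -> value
    while queue:
        n = queue.popleft()
        comp = by_name[n]
        # Evaluate this component from its (now available) inputs
        outs = comp.fn(**{p: values[(n, p)] for p in comp.inputs})
        for p, v in outs.items():
            values[(n, p)] = v
            if (n, p) in connections:  # propagate along the graph
                values[connections[(n, p)]] = v
        for m, d in deps.items():      # release downstream components
            if n in d:
                indeg[m] -= 1
                if indeg[m] == 0:
                    queue.append(m)
    return values
```

For instance, a hypothetical solar-panel model feeding a motor model can be composed and evaluated without either model knowing about the other, which is the essence of reusing discipline-specific models as system-model components.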
In related work from Malak and colleagues, the specific focus is how to represent the complex behavior of heterogeneous systems. Technology characterization models (TCMs) are introduced as abstractions of specific disciplinary (i.e., heterogeneous) technologies that represent system behavior. TCMs are mathematical models of the capabilities of a system expressed in terms of the system’s performance attributes. Focusing on system attributes allows a designer to concentrate on the variables that relate component performance to system performance and to ignore lower-level, complex, domain-specific variables. In this way, a designer can recast multiple complex and discontinuous design spaces for multiple system concepts into a simpler, continuous representation of system-level behavior. As a result, TCMs allow competing technologies to be defined in a much simpler way in the same performance space. As an example, consider the conceptual illustration in Fig. 4, which demonstrates a simple TCM in the form of a representation of the Pareto (efficient) frontier for a set of design alternatives. This research extends previous work that modeled parameterized Pareto frontiers and studied the composition of frontiers across heterogeneous systems [31–34].
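The frontier idea underlying a TCM can be sketched very simply. Under the simplifying assumption that every performance attribute is to be maximized (the published work handles parameterized frontiers and attribute directions far more generally), the efficient set of design alternatives is just a non-dominated filter:

```python
def pareto_frontier(alternatives):
    """Return the non-dominated alternatives, assuming every attribute is
    to be maximized. b dominates a when b is at least as good in every
    attribute and differs from a in at least one (exact duplicates of a
    point do not eliminate it here)."""
    return [
        a for a in alternatives
        if not any(
            b != a and all(b[i] >= a[i] for i in range(len(a)))
            for b in alternatives
        )
    ]
```

Each surviving tuple represents a design alternative whose attribute trade-off cannot be improved in one dimension without sacrificing another, which is the performance-space view a TCM exposes to the system-level designer.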
Beyond predicting system-level performance, another important capability is predicting the failure of complex systems. However, in the early stages of a design process, designers have not yet selected specific components, so the detailed and complex models of system components that would facilitate understanding faulty system behavior and its consequences are not available. Instead, the systems are represented using low-fidelity, simpler models of intended functionality. As a result, another complex research issue is how to analyze functional failures and fault propagation at an abstract system topology level before any potentially high-cost design commitments are made. To address this challenging issue, Tumer and colleagues developed the functional-failure identification and propagation (FFIP) framework to enable a designer to draw simple conclusions about the behavior of a complex system. Using FFIP, designers can proactively analyze the functionality of a system early in a design process, understand functional failures and their propagation paths, and determine which functions are lost and what the impact to the overall system could be.
The FFIP analysis framework and its three main modules are shown in Fig. 5. The first module of the FFIP analysis framework uses a set of graphical system-level representations, including a functional system model, a configuration model, and a behavior model to represent the system [36–39]. This representation provides an elegant and consistent schema to capture the function–configuration–behavior architecture of a system at an abstract level. It also helps build function failure logic, facilitating the reasoning about potential faults and their propagation through the system. FFIP reveals mappings that are otherwise difficult to see in complex systems and provides an elegant simulation environment for early design stage analysis, and as such, makes reliability and risk analysis possible during the qualitative stages of design. As we move into the more quantitative stages of design, additional complex research issues emerge as presented in the following section.
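The propagation-path idea can be caricatured as graph reachability. FFIP itself uses behavioral simulation and function failure logic over the function–configuration–behavior models; the sketch below simply assumes, pessimistically, that a failure always propagates downstream along every flow, and the function names are invented:

```python
from collections import defaultdict, deque

def propagate_failure(flows, failed_function):
    """flows: directed (upstream_fn, downstream_fn) edges over the
    functional model. Returns every function whose operation may be
    lost, under the worst-case assumption that a failed function
    disables each function it feeds."""
    succ = defaultdict(list)
    for u, v in flows:
        succ[u].append(v)
    lost, queue = {failed_function}, deque([failed_function])
    while queue:
        for g in succ[queue.popleft()]:
            if g not in lost:
                lost.add(g)
                queue.append(g)
    return lost
```

Even this crude reachability view shows why early, abstract failure analysis is valuable: the set of potentially lost functions depends only on the system topology, which is known well before components are selected.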
Modeling, Simulation, and Optimization of Complex Systems in Later Design Stages.
As noted by the editors of the recent special edition of the Journal of Mechanical Design on Designing Complex Engineering Systems, multidisciplinary systems are complex and multifaceted; they have emergent and unpredictable behavior, and their solutions must integrate knowledge from multiple disciplines while managing a wide range of risks and uncertainties. Unfortunately, common approaches to solving these problems are ad hoc and reductionist, often resulting in cost overruns, schedule delays, and solutions that perform poorly. We have clearly reached the limits of what these approaches can do. To make progress, we need a more rigorous and deeper understanding of complex engineered systems and how they should be designed; we need firmer foundations for a science of design. The approaches presented in this section are all motivated by providing rigorous approaches for complex systems design from which elegant simplicity emerges in the accompanying formulation, solution, and/or insights.
Not only have products and systems become more complex, but the number and type of issues that must be accounted for in a design process is staggering. For instance, while life-cycle product analysis has been an active area of research, intentionally designing products for rapid and easy recovery has become a rapidly growing research field. Considering product recovery requirements when the recovery is years, if not decades, away is in itself a very complex problem to model and solve. At the core of this issue are the following questions:
How do design differences in complex products impact product recovery and what architectural characteristics are desirable to facilitate recovery?
The links between product design and the product recovery process have largely been unknown. However, Kwak and Kim develop a framework to address these questions by optimizing the product architecture while simultaneously considering the recovery network. As a result, recovery profit is estimated based on the optimal reprocessing options for a product, as well as on the optimal recovery network design. The simple yet elegant result of the study is that differences in product design have a great influence on potential profit from product recovery. For example, the study demonstrated that a modular design for a cell phone handset is preferable to an integrated design when a high rate of defects in the LCD screen is expected. This result elegantly implies that part composition has a greater impact on handset recovery profit than does assembly structure or weight, especially when the part has a relatively high cost.
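To see in miniature how design-dependent reprocessing profits shape the answer, consider a toy sketch (module names, options, and profits are hypothetical; the actual framework optimizes the product architecture and the recovery network jointly, rather than module by module):

```python
def best_recovery_plan(modules):
    """modules: {module_name: {recovery_option: profit}}.
    Pick the most profitable end-of-life option for each module
    independently and total the result -- a drastic simplification of
    the cited joint optimization, used only to illustrate how per-part
    reprocessing economics drive recovery profit."""
    plan = {m: max(opts, key=opts.get) for m, opts in modules.items()}
    total = sum(modules[m][o] for m, o in plan.items())
    return plan, total
```

In this caricature, a modular architecture is what makes the per-module choice possible at all: an integrated design would force a single recovery option on parts with very different reprocessing economics.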
While this framework effectively addresses a complex issue with a simple answer, demonstrating an example of elegant complexity, Kwak and Kim point out that many recovery network features are uncertain. In addition, the application of such a framework to other complex systems will demand the integration of considerations from multiple disciplines. Handling uncertainty across multiple disciplines in a system optimization process presents another difficult question that researchers are attempting to find simple answers for:
How can disciplinary consistency be maintained in the design of complex engineering systems under uncertainty?
In a deterministic case, when a multidisciplinary analysis is decomposed, each subsystem should eventually have the same value for the coupling variables; otherwise, the subsystems are said not to be consistent [42–44]. Typical approaches, such as using an Interdisciplinary Consistency Constraint, are not effective when input uncertainty is introduced because the uncertainty in the inputs leads to a range rather than a single value for each coupling variable. An elegant solution to this complex problem is offered in the form of multiobjective collaborative robust optimization (McRO). McRO can find robust solutions for multiobjective, multilevel, coupled optimization problems in which uncontrollable variations exist not only in the parameters of each subsystem but also in the coupling variables. McRO requires a tolerance region for uncertain parameters and an acceptable variation range for objective functions and coupling variables, which provides a cushion to absorb the variation in coupling variables. In this case, as long as the variation in coupling variables is within the tolerance region of the targets, all subsystems are said to be collaboratively consistent.
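The collaborative-consistency condition itself is compact. In the hedged sketch below (variable names and the symmetric tolerance band are assumptions for illustration; McRO's actual tolerance regions are part of the optimization formulation), each subsystem reports an interval for every shared coupling variable, and consistency holds when each interval stays within the agreed band around the target:

```python
def collaboratively_consistent(coupling_intervals, targets, tolerance):
    """coupling_intervals: {var: (lo, hi)}, the range each coupling
    variable can take under input uncertainty. The subsystems are
    treated as collaboratively consistent when every interval lies
    within +/- tolerance of the shared target value."""
    return all(
        targets[v] - tolerance <= lo and hi <= targets[v] + tolerance
        for v, (lo, hi) in coupling_intervals.items()
    )
```

The point of the cushion is visible here: a single shared value would almost never match across subsystems under uncertainty, but an interval inside a tolerance band can.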
This elegant approach to absorb variation in the mathematical formulation of a complex system now allows fully coupled multidisciplinary design optimization problems with interval uncertainty to be solved. Moreover, the solutions are comparable to a single-disciplinary robust optimization approach.
Although ensuring disciplinary consistency under uncertainty is a significant accomplishment, additional sources of complexity are introduced when the system itself is required to undergo transformation or reconfiguration. In this case, not only are the operating conditions changing, but the system is as well. Reconfigurable systems are emerging as an effective way to perform at a high level in different operating environments. However, researchers and practitioners are realizing that they can be among the most complex systems to model, design, and deploy. Indeed, since many reconfigurable systems are based on biological principles, mimicking the designs of nature becomes a daunting task. Despite the complexity of reconfigurable systems in biological contexts, Singh and his colleagues were able to identify three design principles for the transformation of systems: expand/collapse, expose/cover, and fuse/divide. Haldaman and Parkinson argue for a fourth principle, reorientation, based on a sample of products whose reconfiguration is not captured by the previous three principles. These researchers have successfully extracted simple principles from complex behavior in natural systems, giving designers a set of elegant solutions to complex problems. For instance, Ferguson and his research team demonstrate the effectiveness of using reconfigurable principles to accommodate system uncertainty while also increasing important aspects of the system’s performance. Their elegant solution provided innovative insights into the development of complex systems.
A bottom-up approach to developing reconfigurable systems is to consolidate two (or more) existing single-state static products into an integrated product that provides multiple functions. This requires the integration of two or more already complex systems into a new system with increased functionality. At the core of this research challenge is the following question:
How can two or more complex systems be consolidated into a multifunctional product?
In a general sense, the idea of consolidation is motivated by simplification. However, when increased functionality and more complex operation result from product consolidation, simplification is difficult. Yet Kalyanasundaram and Lewis have developed an approach based on function sharing and flow analysis that reduces the consolidation process to a quantitative assessment of component reuse. Functional similarity is quantified to identify the functions that can be shared by the same components. The information obtained from the function structure is then mapped to the components of two existing products to analyze their roles in the final reconfigurable product architecture.
Each component in the original systems is allocated into one of three categories: common, representing similar components that perform the same function in the systems; dormant, representing components that perform a function in one state of operation, but are nonfunctional in another state; and conflicting, representing components that must undergo some type of reconfiguration in order to remain in the new multifunctional product. The prominence of each of these three categories of components is quantified and represented elegantly with a prism as illustrated in Fig. 6. The superscripts on the “dormant” and “conflicting” axes correspond to which product the axis represents (product 1 or product 2). The height of the prism (the common core axis) represents the potential for component sharing between the products.
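The three-way allocation can be sketched with sets. This toy version assumes each component serves exactly one function per product (the published method instead quantifies functional similarity from the products' function structures), and the component and function names are invented:

```python
def categorize_components(product1, product2):
    """product1, product2: {component: function} maps (one function per
    component -- a strong simplification). Components appearing in both
    products are 'common' if they serve the same function and
    'conflicting' if their functions differ (they would need some
    reconfiguration); components unique to one product would sit
    dormant during the other product's operating state."""
    common, conflicting = set(), set()
    dormant = {1: set(), 2: set()}
    for c in set(product1) | set(product2):
        if c in product1 and c in product2:
            (common if product1[c] == product2[c] else conflicting).add(c)
        elif c in product1:
            dormant[1].add(c)
        else:
            dormant[2].add(c)
    return common, dormant, conflicting
```

The relative sizes of these three sets are what the prism of Fig. 6 visualizes: a tall common core suggests strong potential for component sharing in the consolidated product.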
While reconfigurable systems transform to accommodate changes in operating conditions, changes in technology also create a need for resilient products and systems. At some point, most electromechanical systems will be impacted by new technology, whether it is upgraded software, higher quality components, or new technology modules. Studying the impact of the infusion of such new technology into systems and products is an important research challenge and is captured in the following question:
What is the impact of the integration of new technologies into existing product lines?
de Weck and colleagues have studied this complex issue and developed a repeatable and scalable process that extracts simple insights into the lifecycle design of large systems [51,52]. Their systematic framework supports the assessment of the impact of technology infusion early in the product planning cycle. In addition, it quantitatively predicts the impact of technology infusion through the use of a design structure matrix (DSM) and the subsequent creation of a delta DSM (ΔDSM) describing the changes to the original system due to the infused technology. In Fig. 7, a complete DSM representation of a baseline printing system is shown. The DSM consists of 84 elements and shows physical connections, mass flows, energy flows, and information flows within the system.
The cost for technology infusion is then estimated from the ΔDSM, and the potential market impact of the technology is calculated based on customer value, expressed through utility curves for system technical performance measures. Finally, probabilistic analysis is performed to predict the change in the net present value of a newly infused technology. The answers that are found to the original question are strikingly simple. For instance, the researchers found that the greater the range of uncertain operating conditions, the more lifecycle value an adaptable system can deliver. By generating simple answers to complex questions, this research demonstrates the essence of elegant complexity.
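At its core, a ΔDSM is an entrywise comparison of interface matrices before and after infusion. The sketch below is a deliberately reduced view (the cited framework distinguishes flow types and estimates cost from the changes; here we only count changed interfaces, and the matrices are illustrative):

```python
def delta_dsm(baseline, infused):
    """baseline, infused: square 0/1 interface matrices (lists of lists)
    for the system before and after technology infusion. Returns the
    entrywise difference (+1 = interface added, -1 = interface removed)
    and the count of changed interfaces, a rough proxy for the scope of
    redesign the infused technology demands."""
    n = len(baseline)
    delta = [[infused[i][j] - baseline[i][j] for j in range(n)]
             for i in range(n)]
    changed = sum(1 for row in delta for x in row if x != 0)
    return delta, changed
```

A sparse ΔDSM indicates a technology that slots into the existing architecture cheaply; a dense one signals wide-reaching redesign, which is precisely the early-planning insight the framework is after.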
While the design of multidisciplinary systems provides complex challenges, introducing globalization and sociotechnical issues creates another set of complex questions.
Addressing the Complexity of Globalization and Sociotechnical Interfaces.
Globalization has created a wealth of additional challenges for design researchers. As noted by Augustine, new complexity issues are emerging in the flatter world we now inhabit because of the social influences, decentralization, and cultural forces impacting the development of products and systems. For example, it has become important to capture user preferences and then link them to a system optimization model [54–57]. Because this is a very difficult task given the potential variability in population preferences, the number of variables in a preference model is often kept small. However, a number of qualitative aspects of design, including stylistic issues, demand a large number of variables to model accurately. As a result, a complex question that has recently been posed is as follows:
How does one quantitatively capture user preferences on qualitative aspects of design?
Ren and Papalambros address this complex issue by developing an efficient global optimization algorithm that both converges quickly to a preferred design and explores potentially preferred new designs. The algorithm is applied to an automotive styling problem of high geometric dimension and is able to support the identification of target (preferred) designs. This approach elegantly guides users to preferred designs through an interactive optimization process, effectively addressing what can be a very complex problem of preference solicitation in a high-dimensional space.
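The flavor of such an interactive loop can be conveyed with a simple one-dimensional sketch. This is not Ren and Papalambros' algorithm; it merely illustrates preference-guided search: the optimizer repeatedly shows the user two candidate designs and shrinks the search interval toward the preferred one. The simulated user and the hidden ideal point are assumptions made so the example can run without a human in the loop.

```python
# Illustrative preference-guided search over one styling parameter in [0, 1].
# A real system would query a person; here the "user" is simulated with a
# hidden ideal point (an assumption for demonstration only).
def simulated_user_prefers(a, b, ideal=0.62):
    """Return whichever candidate is closer to the user's hidden ideal."""
    return a if abs(a - ideal) < abs(b - ideal) else b

def interactive_search(lo=0.0, hi=1.0, queries=20):
    """Shrink the interval toward the preferred side of each paired query."""
    for _ in range(queries):
        a = lo + (hi - lo) / 3          # two probe designs inside the interval
        b = hi - (hi - lo) / 3
        if simulated_user_prefers(a, b) == a:
            hi = b                       # preferred region lies to the left
        else:
            lo = a                       # preferred region lies to the right
    return (lo + hi) / 2

print(round(interactive_search(), 2))  # → 0.62, the hidden ideal
```

Twenty pairwise queries locate the ideal to within a fraction of a percent; the real research problem is doing this efficiently when the design space has hundreds of geometric dimensions rather than one.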
Not only is it critical to capture user choice preferences before product/system deployment, but the dynamics of user interaction with the product/system are often essential to model and simulate. Since user interactions with a product/system involve feedback through multiple senses, developing a comprehensive analytical model of such complex interactions remains a future research challenge. However, modeling and simulating these interactions provide a way to extract simpler heuristics and principles that can guide the design of systems characterized by significant user interaction, as noted in the following research question:
How can complex sociotechnical system use interactions be captured and utilized in the design of more effective products?
Consumer choice modeling has become prominent in engineering design research, and while what a consumer chooses can be modeled, why they make those choices is a much more complex issue. To gain insight into this issue, Chen and her research team quantify the impact of usage context on consumer choice using a hybrid electric vehicle (HEV) case study, representing a product whose purchase motives are particularly important to capture and understand. This enhanced understanding of a complex and dynamic environment allows engineering designers to determine optimal performance targets for product development. These targets represent a relatively simple vector of scalar values that subsequently provides a mechanism to design products with enhanced usage in complex heterogeneous markets. The model was further expanded to include product, consumer, and social network interactions in HEV market choices. While this increased the complexity of the fundamental research issues, the insights are wonderfully simple. For instance, the research team drew conclusions about the influence of HEV owners on their friends and the impact of education level on HEV preferences.
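The core mechanism of a usage-context choice model can be sketched in the spirit of a multinomial logit, where the part-worth a consumer places on an attribute depends on the usage context. All attribute values and coefficients below are illustrative assumptions, not estimates from the cited study.

```python
import math

# Hedged sketch of usage-context choice modeling: the weight on fuel economy
# depends on the usage context (share of city driving). Coefficients and
# normalized attributes are invented for illustration.
def choice_probabilities(alternatives, city_share):
    # utility = context-weighted fuel-economy term minus a price term
    utils = [(4.0 * city_share) * a["mpg_norm"] - 2.0 * a["price_norm"]
             for a in alternatives]
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

cars = [{"name": "HEV",      "mpg_norm": 1.0, "price_norm": 1.0},
        {"name": "gasoline", "mpg_norm": 0.5, "price_norm": 0.7}]

city = choice_probabilities(cars, city_share=0.9)   # city-heavy usage context
hwy  = choice_probabilities(cars, city_share=0.2)   # highway-heavy context
print(round(city[0], 2), round(hwy[0], 2))  # → 0.77 0.45
```

The same product attributes yield very different predicted HEV shares across contexts, which is precisely why purchase motives, not just purchases, must be modeled before setting performance targets.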
Sociotechnical interactions are also prominent in emergency scenarios, where systems such as aircraft, subways, and buildings must support optimal evacuation performance. The design of such systems must account for complex interactions among occupants and between occupants and their surroundings. Mesmer and Bloebaum have developed a simulation tool to identify principles based on the emergent behavior of sociotechnical systems, including aircraft, buildings, and their surrounding environments. One element of the simulation tool—the use and impact of personal communication devices—has been the focus of recent studies. Based on this study and others, some simple yet powerful insights can be gained from simulating complex sociotechnical systems. For instance, in some scenarios the presence of groups (e.g., families, co-workers) detracted from the evacuation process, as members moved against the evacuation flow to find their group members. In other scenarios, groups with strong leaders significantly helped the evacuation process. These insights, along with others, can inform the design of such systems and their surrounding environments by extracting simple principles from very complex sociotechnical interfaces.
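The group-backtracking effect can be reproduced in a toy agent model. This is an illustrative sketch, not the authors' simulator: agents walk one cell per step toward an exit at position 0, and an agent separated from a group member turns back until reunited, mimicking the observed counter-flow behavior. The positions, pairing, and movement rule are all assumptions.

```python
# Toy evacuation sketch: agents on a line move toward an exit at position 0.
# An agent whose group partner is still deeper in the building moves against
# the flow until they reunite. Returns the summed evacuation time of all agents.
def total_evacuation_time(positions, partners=None, max_steps=200):
    pos = list(positions)
    done = {}                            # agent index -> step of evacuation
    for t in range(max_steps):
        for i in range(len(pos)):
            if i in done:
                continue
            j = (partners or {}).get(i)
            if j is not None and j not in done and pos[j] > pos[i]:
                pos[i] += 1              # move against the flow toward partner
            else:
                pos[i] -= 1              # move toward the exit
            if pos[i] <= 0:
                done[i] = t + 1
        if len(done) == len(pos):
            break
    return sum(done.values())

solo = total_evacuation_time([3, 5, 9])
paired = total_evacuation_time([3, 5, 9], partners={0: 2, 2: 0})
print(solo, paired)  # → 17 23: the pair's back-tracking delays evacuation
```

Even this crude rule reproduces the qualitative finding: the nearer group member abandons the exit to search for a partner, inflating total evacuation time.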
When designing complex sociotechnical systems, design decisions and tasks are often allocated to disciplinary subsystem teams, who must then communicate and coordinate their subsystem solutions. There are a number of different allocation, coordination, and convergence strategies for such distributed design problems, as well as many process architectures, including parallel tasks, optimally sequenced tasks, and hybrid structures. Until recently, the process architecture was thought to affect the rate of convergence to a particular solution, but not whether the solution process would ultimately converge. A recent study challenged this assumption and asked the following question:
In a complex network of coupled subsystem optimization problems, what impact does the process architecture have on the final system solution?
A number of frameworks have studied these coupled subsystem optimization problems in the context of multidisciplinary design optimization, including analytical target cascading (ATC), concurrent subspace optimization (CSSO) [63,64], bilevel integrated system synthesis (BLISS), and collaborative optimization (CO). While these approaches each have certain advantages in terms of global optimality, convergence speed, or compatibility constraints, the process flow structure is assumed to be fixed in most approaches. While the original ATC formulations were hierarchical in nature, formulations that accommodate nonhierarchical subproblems and separable subproblems to facilitate parallel computation have been developed. However, these developments focused on providing additional flexibility and computational efficiency rather than on whether the design optimization problem converged at all. Lewis and Devendorf found that process architecture not only impacts the convergence speed (which was already known), but can determine whether the process even converges. As a result, a parallel equivalent model was created, allowing for a straightforward determination of the convergence properties of any complex decentralized network of coupled design optimization problems.
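That process architecture alone can decide convergence is easy to demonstrate on a small coupled system. The sketch below is not the authors' model; it uses three subsystems whose local "best responses" solve one row each of a linear system. With the assumed coupling strength of 0.6, a parallel architecture (all teams update simultaneously from last iteration's values) diverges, while a sequential architecture (each team sees the latest updates) converges on the same problem.

```python
# Three coupled subsystems, each responsible for one variable of A x = b.
# Coupling strength 0.6 is chosen so the parallel (Jacobi-style) process has
# spectral radius 1.2 and diverges, while the sequential (Gauss-Seidel-style)
# process converges. All numbers are illustrative.
A = [[1.0, 0.6, 0.6],
     [0.6, 1.0, 0.6],
     [0.6, 0.6, 1.0]]
b = [1.0, 1.0, 1.0]

def iterate(parallel, iters=100):
    x = [0.0, 0.0, 0.0]
    for _ in range(iters):
        src = list(x) if parallel else x   # parallel teams see old values only
        for i in range(3):
            others = sum(A[i][j] * src[j] for j in range(3) if j != i)
            x[i] = (b[i] - others) / A[i][i]
    return max(abs(v) for v in x)

print(iterate(parallel=True) > 1e6)    # True: the parallel process blows up
print(iterate(parallel=False) < 1.0)   # True: the sequential process converges
```

The same subsystem problems, coupled the same way, succeed or fail depending only on the order in which teams exchange information, which is the essence of the finding.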
These distributed design problems are often allocated to teams across the globe who are trying to determine the best product strategy for a number of global markets. Consequently, not only are issues of decentralization, convergence, stability, sociotechnical interactions, and optimality present, but additional considerations of regional cultures, global economics, and international regulations arise. In global product design, there are numerous factors to consider, and the interactions among all these factors create a highly uncertain environment (see Fig. 8).
Because of the volume of issues to consider in such a complex product design problem, there could be hundreds of possible design strategies and thousands of resulting solutions. As a result, a fundamental research question being asked is the following:
In global product design, what is the best product design strategy to use when integrating user, technological, and regulatory requirements?
This question is addressed by Simpson, Parkinson, and their team, who identify a set of elegant solutions centered on determining whether to create a single global platform of products, a flexible global platform of products, or a series of unique products for each region. This top-down approach can help simplify a very complex task that many companies currently face as global markets emerge and holistic product design opportunities arise.
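The structure of the three-way strategy decision can be shown with a back-of-envelope comparison. Every number below is a hypothetical assumption: a single global platform pays one development cost plus a per-region "mismatch" penalty, unique products fit each region perfectly but repeat the development cost, and a flexible platform sits in between.

```python
# Hypothetical strategy comparison; costs and the flexibility premium are
# invented to illustrate the trade space, not drawn from the cited work.
def strategy_costs(regions, dev_cost, mismatch, flex_premium=0.4):
    total_mismatch = sum(mismatch[r] for r in regions)
    return {
        "single":   dev_cost + total_mismatch,            # one design, poor fit
        "flexible": dev_cost * (1 + flex_premium)         # costlier platform,
                    + 0.25 * total_mismatch,              # most mismatch removed
        "unique":   dev_cost * len(regions),              # perfect fit, n designs
    }

costs = strategy_costs(
    regions=["NA", "EU", "Asia"],
    dev_cost=100.0,
    mismatch={"NA": 10.0, "EU": 40.0, "Asia": 60.0},
)
best = min(costs, key=costs.get)
print(best)  # → flexible, for these assumed numbers
```

Sweeping the mismatch penalties or the flexibility premium flips the winner, which is why the choice among the three strategies must be revisited per product line rather than fixed as policy.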
A growing tension is emerging in engineering design practice as systems become more and more complex. As noted by the editors of the recent special issue on “Designing Complex Engineering Systems,” we face countless challenges in understanding, designing, and managing complex systems in the areas of defense, building, transportation, energy, food, water, disasters, global markets, healthcare, aging populations, and more. Repeatedly, we are learning that we have reached the limit of what current engineering design approaches can offer; we are in desperate need of a fresh mindset on these complex problems. While understanding these complex systems is important, what is also critical is the ability to identify the elegant design principles that reside at the foundation of the efficient creation of such systems. The golden section principle stands as an example of an elegant guideline applicable to many fields, including mathematics, architecture, art, biology, philosophy, and communication systems. As a design community, there is a glaring challenge before us: to establish similarly elegant design principles that can provide foundational support for a design science supporting the ever-accelerating conception of complex systems across numerous domains.
While the development of a research agenda in complex systems design is beyond the scope of this article, a number of open challenges emerge from the discussion of current advancements in Sec. 3. It is clear that many researchers, including those discussed here, are recognizing the limitations and perhaps even the flaws of long-standing assumptions, tools, and methods. Given that our current methods, theories, and tools, denoted collectively as Π, are not working as well as we would hope, we must develop new methods, theories, and tools, denoted collectively as Π′, in the context of complex engineered systems. If Π′ improves upon Π, then we can progress toward a science of complex engineered systems. However, as also noted in related discussions [72,73], to facilitate this improvement the following need to occur.
Π′ explains more facts than Π: How do we develop better descriptive models for the emergent behavior we observe in the organizations that design complex engineered systems? And where are the opportunities to develop descriptive models of natural systems to help design better engineered systems?
Π′ predicts more facts than Π: How do we develop better predictive models for future emergent system behavior governed by uncertainty and interdependencies? Many current tools and methods support the prediction of a “best” design. However, we must consider how we are measuring “best” and if this measurement aligns with how we want or need our complex systems to perform. Without a sound way to quantitatively evaluate “best,” we are left to predict performance using individual judgment and experience.
Π′ makes fewer mistakes than Π: How can we increase the rigor of the evaluative capabilities of our methods based on known principles of logic and consistency? The sheer number of people and processes interacting to realize a complex engineered system demands more capability to capture the inherent interactions between physical science and social science models.
Π′ is simpler than Π for a given level of explanatory power: Complexity in the natural world arises from simple rules, and complex effects are often explained by simple causes. While our engineered systems are becoming more complex, how can we evaluate the simplicity or elegance of methods while maintaining rigor and consistency?
Π′ solves more problems than Π: Simpler, more elegant answers have the potential to address broader classes of complex problems characterized by similar behavioral phenomena. How can these fundamental phenomena be better identified and shared across application domains to efficiently leverage design elegance?
In this article, I have tried to provide a framework for considering the formulation and answering of design research questions in the context of the pervasive growth of complex systems. While it may be appropriate to ask complex questions, their answers can be either equally complex or reducibly simple. When the answers are equally complex, the result is irreducible complexity; it is this result that has led to the quandary we find ourselves in. When complex questions are answered with accompanying complexity, knowledge may be gained, but the usability of this knowledge to actually solve the complex problem with practical effectiveness may be minimal. For this reason, large enterprises struggle to manage the resulting complexity, as the design insights are difficult to realize in a producible form.
Developing complex answers to complex questions is often viewed as the scientifically preferred route. However, in engineering design, the preferred route, I postulate, must be one of elegant complexity, where simple answers provide elegant insights into complex problems. Otherwise, we may see continued budget overruns, production schedules that extend years beyond their deadlines, and large-scale systems designed with irreducible complexity that very few understand. When complex questions are answered with reducible complexity, usable knowledge to solve the problem in innovative and effective ways is gained, this knowledge can be shared across large enterprise networks, and elegant design principles emerge. While it is not always possible to answer complex questions with simple answers, this article highlights a number of researchers who have done precisely this.
The recent Costa Concordia cruise ship disaster may provide an alarming foreshadowing of what could await other complex systems when the underlying fundamentals of a system’s design, operation, and function are not understood. More than a decade ago, Professor Dracos Vassalos recognized that ship design was heading toward a fragile tipping point, calling for an overhaul of the ship design process. After the disaster off the coast of Tuscany, Prof. Vassalos captured the quandary the ship design research and industry communities now find themselves in, observing that “the internal architecture of cruise ships is so complex that even with the same effects being accounted for in…experiments, computer simulations, or indeed, in real-life accidents, we could potentially see a different outcome every time we simulate the accident”.
The ship design community is not the only one in this quandary; indeed, many complex system domains share this plight. As another example, at a recent NSF and NASA Workshop on Large-Scale Complex Engineered Systems, Mark Ballin of NASA-Langley, while addressing the design of a new National Airspace System (NAS), noted that the current NAS is not even well understood due to its enormous complexity.
It is important to note that people are being transported around the world more safely than ever before. However, the increasingly complex systems that are critical cogs in our transportation networks (e.g., ships, planes, communication systems) are setting the stage for perhaps fewer but more catastrophic disasters, as progressively safer systems could cause operators to underestimate the risk of certain environments. Additionally, as engineers design better and more functional systems, previously unrealized weak points may be unintentionally exposed. Finally, the complexity that makes these systems so impressive may actually facilitate and accelerate failure when something goes wrong.
As an interdisciplinary research community, we have a tremendous opportunity to identify fundamental design principles to model, simulate, realize, and support these large-scale systems. With this opportunity, we find ourselves postured to make significant impact not only on the scientific design of complex technical engineering systems, but also on complex engineered systems in the civic, economic, healthcare, environmental, social, and political arenas.
The author would like to thank the professional colleagues highlighted in this article for doing such excellent research that exemplifies elegant complexity. The author is also grateful to the National Science Foundation and the National Aeronautics and Space Administration for envisioning and supporting the “Workshop on Large-Scale Complex Engineered Systems,” where some of these ideas were first presented. Lastly, the author would like to thank the anonymous reviewers for their discerning comments and suggestions, which sharpened many of the ideas and greatly improved the paper.