Abstract

This article introduces 55 prompt questions that design teams can use to consider the social impacts of the engineered products they develop. These 55 questions were developed by a team of engineers and social scientists to help design teams consider the wide range of social impacts that can result from their design decisions. After their development, the 55 questions were tested in a controlled experiment involving 12 design teams. Given a 1-h period of time, 6 control teams were asked to identify as many social impacts as possible within each of the 11 social impact categories identified by Rainock et al. (2018, "The Social Impacts of Products: A Review," Impact Assess. Project Appraisal, 36, pp. 230–241), while 6 treatment teams were asked to do the same while using the 55 questions as prompts for the ideation session. Considering all 1079 social impacts identified by the teams combined and using 99% confidence intervals, the analysis of the data shows that the 55 questions cause teams to more evenly identify high-quality, high-variety, high-novelty impacts across all 11 social impact categories during an ideation session, as opposed to focusing too heavily on a subset of impact categories. The questions (treatment) do this without reducing the quantity, quality, or novelty of impacts identified, compared to the control group. In addition, using a 90% confidence interval, the 55 questions cause teams to more evenly identify impacts when low-quality, low-variety, and low-novelty impacts are not filtered out. As a point of interest, in the case where low-quality and low-variety impacts are removed but low-novelty impacts are not, the same conclusion can be drawn for the treatment, though with only 85% confidence.

1 Introduction

Most engineered products have an impact on society [2]. Those social impacts affect sustainable development, as do environmental and economic impacts [3]. Thoughtful engineering decisions made in consideration of all three impact areas are most likely to support the United Nations (UN) sustainable development goals (SDGs), which address the global challenges of our day, including climate change [4,5]. This article is focused solely on improving the design team’s ability to meaningfully consider the social dimension of sustainable design.

As a way of decomposing the challenging task of considering and evaluating the social impact of engineered products, Rainock et al. carried out a substantial literature survey of 121 papers from 72 different journal sources in numerous disciplines [1]. The goal of the survey was to identify the various ways engineered products impact society. Rainock et al. identified 11 social impact categories, which represent the scope of the present article. They are as follows:

  • Impacts on health and safety

  • Impacts on education

  • Impacts on paid work

  • Impacts on conflict and crime

  • Impacts on family

  • Impacts on gender

  • Impacts on human rights

  • Impacts on stratification

  • Impacts on social networks and communication

  • Impacts on population change

  • Impacts on cultural identity and heritage

Ottosson et al. asked a multidisciplinary team consisting of eight engineers and social scientists to map these social impacts onto 150 products designed for social good. With this mapping, they identified the conditional and joint probability of impacts in these categories being co-present in any one product [6]. They found that in no case did a product have impact in only one of these categories, thus pointing to the value of considering various social impacts for any one product.

Pack et al. mapped these social impacts to industry practice by interviewing 46 individuals at 34 companies in search of the breadth to which social impacts are considered by design and engineering professionals [7]. The study found that social impacts were considered by these professionals, but not in a manner as comprehensive or holistic as the social impact categories identified by Rainock et al. [1]. Pack et al. found that industry professionals were significantly more focused on health and safety impacts, seemingly at the expense of the other social impact categories. Figure 1 illustrates the main finding from Pack et al. that is pertinent to the present article [7].

Fig. 1: Results of Pack et al.'s industry survey on attention given to social impacts based on category [7]

Pack et al. also discovered that design teams have very few tools at their disposal for considering the social impacts of their design and engineering decisions [7].

Various tools have since been developed with hopes of facilitating a design team’s consideration of social impacts during the development process. Those include social failure modes and effects analysis (FMEA) [8], social impact modeling [9], use of social impact sensors [10,11], and more. In recent years, there has been an increase in the development of complex system-based models, including those that rely heavily on adoption models [12], agent-based models [13], and optimization techniques. Although these contributions are meaningful, there is still a shortage of simple, generalized, and quick-to-use tools that can positively guide design teams to more fully consider the social dimension of their work.

This article introduces a simple set of thought questions, designed by a team of engineers and social scientists, to be used as prompts when considering social impacts for a given product. This set of questions was tested in a controlled experiment and shown with 99% confidence to help teams more evenly identify high-quality, high-variety, high-novelty impacts across all 11 social impact categories during an ideation session, as opposed to focusing too heavily on a subset of impact categories. Importantly, these questions support the team in this way without reducing the quantity of impacts identified.

To present the questions, experiment, and findings, the remainder of this article is organized as follows: Sec. 2 presents a historical perspective on social impact consideration as well as a review of pertinent literature. Section 3 presents the 55 questions and their development, followed by Sec. 4, which describes the test used to validate the questions. Section 5 presents results. Finally, in Sec. 6, concluding remarks and limitations are presented.

2 Historical Perspective and Literature Survey

The social impact of technology has been a topic discussed in the archival literature since the 1940s, beginning with Marcuse, who described how technological solutions strongly influence human behavior [2]. His vivid examples include highway design with its careful placement of vehicle parking near scenic vistas, refueling locations, and signage regarding leisure and refreshment. Implied from this example are social impacts on cultural identity and heritage, paid work, networks and communication, and population change. He provides other examples that extend from technology’s influence on behavior to its influence on thoughts and priorities when he reports “the average man hardly cares for any living being with the intensity and persistence he shows for his automobile.” This implies the technology’s social impact on gender, family, and possibly more. Marcuse’s thesis is powerful; the engineer—through technology development—is a social leader. While a few specific social impacts can be implied from his examples, Marcuse provides little guidance for the engineer to grasp or plan for the social impacts he or she has.

In 1947, with growing acknowledgment of technology’s positive and negative role in society, Bartlett examined the social impacts of the era’s most impactful technology: the radio [14]. His study used measures of listener audience size and location coupled with a variety of social surveys to draw conclusions about the radio’s social impact. Bartlett discusses the radio’s influence on voters and reelection, farmers, and society’s confidence in news reporting. Bartlett reports various details including that farm-family cohesiveness increased with the adoption of the radio. He concludes that the radio “made life more appealing” and that its wide and rapid diffusion was due to its universality. Although more explicit than Marcuse [2] in articulating specific social impacts, Bartlett’s review of the radio’s social impacts does not attempt to establish social impact theories or methodology.

By the 1960s, however, theoretical underpinnings still present in modern sustainable development emerge as researchers focus on what is now considered traditional socio-economics (impacts on health, education, and income). Centering on health and safety, Starr introduced a basic utility-risk framework for evaluating social impacts [15]. To identify the social impacts needed to carry out his framework, Starr indicates that readily available historical data on accidents and health are “stepping stones” for design teams to find the social benefits and costs of technology. While true, his guidance is minimal, leaving design teams with complex sociotechnical systems that are difficult to decompose from a social impact perspective. The challenge is worsened by accelerated technology diffusion; as Starr observes, “engineering developments involving new technology…become deeply integrated into the system of society before their impact is evident or measurable” [15].

In the 1980s, quantitative measures of social impact become more present in the literature, centered primarily on technology safety [16,17]. Kenney [16] introduces a von Neumann–Morgenstern utility function [18], which is executed using a hierarchy of individual and societal level impacts—all based on fatalities caused by technology. While this work takes an important step toward quantifying social impact, its impacts are so narrowly focused on fatality that the broader meanings of social impact are lost.

Also focused on fatalities, Slovic et al. widen Kenney’s evaluation of social impact by considering the pain, suffering, and economic hardship of victims and their family and friends, as well as the public distress and economic turmoil that can result from larger scale technological accidents [17]. Their study draws on various empirical studies involving drugs, transportation, weapons, and more to capture societal perceptions of hazard and risk.

In the early 2000s, as the millennium development goals gained popularity [19], social sustainability research began capturing the social impacts of business practice in a deeper way (with some extensions to engineering) [20–24]. In their seminal work, Labuschagne and Brent directly address the question “What social criteria must a social impact assessment method consider and measure?” While the question is never explicitly answered, owing to the unique nature of each enterprise, they review 31 frameworks and guidelines related to social impact assessment [20], ranging from the United Nations Commission on Sustainable Development, to sustainability metrics proposed by the Institution of Chemical Engineers, to the Dow Jones Sustainability World Indexes Guide. Their review maps frameworks and guidelines to 18 social criteria influenced by business structure and practice. These include the criteria of (i) economic welfare and employment, (ii) community involvement of company, and (iii) fair labor practices. Only one of their identified social criteria definitively relates to engineered products: product responsibility. Only 4 of the 31 frameworks and guidelines mapped to product responsibility.

With these studies in the early 2000s [20–24], an important pattern emerges: the identification of broad social impact categories (e.g., equity), more focused subcategories (e.g., gender equality), and measurable social indicators (e.g., the ratio of average female wage to male wage).

Following these developments to introduce social sustainability measures into business practice and structure, more specific social impact considerations become noticeable in the engineering literature. For example, Rojanomon et al., when choosing run-of-river hydro-power sites in Thailand, explicitly consider the hydro-power project’s impact on community member health, education, employment, quality of life, household changes, community changes, use of nearby forest, community perception/attitude toward project, and community support [25]. With this and other project studies [26], we see social impacts as applied to specific engineering projects, but we do not see generalized methods in the early 2000s to guide design teams in identifying social impacts pertinent to their specific project.

In recent years, however, more generalized engineering-centric methods for considering the social impact of engineered products appear in the literature [27], including methods such as design justice [28,29], with its emphasis on equity. Considering 374 engineering papers published between 2012 and 2022, Armstrong et al. [27] observed various trends related to Rainock et al.’s 11 social impact categories [1], including large variation in literature attention across the 11 social impact categories, as well as a noticeable mismatch between the impacts engineered products have and the attention given to those categories in the literature. These observations are illustrated in Fig. 2.

Fig. 2: Quantity of papers/products for each social impact category from Ref. [27]. Notice wide variation across impact categories, and large differences between product and literature classifications.

Of the 374 papers reviewed by Armstrong et al., 134 were coded as including heuristics and/or frameworks meant to guide engineering design teams. Notable among these is the work by Ottosson et al. [6], where design teams are guided to nonobvious impact categories through correlation tables derived from the joint and conditional probabilities of various impacts being co-present. While these tables can be used very quickly by any design team, the approach is limited to simply guiding teams to one or more of the 11 social impact categories and does not explicitly help teams identify specific social impacts within those categories.

Stevenson et al. [9] provide a more in-depth approach, which allows teams to identify pertinent and specific social impact indicators within the 11 social impact categories. A significant drawback to this approach, however, is that it requires dozens of hours of design work to explore social impact categories, converge upon pertinent ones, and extract the necessary data needed to carry out this approach.

In review, we see the early recognition (1940s) of the engineer’s role as technology designer, and of the influence of engineered products on society [2,14]. By the 1960s, we see a coalescence around impacts related to socioeconomics (health, education, and paid work), around which basic utility-risk frameworks emerge [15]. More narrowly focused mathematical models of social impact, consequently less comprehensive socially, gain traction in the literature in the 1980s [16,17], but not without deep criticism regarding their accuracy. A wider perspective on social sustainability appears in the literature in the early 2000s with the explicit articulation of broad social impact categories and specific social impact indicators popularized with the millennium development goals [20–24], although these were heavily centered on business practice, not engineering. From 2012 to 2022, we see the emergence of numerous engineering studies including social impacts, although we continue to see a noticeable imbalance in how broadly social impacts are considered in the engineering literature [27].

3 Development of the 55 Questions

Having conducted the survey presented in Sec. 2, we believe that there is an opportunity to help design teams identify a wider range of social impacts than is facilitated by the current literature. While there are meaningful models and methods for assessing [30] or predicting [13] the social impact of engineered products, these are often resource intensive to create. In addition, these models and methods require insights and outlooks not often associated with engineering, such as those related to a stakeholder’s cultural identity and heritage and how it might be enhanced or compromised by engineering decisions. These shortcomings often prevent engineers from considering social impact more fully in the engineering process.

With the goal of removing some of these shortcomings, we sought to develop a simple design activity that could be completed in 1 h with little training, yet provide meaningful insights through broad consideration of social impact. The activity format we designed is a series of thought questions about the potential social impacts of a product across the 11 social impact categories. Because there are 55 such questions, we simply refer to them as The 55 Questions or The 55 Prompt Questions.

The 55 questions are the result of a systematic multiphase process carried out by an interdisciplinary team consisting of three social scientists and five engineers, where all of the engineers had notable experience in development engineering. The goal of the process was to identify meaningful questions that would prompt consideration of social impact—not a complete and comprehensive set of questions to capture all social impacts. The process had four distinct phases.

  • Phase 1: Divergent social science exploration. In this phase, a social scientist identified many questions aimed at self-assessing the social impact of engineering work. The social scientist centered the question ideation on the 11 social impact categories defined by Rainock et al. [1]. For each category, the social scientist proposed social dimensions that could be represented in a question. For example, within the category of paid work, worker efficiency and new business creation were proposed as separate dimensions, around which candidate questions were formed. Feedback was provided weekly by the entire interdisciplinary team. After approximately 2 months, a large set of candidate questions existed.

  • Phase 2: Reframing phase 1 questions from an engineering perspective. An engineer then reworded each question to be product centered and accessible to engineers. The engineer also had the opportunity to propose new questions. As in phase 1, feedback on progress was provided weekly by the entire interdisciplinary team. This resulted in many more questions than could be asked and answered in an hour-long activity.

  • Phase 3: Interdisciplinary convergence. All candidate questions from phase 2 were then evaluated, similar questions were combined, and redundant questions were removed. The team repeated this step multiple times over the course of approximately 2 months to refine the questions, ultimately converging on five questions per category. Importantly, the first four questions in each category are detailed and thought provoking, while the fifth question in each category encourages thought about other potential impacts not identified by the first four. This fifth question is essential to illustrate to those using the questions that they do not encompass all possible social impacts.

  • Phase 4: Interdisciplinary acceptance of final 55 questions. As a final step in the development of the 55 questions, an interdisciplinary subteam refined the final wording and overall coherence of the set of questions. Once all subteam members were satisfied, the 55 questions were finalized and tested as described later in this article.

The 55 questions are designed for teams to spend approximately 1 min reading and internalizing each question, and identifying 1 to 3 potential impacts as prompted by each question. The 55 questions are listed below, by social impact category.

  • Impacts on health and safety

    1. In what ways could the product improve/change the health of users or aid users in healthy practices?

    2. In what ways could the product (unintentionally or not) harm users, or have long term or addictive effects?

    3. In what ways could the product protect/prevent users from harm or safety hazards?

    4. In what ways could the product affect mental and/or emotional health?

    5. In what other ways could the product impact health and safety (positive or negative)?

  • Impacts on education

    1. In what ways could the product provide formal or informal education or skill training to users?

    2. In what ways could the product require or provide specialized education/training to use?

    3. In what ways could the product be used in the creation, discovery, or sharing of new knowledge?

    4. In what ways could the product change access to education by gender, socioeconomic status, age, or race?

    5. In what other ways could the product impact education (positive or negative)?

  • Impacts on paid work

    1. In what ways could the product change the output, efficiency, or ability to produce a good or service?

    2. In what ways could the product create jobs or skilled labor? Could it replace or eliminate jobs?

    3. In what ways could the product affect the safety/well-being of employees or protect worker rights?

    4. In what ways could the product facilitate the creation, management, or growth of businesses?

    5. In what other ways could the product impact paid work (positive or negative)?

  • Impacts on conflict and crime

    1. In what ways could the product help detect/prevent/prosecute crime or help ensure fair legal process?

    2. In what ways could the product be used for crime such as violence, theft, sexual abuse, substance abuse, or fraud?

    3. In what ways could the product expose or protect personal information/privacy?

    4. In what ways could the product increase interpersonal conflict/contention (road rage, arguments, litigation)?

    5. In what other ways could the product impact conflict and crime (positive or negative)?

  • Impacts on family

    1. In what ways could the product alter the way family members interact with each other?

    2. In what ways could the product strengthen or weaken family ties, including spending time together?

    3. In what ways could the product be used simultaneously by or shared between family members?

    4. In what ways could the product change family roles (household work, child-rearing, income earning, etc.)?

    5. In what other ways could the product impact family (positive or negative)?

  • Impacts on gender

    1. In what ways could the product amplify gender-specific issues (health, sanitation, gender norms, etc.)?

    2. In what ways is the product’s usability or ergonomics affected by the user’s gender?

    3. In what ways could the product be sold in gender-specific product lines, or marketed to specific genders?

    4. In what ways could the product maintain/uphold/challenge gender roles and norms (cultural expectations)?

    5. In what other ways could the product impact gender (positive or negative)?

  • Impacts on human rights

    1. In what ways could the product provide/extend the most basic human rights (water, energy, etc.) to users?

    2. In what ways could the product affect access to public services or democratic processes for all people?

    3. In what ways could the product affect personal freedoms (religion, assembly, speech)?

    4. In what ways could the product influence how human rights are protected/violations reported/prosecuted?

    5. In what other ways could the product impact human rights (positive or negative)?

  • Impacts on stratification

    1. In what ways could the product be used to distinguish between social or economic groups?

    2. In what ways could the product be accessible to all people, or could it decrease access/accessibility?

    3. In what ways could the product provide access to goods/services to those who were previously excluded?

    4. In what ways could the product be used to improve or degrade one’s socioeconomic status?

    5. In what other ways could the product impact stratification (positive or negative)?

  • Impacts on social networks and communication

    1. In what ways could the product improve or impair the ability of users to communicate?

    2. In what ways could the product change the way people communicate or the content of communication?

    3. In what ways could the product facilitate/sustain the creation of new relationships and communities?

    4. In what ways could the product provide equitable opportunities for communication and connection?

    5. In what other ways could the product impact social networks and communication (positive or negative)?

  • Impacts on population change

    1. In what ways could the product generate/produce population change (immigration, move-ins, travel, etc.)?

    2. In what ways could the product affect birth rate/death rate?

    3. In what ways could the product affect living conditions in an area that would encourage population change?

    4. In what ways could the product allow populations to move from place to place seasonally or otherwise?

    5. In what other ways could the product impact population change (positive or negative)?

  • Impacts on cultural identity and heritage

    1. In what ways could the product be used to express someone’s cultural values, norms, and beliefs?

    2. In what ways could the product be in conflict with any cultural norms or religious practices?

    3. In what ways could the product move behaviors away from traditional practices?

    4. In what ways could the product create/alter/protect culture?

    5. In what other ways could the product impact cultural identity and heritage (positive or negative)?

While the present article is limited in scope to the 11 social impact categories identified by Rainock et al. [1], it is possible that additional categories will be identified in the future. When new categories are identified, or when a design team chooses to develop their own set of questions more pertinent to their industry, we recommend following the four-phase approach described at the beginning of this section, as it will most likely result in new questions that can be seamlessly meshed with those derived by an interdisciplinary team as presented in this article.

4 Method Used to Test the 55 Questions

In this section, a team-based experiment designed to test the effectiveness of the 55 prompt questions is described, along with the data evaluation methods used to code and score the teams’ output in preparation for statistical significance testing.

Frey and Dym [31] provide a meaningful and convincing argument regarding the necessity of testing design methods to validate claims made by method developers. Consistent with their argument, we designed a controlled experiment to test the effectiveness of the 55 questions. This experiment produced data from which statistical analysis could reveal the influence of the treatment (use of 55 questions), if any.

4.1 Experiment Description.

To test the 55 prompt questions, an experiment was conducted that compared the output of 12 randomly formed design teams. Six teams were randomly assigned to the treatment group and six to the control group. During the experiment, teams did not know whether they were part of the treatment or the control. Treatment teams were instructed to complete a specific activity with the aid of the 55 questions presented in Sec. 3, while the control teams were asked to complete the same activity without the questions. All teams were given 55 min to identify as many specific potential social impacts as they could for a single product evaluated by all teams.

Immediately before the activity began, all teams were given a basic presentation introducing Rainock et al.’s 11 social impact categories [1], with examples. After the briefing, teams moved into individual team spaces, each consisting of a whiteboard, a sealed activity packet, and approximately 100 square feet of working space separated from other teams. The treatment and control groups were in the same large space but were visually separated to prevent ideas or expectations from being shared between the treatment and the control.

The sealed activity packet consisted of (i) a short written set of instructions specific to treatment and control groups, (ii) a brief description of the 11 social impact categories for reference during the activity, (iii) the written design brief, which described the product they were to evaluate, and (iv) for the treatment group only, the 55 prompt questions.

At the beginning of the activity, teams were asked to open the activity packet and follow the written instructions. The instructions asked the 6 control teams to identify as many specific social impacts of the Global Village Shelter (see Fig. 3) as they could, using Rainock et al.’s 11 social impact categories as a guide [1]. The 6 treatment teams were asked to do the same, but were given the 55 prompt questions to guide their ideation.

Fig. 3: Design brief given to all groups detailing the product for which social impacts were to be identified

All groups were given 55 min to ideate and record possible social impacts of the Global Village Shelter. The control teams were told to spend about 5 min on each of the 11 social impact categories, while the treatment teams were told to spend about 1 min per question (which is equivalent to 5 min per category). Both groups were encouraged to produce two to three ideas per minute. Ideas were recorded on sticky notes during the experiment and later recorded digitally in a spreadsheet. Importantly, teams acted independently after the instruction to open the activity packet was given. While the organizers were present to make observations, they did not interact with the teams and did not answer questions, to avoid giving any one team additional information or an unfair advantage.

4.2 Product Considered in Experiment.

The product selected for this experiment was a modular plastic housing unit called the Global Village Shelter, designed by Ferrara Design Inc. with Architecture for Humanity. This product is designed to provide temporary housing for displaced persons such as political refugees or victims of natural disasters.

To introduce participants to this product, a simple design brief was included in the sealed activity packet. The design brief is shown in Fig. 3. This product was chosen because it can be simply and quickly understood by the participants, who were unfamiliar with this product.

4.3 Participants.

The participants in this experiment were undergraduate students at Brigham Young University in Provo, UT, with demographics as shown in Table 1. As shown in Table 2, participants were also asked to comment on their previous experience with design and social impact. Brigham Young University’s Institutional Review Board approved the role of the participants in the experiment. Participants were recruited before the day of the event using printed fliers and classroom announcements in various engineering courses. No announcements were made by course instructors, nor were participants made to feel that grades in their courses would be influenced by their participation in the experiment. Participants were monetarily compensated for their time and provided a meal before the experiment.

Table 1

Participant demographics

                                     Treatment   Control
Field of study (major)
  Total participants                 20          18
  Mechanical engineering (ME)        16          14
  Applied math (ME focus)            0           1
  Finance                            0           1
  Chemical engineering               0           1
  Experience design                  0           1
  Pre-Mechanical engineering         1           0
  Open                               1           0
  Manufacturing                      1           0
  Biology                            1           0
Year of university study
  Year 1 (freshman)                  4           3
  Year 2 (sophomore)                 7           6
  Year 3 (junior)                    4           4
  Years 4 or 5 (senior)              5           5
Gender
  Female                             5           3
  Male                               15          15
Table 2

Experience with social impact and design of study participants

Survey question administered pre-ideation                          T    C
Design is something I do as a hobby                                6    4
I have taken design courses                                        9    9
My major is design centered                                        8    9
I have done design at an internship or job                         1    3
I do design research                                               1    0
I know what design is, but have no experience                      5    5
I have no knowledge about design                                   2    0
Social impacts are a personal interest or hobby                    1    3
I have taken classes on social impact                              2    0
My major is social impact centered                                 1    1
I have done social impact work at an internship, job, or
volunteer position (outside of church service)                     2    2
I know what social impacts are but have no personal experience     9    7
I have no knowledge about social impact                            8    7

Note: T represents treatment; C represents control.

4.4 Method for Evaluating Team Output.

We used content analysis [32] initially to code the social impacts identified by the teams participating in the study. Then we rated the quality and novelty of each impact. This was a multipart process involving:

  • Part 1: Sorting impacts by social impact category

  • Part 2: Clustering similar impacts together within each team’s set of responses

  • Part 3: Rating the quality of each identified impact

  • Part 4: Rating the novelty of each identified impact

4.4.1 Sorting Impacts by Social Impact Category (Part 1).

During the test, 942 sticky notes were produced by the teams. These impacts were digitally listed in a spreadsheet. To minimize potential sources of bias, this list was disconnected from any identifying information (team number, treatment/control, etc.) and randomized before being given to the sorting team. Following conventional procedures for coding qualitative data, two research assistants working individually first sorted each impact into one of the 11 social impact categories. These groupings were then compared, and conflicts were discussed by a group consisting of two highly experienced researchers in the social impact space and at least one of the research assistants who did the initial sorting. Some identified social impacts spanned multiple social impact categories and were assigned to all applicable categories as separate impacts, resulting in a total of 1079 identified impacts. Forty of these were removed from consideration because they could not be considered social impacts; they were solely environmental impacts, enterprise-level economic impacts, or otherwise unintelligible statements, leaving 1039 intelligible social impacts.

4.4.2 Clustering Similar Impacts Together Within Each Team’s Set of Responses (Part 2).

Part 1 of the sorting and evaluation process was completed before parts 2–4. To remove bias, part 2 was done independently of parts 3 and 4. This was done because part 2 exposes which teams identified which impacts, though it was not disclosed which teams were part of treatment or control groups. Raters who participated in part 2 did not participate in parts 3 or 4.

After the results of part 1 were sorted, two research assistants independently clustered team-identified impacts into supersets that captured the same basic impact, while being blind to which teams were treatment and which were control. For example, the impacts “protection from UV” and “protection from wind” were grouped together into a superset related to “protection from exposure to nature.” These were clustered together because the research assistants deemed these impacts to be insufficiently different. While a team identifying 30 potential impacts in a single category may appear impressive, it is ultimately less useful if those impacts are the same or very similar.

4.4.3 Rating the Quality of Each Identified Impact (Part 3).

The 1079 impacts identified in the experiment were randomized, and then each was rated for quality in two dimensions: the quality of the impact’s articulation, and the potential of the identified impact to influence design decision making. All impact quality ratings were made by a single expert reviewer with significant experience in social impact modeling. A single reviewer was used to better ensure that any observed differences between the control and treatment groups were not artificially induced by differences in reviewer perspectives. Both quality ratings were given on a scale from 1 to 5, with 1 being a low-quality score and 5 being a high-quality score. Table 3 shows the full rubric used for assigning articulation quality scores, while Table 4 shows the rubric for scoring the quality of potential influence on product decisions.

Table 3

Rubric used to rate quality of articulation of identified social impacts

  • Score 5: Stated as a viable social impact, or states a viable impact with an identified social impact category (not necessarily verbatim). Example: “Degradation in health due to lack of sanitation facilities”
  • Score 4: Stated as a product concept/feature with an obvious and viable social impact, or obviously related to a social impact but not stated as a product concept/feature. Example: “Could create sense of worthlessness as shelters fall apart”
  • Score 3: Stated as a product concept/feature without an obvious impact, or a possible inferred secondary impact. Example: “More parties with more people living close together”
  • Score 2: Poor result in the ideation activity (gaming the ideation process), or not obviously connected to the product. Example: “Family”
  • Score 1: Stated without enough information to understand, or not at all related to social impact. Examples: “72 hour kit with shelter” or “Find lighter material”
Table 4

Rubric used to rate quality of expected influence of identified social impacts

  • Score 5: Identified impact will definitely influence product or system decision making. Example: “Degradation in health due to lack of sanitation facilities”
  • Score 4: Identified impact will probably influence product or system decision making. Example: “Different shelter sizes/combine them for bigger families”
  • Score 3: Identified impact will possibly influence product or system decision making. Example: “Include volunteer groups to help connect/find people with dispatch teams”
  • Score 2: Identified impact will probably not influence product or system decision making. Example: “Could detract from rebuilding efforts”
  • Score 1: Identified impact will definitely not influence product or system decision making. Example: “Boredom”

4.4.4 Rating the Novelty of Each Identified Impact (Part 4).

As in part 3, the novelty of each of the 1079 impacts was rated in two dimensions: in-domain novelty and out-of-domain novelty. In-domain novelty is a measure of how often the rater believes the social impact would be identified within the specified problem domain (in this case, temporary housing). Out-of-domain novelty is a measure of how often the rater believes the impact would be identified outside of the specified problem domain. For each novelty evaluation, a rating of 1 to 3 was given, with 1 being common and 3 being novel. If an identified social impact was rated as common within the specified domain, it was automatically rated as common outside of the specified domain. Table 5 shows the rubric used by the rater during the evaluation. Novelty was rated by the same expert from part 3, but not at the same time part 3 was being completed.

Table 5

Rubric used for assigning novelty scores (frequency with which the impact is referenced inside or outside the domain)

  In domain:
  • Score 3: Never seen (less than 1% of the time)
  • Score 2: Rarely seen (less than 5% of the time)
  • Score 1: Common (more than 5% of the time)

  Out of domain:
  • Score 3: Never seen (less than 1% of the time)
  • Score 2: Rarely seen (less than 5% of the time)
  • Score 1: Common (more than 5% of the time)

5 Test Results

The data resulting from the tests described in Sec. 4 were analyzed using common statistical methods. Two general tests were conducted: a test evaluating differences in means, and another evaluating differences in variation across control and treatment groups. Because it is possible that the 55 questions cause teams to more evenly identify impacts across all social impact categories (as opposed to focusing on only a few), differences in variance may be present. Therefore, we applied the two-sample Welch’s T-test, which allows for unequal variances between control and treatment groups.

The null hypothesis for all T-tests was that the mean values from the control group (C) and treatment group (T) are indistinguishable. If the null hypothesis cannot be rejected for an individual T-test, we conclude that the treatment had no effect on team performance for the measure evaluated by that test. The null hypothesis for the one-tailed F-tests carried out in this study was that the variance of the treatment group (T) is not lower than the variance of the control group (C). Likewise, if the null hypothesis cannot be rejected for an individual F-test, we conclude that the treatment had no effect on team performance for that measure. We performed each test at a 90% confidence interval (CI); if the p value for a test is greater than 0.10, the null hypothesis cannot be rejected. To understand the confidence with which conclusions could be drawn, we also ran the same tests at higher confidence intervals (95% and 99%) and lower (85%).
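For concreteness, below is a minimal sketch of these two tests in Python with NumPy and SciPy (the article does not specify the analysis software used, so this toolchain is an assumption). It takes as input the per-category totals from the first data column of Table 6, presented later in this section, and reproduces the reported t value of 0.095 and F statistic of 2.611.

    import numpy as np
    from scipy import stats

    # Per-category impact totals from the first data column of Table 6
    # (Total quantity of impacts), one value per social impact category.
    C = np.array([108, 36, 40, 59, 40, 21, 23, 55, 60, 29, 44])  # control
    T = np.array([78, 38, 46, 47, 60, 31, 29, 46, 65, 49, 35])   # treatment

    # Two-sample Welch's T-test (unequal variances allowed).
    t_stat, p_t = stats.ttest_ind(T, C, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_t:.3f}")    # t = 0.095, p = 0.926

    # One-tailed F-test: is the treatment variance lower than the control's?
    F = C.var(ddof=1) / T.var(ddof=1)            # 590.16 / 226.05 = 2.611
    p_f = stats.f.sf(F, C.size - 1, T.size - 1)  # upper-tail p value
    print(f"F = {F:.3f}, p = {p_f:.3f}")         # F = 2.611, p = 0.073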

Summary statistics and results of the statistical tests are shown in Table 6, where the conclusion rows indicate which results are statistically significant and at what confidence level. As described more fully below, we conclude the following from the statistical testing:

  1. The 55 questions (treatment) cause teams to more evenly identify high-quality, high-variety, high-novelty impacts across all 11 social impact categories during an ideation session, as opposed to focusing too heavily on a subset of impact categories. See right-most column in Table 6.

  2. The 55 questions accomplish (1) without reducing the total quantity of impacts identified. In other words, using the 55 questions does not reduce the quantity of high-quality, high-novelty, high-variety impacts identified.

Table 6

Data from experiment with statistical test results

                                   Total quantity     Quantity of high-    Quantity of high-quality,
                                   of impacts         quality and high-    high-variety, and
                                                      variety impacts      high-novelty impacts
                                   C(a)     T(b)      C        T           C        T
No impact (removed(c))             19       21        0        0           0        0
Health and safety                  108      78        38       24          1        4
Education                          36       38        14       21          3        6
Paid work                          40       46        15       15          2        1
Conflict and crime                 59       47        32       28          5        5
Family                             40       60        19       29          3        3
Gender                             21       31        15       15          3        4
Human rights                       23       29        9        9           1        1
Stratification                     55       46        26       15          7        3
Social networks and communication  60       65        16       17          9        3
Population change                  29       49        11       14          4        3
Cultural heritage and identity     44       35        22       15          16       6
Total                              534      545       217      202         54       39
Total ("no impact" removed)        515      524       217      202         54       39
Mean                               46.82    47.64     19.73    18.36       4.91     3.55
Variance                           590.16   226.05    81.22    39.85       19.49    2.87
T value                            0.095              0.411                0.956
T critical (85% CI)                1.071              1.069                1.083
T critical (90% CI)                1.337              1.333                1.356
T critical (95% CI)                1.746              1.740                1.782
T critical (99% CI)                2.583              2.567                2.681
p value (T-test)                   0.926              0.686                0.358
Conclusion (T-test)                Treatment has no negative influence on the mean (all three tests)
F statistic                        2.611              2.038                6.785
F critical (85% CI)                1.97               1.97                 1.97
F critical (90% CI)                2.32               2.32                 2.32
F critical (95% CI)                2.98               2.98                 2.98
F critical (99% CI)                4.85               4.85                 4.85
p value (F-test)                   0.073              0.139                0.003
Conclusion (F-test)                Treatment significantly reduces variation at the 90% CI, 85% CI, and 99% CI, respectively

Notes: (a) C represents control. (b) T represents treatment. (c) Deemed purely an environmental impact, or enterprise economic impact. The conclusion rows indicate which results are statistically significant and at what confidence level.

These conclusions about the 55 questions are meaningful since they indicate that the questions promote team consideration of social impacts more evenly than is (i) found in current industry practice [7], (ii) observed in commercial products [6], and (iii) treated in the sustainability literature [27]. Importantly, the 55 questions are shown to do this without compromising team productivity in terms of quantity, quality, variety, and novelty when ideating potential social impacts for a given product.

In the remainder of this section, each of the data sets shown in Table 6 is described in greater detail.

5.1 First Test: Total Quantity of Impacts.

The first column of data in Table 6 is labeled Total quantity of impacts. These data are the total quantity of impacts identified in each category, summed across the control teams (C) and summed across the treatment teams (T). A bar chart of these data is shown in Fig. 4. The summary statistics indicate that there is no statistically significant difference between the quantity of impacts identified by the control and treatment groups, as shown by the T-test. This is an important finding because it indicates that although the treatment group was required to read, internalize, and use as inspiration the 55 prompt questions, this did not observably reduce team output.

Fig. 4: Total quantity of impacts

Also, for Total quantity of impacts, the F-test indicates that the 55 questions do produce a more even consideration of social impacts across all social impact categories, and that this result is statistically significant at the 90% CI.

5.2 Second Test: Quantity of High-Quality, High-Variety Impacts.

The second column of data in Table 6 is labeled Quantity of high-quality, high-variety impacts. This column is a filtered subset of the data from the first column. It was filtered before the statistical tests were performed and is based on this rationale: for any ideation activity, it is common to expect a large number of low-quality ideas. It is also common to expect repeated ideas to appear. The filtering simply removed low-quality ideas and duplicate ideas. Therefore, the remaining set is considered to be high quality and high variety (but still includes nonnovel impacts, which are removed in the third test). A bar chart of this filtered set is shown in Fig. 5.

Fig. 5: Quantity of high-quality and high-variety impacts

An aggregate quality score was determined by taking the minimum of the two quality ratings made by the expert reviewer: quality of articulation and influence on decision making. This aggregation prevented well-articulated impacts that would have little influence on decision making from receiving a high quality rating. Low-quality ideas, namely any idea with an aggregate quality score below 3, were then removed.

Variety was determined using the method described in Sec. 4, by simply counting the number of unique impacts after duplicates were clustered together.
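As a minimal sketch of this filtering, assuming Python with pandas and hypothetical column names (the article does not prescribe an implementation): the aggregate quality score is the row-wise minimum of the two expert ratings, low-quality ideas are dropped, and duplicates are collapsed using the part-2 cluster assignments.

    import pandas as pd

    # One row per identified impact; columns and values are hypothetical.
    impacts = pd.DataFrame({
        "team":         [5, 5, 10],
        "cluster_id":   [1, 1, 2],   # superset assigned in part 2
        "articulation": [5, 4, 2],   # quality of articulation, 1-5
        "influence":    [4, 5, 2],   # influence on decisions, 1-5
    })

    # Aggregate quality = minimum of the two expert quality ratings.
    impacts["quality"] = impacts[["articulation", "influence"]].min(axis=1)

    # Remove low-quality ideas (aggregate score below 3) ...
    high_quality = impacts[impacts["quality"] >= 3]

    # ... and collapse duplicates: one impact per cluster within each team.
    high_variety = high_quality.drop_duplicates(subset=["team", "cluster_id"])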

The T-test indicates that there is no difference in the number of remaining impacts (after filtering) between the control and treatment groups. The F-test indicates that the 55 question treatment produced less variation in the number of high-quality, high-variety ideas across the 11 social impact categories, but this can only be said at an 85% confidence interval. Thus, with 85% confidence, we conclude from this test that the 55 question treatment causes teams to identify high-quality, high-variety impacts more evenly across the 11 social impact categories.

5.3 Third Test: Quantity of High-Quality, High-Variety, High-Novelty Impacts.

The third column of data in Table 6 is labeled Quantity of high-quality, high-variety, high-novelty impacts. This column is another layer of filtering compared to the second column. Here, low novelty impacts are filtered out. Thus, the remaining impacts are considered high quality, high variety, and high novelty. A bar chart of this full filtered set is shown in Fig. 6.

Fig. 6: Quantity of high-quality, high-variety, high-novelty impacts

Novelty was rated by an expert reviewer using a three-point scale as described in Sec. 4. All impacts receiving a rating of 2 or 3 in the out-of-domain criteria were retained. We chose this particular filtering because it kept only those impacts that are truly novel both in and out of domain. As with the second test, we see no difference in means, but we do see a difference in variation between control and treatment groups. Therefore, we conclude that the 55 question treatment causes teams to identify high-quality, high-variety, high-novelty impacts more evenly across the 11 social impact categories when compared to the control group. When tested at 95% CI and 99% CI, this finding holds true. This is a significant finding to make with 99% confidence because it shows that the 55 questions have the potential to remedy the trend observed multiple times in the literature: that engineers focus primarily on societal health and safety when considering the social impacts of their work.
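A minimal sketch of this novelty filter, again assuming pandas with hypothetical column names and illustrative ratings: because the rubric in Table 5 forces an impact that is common in domain to also be rated common out of domain, retaining out-of-domain scores of 2 or 3 keeps only impacts that are novel both in and out of domain.

    import pandas as pd

    # Hypothetical novelty ratings for three impacts (1 = common, 3 = novel).
    rated = pd.DataFrame({
        "impact":      ["protection from exposure", "boredom", "worthlessness"],
        "novelty_in":  [1, 2, 3],
        "novelty_out": [1, 1, 3],  # in-domain "common" forces out-of-domain "common"
    })

    # Retain impacts rated 2 or 3 for out-of-domain novelty.
    high_novelty = rated[rated["novelty_out"] >= 2]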

6 Discussion and Conclusion

In this article, we have introduced 55 prompt questions for design teams to use when trying to identify social impacts for a given product. We created these 55 prompt questions with the intent to help design teams spend 1 h identifying meaningful social impacts across a wide range of social impact categories. The summary statistics show that, compared to a set of control groups that did not have the 55 questions, the 55 questions are effective at helping teams consider social impacts more evenly across Rainock et al.’s [1] 11 social impact categories, as opposed to focusing on only a subset of impacts. Although the 55 questions require additional time to read, internalize, and use as prompts during ideation, this additional requirement does not change the quantity of team output in a statistically significant way. Ultimately, this means the 55 questions help the team more deeply consider the wide range of impacts from Rainock et al. [1] without negatively affecting team performance relative to the three tests shown in Table 6.

To put these results into visual perspective, and as an anecdote, consider the output of 3 of the 12 teams who participated in this article’s experiment. A single control team and a single treatment team are compared in each of Figs. 7 and 8. Figure 7 shows a more even consideration of impacts by the treatment team, compared to the control team, which focused more heavily on health and safety, conflict and crime, and stratification. To be fair, Fig. 8 represents a different potential result. Here, the control and treatment team outputs differ in neither their means nor their variances. While this is a possible outcome, it is important to observe that using the 55 prompt questions did not negatively affect team performance.

Fig. 7: Output of meaningful impacts from team 10 (control) compared to that of team 5 (treatment)

Fig. 8: Output of meaningful impacts from team 7 (control) compared to that of team 5 (treatment)

Statistically speaking, and when considering all of the control group data together and all of the treatment group data together, neither the control nor the treatment produced more concepts across categories (difference in means test), but the treatment group produced less variation in the quantity of impacts identified across the impact categories (difference in variance test).

Aside from the summary statistics, a participant survey issued directly after the experiment, and completed by each participant, asked “In the time allotted, what percentage of all potential Social Impacts of the Product do you think your team identified?” The mean response was 62% for control groups and 71% for treatment groups. While these differences may or may not be statistically significant, we include them here to illustrate that the treatment groups on average felt noticeably more confident in their ability to identify social impacts than the control groups.

The 55 questions and the tests used to validate their effectiveness are not without limitations. The questions themselves are limited to four specific questions per social impact category and one generic question per category. This small set of questions may unduly limit the social impacts considered to primarily those related to the questions themselves. In addition, questions were only developed and tested for Rainock et al.’s [1] 11 social impact categories. Other categories not part of those 11 may be important for a given project. In the test performed, the effectiveness of each question was not evaluated. Such testing could lead to question refinement that may have a measurable effect.

Regarding the testing performed, the results are potentially limited to those who match the demographic of the test subjects: undergraduate engineering students. A different result may be observed if tested with experienced professionals. The results are also limited to the scenario tested, in which subjects identified social impacts for a product they were not designing themselves; different observations may be made if the subjects were more intimately involved in the design of the product under consideration. Another limitation of the study was the small number of participants (38), all assessing a single product. Different results may be observed if the experiment is repeated with a larger group of participants or with other products. Other future work may include the development of adaptive prompt questions, or a more engaging approach for presenting the questions, such as with an accompanying image.

Based on the findings presented in this article, coupled with our experience in social impact modeling for engineering projects, we recommend teams spend 1 h using the 55 prompt questions to identify a set of potential social impacts for the product with which they are involved. We believe that using these questions will help teams avoid fixation on a subset of impacts and thus consider social impact more holistically.

Acknowledgment

The authors gratefully acknowledge Dr. Phillip Stevenson and engineer Andrew Armstrong for their early-stage contributions to the 55 prompt questions. Likewise, we acknowledge social scientists Johnny Cope and Rachel Samsion for their careful review of the 55 prompt questions before they were used in testing. We also acknowledge the generous funding of Crocker Ventures.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Rainock, M., Everett, D., Pack, A., Dahlin, E. C., and Mattson, C. A., 2018, “The Social Impacts of Products: A Review,” Impact Assess. Project Appraisal, 36(3), pp. 230–241.
2. Marcuse, H., 1941, “Some Social Implications of Modern Technology,” Zeitschrift für Sozialforschung, 9(3), pp. 414–439.
3. Mattson, C. A., Pack, A. T., Lofthouse, V., and Bhamra, T., 2019, “Using a Product’s Sustainability Space as a Design Exploration Tool,” Design Sci., 5, p. e1.
4. United Nations, “Transforming Our World: The 2030 Agenda for Sustainable Development,” United Nations, New York.
5. Burleson, G., Lajoie, J., Mabey, C., Sours, P., Ventrella, J., Peiffer, E., Stine, E., Stettler Kleine, M., MacDonald, L., Austin-Breneman, J., Javernick-Will, A., Winter, A., Lucena, J., Knight, D., Daniel, S., Thomas, E., Mattson, C., and Aranda, I., 2023, “Advancing Sustainable Development: Emerging Factors and Futures for the Engineering Field,” Sustainability, 15(20), p. 7869.
6. Ottosson, H. J., Mattson, C. A., and Dahlin, E. C., 2020, “Analysis of Perceived Social Impacts of Existing Products Designed for the Developing World, With Implications for New Product Development,” ASME J. Mech. Des., 142(5), p. 051101.
7. Pack, A. T., Rose Phipps, E., Mattson, C. A., and Dahlin, E. C., 2020, “Social Impact in Product Design, an Exploration of Current Industry Practices,” ASME J. Mech. Des., 142(7), p. 071702.
8. Armstrong, A. G., Mattson, C. A., Salmon, J. L., and Dahlin, E. C., 2021, “FMEA-Inspired Analysis for Social Impact of Engineered Products,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Virtual, Online, Aug. 17–19, American Society of Mechanical Engineers, p. V03BT03A017.
9. Stevenson, P. D., Mattson, C. A., and Dahlin, E. C., 2020, “A Method for Creating Product Social Impact Models of Engineered Products,” ASME J. Mech. Des., 142(4), p. 041101.
10. Thomas, E., Wilson, D., Kathuni, S., Libey, A., Chintalapati, P., and Coyle, J., 2021, “A Contribution to Drought Resilience in East Africa Through Groundwater Pump Monitoring Informed by In-Situ Instrumentation, Remote Sensing and Ensemble Machine Learning,” Sci. Total Environ., 780, p. 146486.
11. Stringham, B. J., and Mattson, C. A., 2021, “Design of Remote Data Collection Devices for Social Impact Indicators of Products in Developing Countries,” Develop. Eng., 6, p. 100062.
12. Kiesling, E., Günther, M., Stummer, C., and Wakolbinger, L. M., 2012, “Agent-Based Simulation of Innovation Diffusion: A Review,” Central Eur. J. Oper. Res., 20, pp. 183–230.
13. Mabey, C. S., Armstrong, A. G., Mattson, C. A., Salmon, J. L., Hatch, N. W., and Dahlin, E. C., 2021, “A Computational Simulation-Based Framework for Estimating Potential Product Impact During Product Design,” Design Sci., 7, p. e15.
14. Bartlett, K. G., 1947, “Social Impact of the Radio,” Ann. Amer. Acad. Political Soc. Sci., 250(1), pp. 89–97.
15. Starr, C., 1969, “Social Benefit Versus Technological Risk: What Is Our Society Willing to Pay for Safety?” Science, 165(3899), pp. 1232–1238.
16. Keeney, R. L., 1980, “Evaluating Alternatives Involving Potential Fatalities,” Oper. Res., 28(1), pp. 188–205.
17. Slovic, P., Lichtenstein, S., and Fischhoff, B., 1984, “Modeling the Societal Impact of Fatal Accidents,” Manag. Sci., 30(4), pp. 464–474.
18. von Neumann, J., and Morgenstern, O., 1944, Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ.
19. Sachs, J. D., 2012, “From Millennium Development Goals to Sustainable Development Goals,” Lancet, 379(9832), pp. 2206–2211.
20. Labuschagne, C., and Brent, A., 2006, “Social Indicators for Sustainable Project and Technology Life Cycle Management in the Process Industry (13 pp + 4),” Int. J. Life Cycle Assess., 11, pp. 3–15.
21. Labuschagne, C., Brent, A. C., and Van Erck, R. P. G., 2005, “Assessing the Sustainability Performances of Industries,” J. Cleaner Prod., 13(4), pp. 373–385.
22. Labuschagne, C., Brent, A. C., and Claasen, S. J., 2005, “Environmental and Social Impact Considerations for Sustainable Project Life Cycle Management in the Process Industry,” Corporate Soc. Responsibility Environmental Manag., 12(1), pp. 38–54.
23. Hutchins, M. J., and Sutherland, J. W., 2008, “An Exploration of Measures of Social Sustainability and Their Application to Supply Chain Decisions,” J. Cleaner Prod., 16(15), pp. 1688–1698.
24. Bai, C., and Sarkis, J., 2010, “Integrating Sustainability Into Supplier Selection With Grey System and Rough Set Methodologies,” Int. J. Production Econ., 124(1), pp. 252–264.
25. Rojanamon, P., Chaisomphob, T., and Bureekul, T., 2009, “Application of Geographical Information System to Site Selection of Small Run-of-River Hydropower Project by Considering Engineering/Economic/Environmental Criteria and Social Impact,” Renewable Sustainable Energy Rev., 13(9), pp. 2336–2348.
26. Sabini, L., Muzio, D., and Alderman, N., 2019, “25 Years of ‘Sustainable Projects’. What We Know and What the Literature Says,” Int. J. Project Manag., 37(6), pp. 820–838.
27. Armstrong, A. G., Suk, H., Mabey, C. S., Mattson, C. A., Hall, J., and Salmon, J. L., 2023, “Systematic Review and Classification of the Engineering for Global Development Literature Based on Design Tools and Methods for Social Impact Consideration,” ASME J. Mech. Des., 145(3), p. 030801.
28. Costanza-Chock, S., 2020, Design Justice: Community-Led Practices to Build the Worlds We Need, The MIT Press, Cambridge, MA.
29. Das, M., Roeder, G., Ostrowski, A. K., Yang, M. C., and Verma, A., 2022, “What Do We Mean When We Write About Ethics, Equity, and Justice in Engineering Design?” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, St. Louis, MO, Aug. 14–17, American Society of Mechanical Engineers, p. V006T06A036.
30. Petti, L., Serreli, M., and Di Cesare, S., 2018, “Systematic Literature Review in Social Life Cycle Assessment,” Int. J. Life Cycle Assess., 23, pp. 422–431.
31. Frey, D. D., and Dym, C. L., 2006, “Validation of Design Methods: Lessons From Medicine,” Res. Eng. Design, 17, pp. 45–57.
32. Krippendorff, K., 2018, Content Analysis: An Introduction to Its Methodology, SAGE Publications, Thousand Oaks, CA.