Conventional propulsion systems are typically represented as uninstalled systems, to suit the simple separation between airframe and engine in a podded configuration. However, boundary layer ingesting systems are inherently integrated and require a different perspective for performance analysis. Simulations of boundary layer ingesting propulsion systems must represent the change in inlet flow characteristics that results from different local flow conditions. In addition, a suitable accounting system is required to separate the airframe forces from the propulsion system forces. This research assesses the performance of a conceptual vehicle that applies a boundary layer ingesting propulsion system, NASA's N3-X blended wing body aircraft, as a case study. The performance of the aircraft's distributed propulsor array is assessed using a performance method that accounts for the installation terms resulting from the boundary layer ingesting nature of the system. A "thrust split" option is considered, in which the source of thrust is split between the aircraft's main turbojet engines and the distributed propulsor array. The optimum thrust split (TS) with respect to specific fuel consumption at the design point (DP) is found to occur at a TS value of 94.1%. In comparison, the optimum TS with respect to fuel consumption over the design 7500 nmi mission is found to be 93.6%, leading to a 1.5% fuel saving for the configuration considered.
In the commercial industry, a desire to reduce costs has historically driven the development of aircraft and engines that are significantly more efficient than their predecessors, despite superficial similarity. From an operator's perspective, fuel cost is one of the largest operating cost components and may contribute more than a quarter of the total operating cost. Therefore, reduction in fuel consumption is often a key goal. However, in a global industry that is increasingly aware of its impact on the environment, new technology development cannot be driven by cost reduction alone. As a consequence of this increase in environmental awareness, future aircraft are required to meet challenging targets for reductions in fuel burn and emissions in comparison to current aircraft. This includes reductions in carbon dioxide (CO2), a key contributor to climate change, and nitrogen oxides (NOx), a contributor to climate change that is also damaging to health, in addition to reductions in landing and take-off (LTO) noise. To a certain extent, fuel consumption and emissions can be reduced by changing modes of operation and by iterating or updating current designs. However, environmental targets necessitate a revolution in aircraft design with the introduction of novel technologies and fuel sources, in addition to improvements in aircraft operation [1,2].
Historically, the drive for increased efficiency led the development of aircraft engines from turbojets to modern turbofan engines with ever increasing bypass ratios. The maximum diameter of a fan is limited from a mechanical and ground clearance standpoint, while the minimum size of an engine core is limited from an efficiency standpoint. Therefore, the maximum bypass ratio of conventional turbofans is limited. Although turboprop engines are a more efficient alternative, they are typically limited to lower speeds than turbofan engines. A solution to this problem may be achieved through the application of a distributed propulsion system. Rather than providing the bypass flow through a single fan as with a conventional turbofan, the bypass flow may instead be spread over a number of fans. Previous concepts have included fans linked to a common core, or arrays of fans.
Distributed propulsion systems can be used to obtain aerodynamic and propulsive benefits, leading to improvements in fuel consumption and flight range, in addition to decreasing LTO distance and reducing noise. The configuration enables high bypass ratios and further benefits, such as through the use of wake filling. In addition, the installation location and size of a distributed propulsion system mean that it may be combined with boundary layer ingestion (BLI) to provide further efficiency increases. The boundary layer is generally considered to be detrimental to the performance of a conventional aircraft, as it contributes to drag and results in a momentum deficit, or wake. However, as the momentum deficit is one of the sources of aircraft drag, it follows that technologies to control or reduce the impact of the boundary layer are a potential route to an improvement in aircraft performance. The concept of boundary layer control is not a new one. Early research suggested that the application of constant suction on the aircraft surface could reduce the drag of the aircraft through removal of the boundary layer. BLI can re-energize the wake of the aircraft by accelerating the ingested boundary layer flow back up to the free stream velocity, thereby reducing the overall drag of the aircraft. By ingesting the boundary layer, less power is used by the propulsion system than by a propulsion system that produces the same thrust with free-stream air. However, ingestion of the boundary layer is not entirely beneficial to a propulsion system. The boundary layer is an inherently distorted flow, which will almost certainly have some impact on the performance of the system. Significant distortion can negate the power or fuel consumption benefits of a BLI system such that a free-stream system is the more efficient option.
Evaluation of the typical podded engines on a conventional transport aircraft is well established, with relatively simple thrust and drag accounting between the aircraft and its engines. In a BLI system, flow that enters the propulsion system has traveled over the aircraft surface, and the separation of the airframe and propulsion system is therefore more complicated. This integration also influences performance calculations, as local flow characteristics are linked with the propulsion system location and the aircraft configuration. A number of methods have been developed to simulate boundary layer ingesting propulsion systems, and they can be split into two categories: computational fluid dynamics (CFD) analyses of the problem as a whole, and lower order analytical methods. Control volume analyses typically deal with the propulsion system as an isolated or uninstalled system in a similar manner to conventional propulsion system analysis [7–9]. Alternatively, energy or exergy methods analyze the system as a whole [10,11]. Methods that rely heavily on detailed knowledge of the aircraft configuration, such as CFD analyses or energy/exergy analyses, are challenging to implement at the preliminary stage, where the aircraft design is subject to change. In addition, conventional point mass aircraft performance modeling methods rely on a split between the propulsion system and airframe forces. Therefore, of the available methods, conventional force control volume methods are more suitable for the purposes of both preliminary design and integration in aircraft performance analyses.
This research focuses on a case study of NASA's N3-X aircraft, a conceptual blended wing body aircraft with a distributed BLI propulsor array powered by a pair of turbogenerator/turbojet engines via a superconducting electrical system (Fig. 1). BLI propulsion system performance will be combined with the performance simulation of a conventional gas turbine to assess the propulsion system performance as a whole. As a key component of the analysis, the influence of spanwise variation in flow characteristics is included. Assessments of the BLI system at design and off-design are performed using the extension of a tool developed for the preliminary design of BLI propulsion systems [13,14]. A number of alternative configurations will also be assessed using a thrust split (TS) variable, where the aircraft propulsive force is split between the propulsor array and the main turbojet engines. The analysis aims to identify the potential benefits that may be gained through the use of thrust split, as preliminary analyses have shown that the alternative configuration may prove beneficial to performance. Finally, an estimate of the aircraft performance will be presented by combining the propulsion system model with an aircraft performance model.
Case Study Definition
This research focuses on the assessment of the propulsion system performance of the N3-X aircraft. The aircraft is one of a number of concepts that have been developed in NASA's Subsonic Fixed Wing project for the 2035+ developmental goals. The N3-X is therefore targeted to reduce fuel/energy consumption by at least 60% relative to a 2005 best-in-class aircraft, in addition to noise reductions and an 80% reduction in NOx during LTO and cruise. For the purposes of this research, the key element is the configuration of the propulsion system. The aircraft's propulsor array is powered by a pair of turbogenerator/turbojet engines via a superconducting electrical system. In the NASA design, the propulsion system is sized for an aerodynamic design point (DP) at Mach number 0.84 and an altitude of 30,000 ft. The propulsor array has an inlet set at 85% of the centerline chord and parallel to the trailing edge. A combination of the highly swept fuselage leading edge and the angled nature of the propulsor array results in a significant change in chord length and local Mach number between the centerline propulsor and the propulsors at the extreme edges of the array. Due to the nature of the aircraft shape and airflow, each propulsor inlet is subject to different boundary layer and air flow characteristics at the inlet.
The case study made use of publicly available data and research on the aircraft, including the aircraft and propulsor array configuration, and the boundary layer profiles at the centerline of the airframe. The aircraft configuration provided the reference lengths (x, distance from the aircraft leading edge) necessary for estimation of the boundary layer thickness. The velocity profile provided the local free-stream Mach number at the edge of the boundary layer from x/c0 = 0.6 to x/c0 = 1.0, where c0 is the centerline chord length. As the CFD data were available only for the centerline of the airframe at design point, the velocity profile was extended to encompass the entire length of the array. The research therefore assumed that the local free-stream velocity at the edge of the boundary layer would be constant for a constant axial distance x0 from the aircraft nose. Given the Mach number distribution that has been assumed for the airframe in previous research, this assumption is approximately applicable over the rear fuselage of the aircraft (Fig. 2). Boundary layer thickness was estimated using a turbulent flat plate correlation, scaled to the N3-X configuration. In order to support the off-design simulations, it was assumed that the scaled turbulent flat plate correlation may be used to estimate the boundary layer thickness at any of the simulated flight conditions. It is also assumed that the design point velocity profile for the airframe may be scaled to the Mach number at the off-design flight condition. A more detailed estimate of the airframe boundary layer and velocity distribution may be calculated using a more complete analysis of the airframe. However, the publicly available data were applied to demonstrate the analysis possible using the limited information that might be available at an early design stage. This is in keeping with the intended application of the method as a preliminary design tool.
These assumptions attempt to address some of the factors neglected in previous simulations of the N3-X propulsion system, which do not account for differences in flow characteristics along the length of the array.
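As a rough illustration, the flat-plate estimate described above can be sketched as follows. The correlation δ ≈ 0.37·x/Re_x^(1/5) is the standard turbulent flat-plate result; the viscosity value and the calibration factor are assumptions for illustration, not values from the study.

```python
def bl_thickness_flat_plate(x, u_e, nu=1.46e-5, scale=1.0):
    """Turbulent flat-plate boundary layer thickness estimate,
    delta ~ 0.37 * x / Re_x**(1/5).

    x     : distance from the leading edge [m]
    u_e   : local edge velocity [m/s]
    nu    : kinematic viscosity [m^2/s] (sea-level standard air assumed)
    scale : hypothetical calibration factor to match a given airframe
    """
    re_x = u_e * x / nu                  # local Reynolds number
    return scale * 0.37 * x / re_x ** 0.2

# e.g. a 30 m run at 250 m/s gives a layer roughly 0.2 m thick
delta = bl_thickness_flat_plate(30.0, 250.0)
```

In a preliminary design setting, `scale` would be tuned once against whatever CFD or experimental boundary layer data are available, then reused across flight conditions, as assumed in the text.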
Due to the novel nature of the aircraft configuration, a full assessment of the propulsion system performance requires the combination of a number of separate components. Sections 3.1–3.3 will briefly describe the models used to simulate the propulsion system. This includes the BLI propulsion system, the conventional gas turbines, and the linkage between the two at design and off-design conditions.
BLI Propulsion System Performance
Thrust and Drag Accounting for BLI.
The forces produced by an aircraft in flight can be split to belong to either a propulsion system or an airframe force accounting system, in support of conventional descriptions of aircraft performance. In steady level flight, the net thrust produced by a propulsion system should be adequate to counter the drag of the combined airframe and installed engine. For aircraft with more integrated architectures, it becomes more difficult to differentiate between the airframe and the propulsion system forces. Nonetheless, it is useful to have a way of differentiating the two for ease of design and simulation. Typically, thrust is defined as a "standard net thrust" term (FN), the difference between the gross thrust at the nozzle exit (FG9) and the gross thrust far upstream (FG0). However, a boundary layer ingesting propulsion system is an inherently integrated system that is reliant on the airframe for performance. In an integrated or installed system, additional terms may also be assigned to the propulsion system accounting of thrust and drag. Performance may instead be represented by a net propulsive force (NPF), which includes the force terms associated with the engine cowl and afterbody, spillage drag, and interference drag.
NPF = FG9 - FGi - τw Swet - Dnacelle + ΔD (1)

where FG9 is the gross thrust, FGi is the momentum drag, τw Swet is the skin friction of the surface from the interface point to the inlet, Dnacelle is the nacelle drag, ΔD is the drag of the airframe surface covered by the propulsion system control volume, and Daircraft,clean is the drag of the airframe without the propulsion system. The difference between FG9 and FGi is analogous to the conventional net thrust term used in propulsion system performance reporting. The skin friction and nacelle drag terms are used to account for installed system components. In the formulation shown in Eq. (1), an installed perspective is taken and the recovered drag is accounted for in the propulsion system's net propulsive force. The ΔD term accounts for the fact that a portion of the airframe's wetted surface area is now covered by the propulsion system control volume. Therefore, for an aircraft of a fixed size (i.e., fixed Daircraft,clean), the net propulsive force requirement is the same regardless of propulsion system configuration. This enables analysis of the installed BLI system performance independently from the aircraft performance, as the only airframe inputs to the propulsion system performance are the size of the boundary layer and the local speed of the flow. This formulation also supports the use of conventional aircraft performance methods by maintaining a separation between the airframe and propulsion system for the purposes of simulation.
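In code form, this accounting is a simple signed sum; the sketch below assumes the sign convention described above, with the recovered drag ΔD credited to the propulsion system.

```python
def net_propulsive_force(fg9, fgi, tau_w_swet, d_nacelle, delta_d):
    """Installed net propulsive force: nozzle gross thrust minus momentum
    drag, charged with the pre-inlet skin friction and nacelle drag, and
    credited with the recovered airframe drag (sign convention assumed
    from the surrounding text)."""
    return fg9 - fgi - tau_w_swet - d_nacelle + delta_d

# in steady level flight, NPF must equal the clean-airframe drag
npf = net_propulsive_force(150e3, 120e3, 2e3, 3e3, 4e3)
```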
Boundary Layer Flow Characteristics.
The primary purpose of this step in the process is to output three relevant boundary layer characteristics: the mass flow, average velocity, and the average total pressure relative to the free-stream total pressure. Any of the numerous methods available for determining the boundary layer characteristics may be applied, provided that they produce the required boundary layer flow characteristics.
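The averaging step above can be sketched with a simple closed-form model. The 1/7th power-law velocity profile and the incompressible, constant-static-pressure treatment are illustrative assumptions, not necessarily the method used in the study; under them, all three characteristics have closed forms per unit width of the stream.

```python
def boundary_layer_averages(delta, u_e, rho=1.225, n=7):
    """Mass flow, mass-averaged velocity, and mass-averaged total-pressure
    ratio for a 1/n power-law profile, u/u_e = (y/delta)**(1/n).
    Incompressible, constant-static-pressure sketch, per unit width;
    an assumption for illustration, not the paper's exact formulation."""
    # closed-form integrals of u, u^2 and u^3 across the layer give:
    mdot = rho * u_e * delta * n / (n + 1.0)   # mass flow per unit width
    v_avg = u_e * (n + 1.0) / (n + 2.0)        # mass-averaged velocity
    # with constant static pressure, the total-pressure deficit is purely
    # dynamic, so the mass-averaged ratio to free stream is:
    p0_ratio = (n + 1.0) / (n + 3.0)
    return mdot, v_avg, p0_ratio
```

For n = 7 this gives a mass-averaged velocity of 8/9 of the edge velocity and a mass-averaged dynamic pressure of 80% of the free-stream value, which conveys the scale of the deficit the propulsor must cope with.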
Inlet Flow Characteristics for a BLI System.
Depending on the propulsion system's size, flow ingested by the propulsion system may be more or less than the entire boundary layer flow. If only the boundary layer is ingested, the streamtube height is automatically fixed to equal the boundary layer thickness and the resultant flow characteristics are the average for the boundary layer. However, ingesting more or less than the entire boundary layer will change the average flow characteristics for the streamtube. Hence, an additional step is required to establish the flow characteristics of the streamtube actually ingested by the propulsion system. Assuming a rectangular streamtube with height h and width w and constant boundary layer thickness δ, there are three possible options that can be considered:
Ingest only the boundary layer (h = δ).
Ingest a portion of the boundary layer (h < δ).
Ingest the entire boundary layer plus free-stream flow (h > δ).
In the first case, the average flow characteristics of the ingested flow can be derived using the equations summarized in Sec. 3.1.2. In the case of an inlet which ingests only a portion of the boundary layer, the upper limit of the integrals becomes h/δ, the ratio of streamtube height to boundary layer thickness, where h/δ < 1. In the final case, the ingested flow characteristics must take into account the combination of both free-stream and boundary layer flow.
Each of the equations represents the terms nondimensionally. The streamtube flow characteristic equations are functions of h/δ and the relevant average/total boundary layer flow characteristic. Boundary layer flow characteristics can therefore be calculated separately from the propulsion system sizing or performance process and then used to estimate the average streamtube flow characteristics. The three defined terms (the mass flow, mass-averaged velocity, and mass-averaged total pressure of the ingested stream) are necessary to determine the performance of a propulsion system following one-dimensional gas dynamics methods, and each is a key component of the net propulsive force calculation through the definition of FG (Eq. (2)). The representation of each of the terms as averaged and total values for the stream enables the integration of the boundary layer characteristics within conventional methods for propulsion system performance.
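Under the same illustrative assumptions as before (1/n power-law profile, incompressible flow, unit width), the three ingestion cases can be handled in a single routine; the function name and interface are hypothetical.

```python
def streamtube_averages(h, delta, u_e, rho=1.225, n=7):
    """Average flow characteristics of a rectangular streamtube of height
    h over a 1/n power-law boundary layer of thickness delta, covering
    all three ingestion cases (h < delta, h = delta, h > delta).
    Incompressible sketch, per unit width; illustrative only."""
    a = min(h, delta)
    f = a / delta
    # power-law integrals evaluated from y = 0 up to y = a
    mdot = rho * u_e * delta * (n / (n + 1.0)) * f ** ((n + 1.0) / n)
    mom = rho * u_e**2 * delta * (n / (n + 2.0)) * f ** ((n + 2.0) / n)
    ke = 0.5 * rho**2 * u_e**3 * delta * (n / (n + 3.0)) * f ** ((n + 3.0) / n)
    if h > delta:                        # add the uniform free-stream slab
        s = h - delta
        mdot += rho * u_e * s
        mom += rho * u_e**2 * s
        ke += 0.5 * rho**2 * u_e**3 * s
    v_avg = mom / mdot                              # mass-averaged velocity
    p0_ratio = (ke / mdot) / (0.5 * rho * u_e**2)   # mass-avg dyn. pressure
    return mdot, v_avg, p0_ratio
```

As h/δ grows, the averages tend toward free-stream values, which is the behavior exploited in the static and low-speed discussion later in the text.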
Design Point Sizing.
The BLI propulsion system must be sized to provide the NPF required by the aircraft at the propulsion system design point, with NPF defined as in Eq. (1) following the control volume of Fig. 3. The performance of each propulsor in the array is calculated following conventional one-dimensional gas dynamics methods for propulsion system performance. However, as the system ingests the airframe boundary layer, an additional stage is required to determine the inlet flow characteristics. Depending on the size of the propulsion system, the inlet flow may be either the entire boundary layer, a portion of the boundary layer, or both boundary layer and free stream flow.
The boundary layer theory procedure described in Secs. 3.1.2 and 3.1.3 was used to obtain the inlet flow characteristics required in the estimation of propulsion system performance. The velocity and total pressure are estimated as mass flow-averaged values, and include the free stream flow where relevant. As the method used does not directly represent the distortion-related losses associated with the boundary layer, they are instead introduced to the propulsion system as a combination of a loss in fan efficiency and an additional total pressure loss in the inlet. Fan efficiency is assumed from previous research on the N3-X propulsion system configuration. Previous research has identified that low pressure ratio configurations are sensitive to high pressure loss in the inlet, due to a reduction in the effective pressure ratio of the system. The system is less sensitive to changes in fan efficiency.
In the case of the N3-X, the system must ingest both the boundary layer and free-stream flow in order to produce the requisite thrust. The exact proportion of free-stream to boundary layer flow was obtained through a procedure to determine the propulsor size given a NPF requirement. During the sizing process, the propulsor inlet height h was iterated to achieve the required net propulsive force per fan.
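The sizing iteration can be sketched as a bisection on the inlet height. The inner force model here is deliberately simplified (a hypothetical uniform jet velocity `v_jet`, gross thrust modeled as mass flow times jet velocity, momentum drag as the ingested momentum flux, and no nacelle or friction terms), so it illustrates the iteration structure rather than the paper's full NPF calculation.

```python
def size_inlet_height(npf_target, v_jet, delta, u_e, rho=1.225, n=7,
                      h_lo=0.01, h_hi=5.0):
    """Design-point sizing sketch: bisect the inlet height h until a
    simplified momentum-based net force matches the target.
    Per unit width, incompressible, 1/n power-law boundary layer."""
    def npf(h):
        a = min(h, delta)
        f = a / delta
        # closed-form power-law integrals up to y = a
        mdot = rho * u_e * delta * (n / (n + 1.0)) * f ** ((n + 1.0) / n)
        mom = rho * u_e**2 * delta * (n / (n + 2.0)) * f ** ((n + 2.0) / n)
        if h > delta:                    # uniform free-stream slab on top
            mdot += rho * u_e * (h - delta)
            mom += rho * u_e**2 * (h - delta)
        return mdot * v_jet - mom        # gross thrust minus momentum drag
    for _ in range(100):                 # npf(h) is monotonic in h here
        h_mid = 0.5 * (h_lo + h_hi)
        if npf(h_mid) < npf_target:
            h_lo = h_mid
        else:
            h_hi = h_mid
    return 0.5 * (h_lo + h_hi)
```

Because the boundary layer flow carries less momentum deficit recovery per unit height once h exceeds δ, the solution naturally lands in the mixed boundary-layer-plus-free-stream regime when the thrust requirement is high, as described for the N3-X.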
The mass flow demanded by the propulsion system varies when operating off the design point. Therefore, a mass flow matching procedure is required to match the mass flow demanded by the fan with the mass flow in the streamtube entering the propulsion system. However, unlike a conventional free-stream propulsion system, the flow characteristics of the streamtube vary depending on the streamtube size, as the flow characteristics are a function of the ratio h/δ.
Therefore, for a fixed nondimensional mass flow, the mass flow rate demanded by the propulsor must be found given that the pressure at the inlet is a function of the ratio h/δ.
A procedure was applied to solve for the inlet height that matches the mass flow demand of the propulsion system at off-design. A number of general assumptions were applied: the flow at the interface point is independent of the propulsion system, the ratio h/δ remains constant from the interface point onward, flow characteristics are averaged from the interface point onward, and the streamtube has a rectangular cross section. For the specific case of the N3-X propulsion system, a number of further assumptions are used: a variable area floating nozzle, constant nondimensional mass flow at a fixed rotational speed, and a running line following peak efficiency using a generic fan map scaled to the operating point of the propulsors. The mass flow matching procedure combines with the design point method used to obtain the flow characteristics to create the procedure for the off-design simulation of the propulsion system (Fig. 4).
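The matching loop can be sketched as a fixed-point iteration, with a hypothetical `demand` callable standing in for the fan map: the streamtube supply and its average total pressure depend on h, and the fan's demanded mass flow depends on that total pressure. The power-law streamtube model is the same illustrative assumption used earlier.

```python
def match_inlet_height(demand, delta, u_e, rho=1.225, n=7,
                       h0=0.5, iters=200, relax=0.5):
    """Off-design mass flow matching sketch: iterate the streamtube
    height h until the mass flow it supplies equals the fan demand.
    `demand(p0_ratio)` is a hypothetical stand-in for the fan map,
    returning the demanded mass flow at a given inlet total-pressure
    ratio. Per unit width, incompressible, 1/n power-law layer."""
    h = h0
    mdot = p0_ratio = 0.0
    for _ in range(iters):
        a = min(h, delta)
        f = a / delta
        # closed-form power-law integrals up to y = a
        mdot = rho * u_e * delta * (n / (n + 1.0)) * f ** ((n + 1.0) / n)
        ke = 0.5 * rho**2 * u_e**3 * delta * (n / (n + 3.0)) * f ** ((n + 3.0) / n)
        if h > delta:                    # uniform free-stream slab on top
            mdot += rho * u_e * (h - delta)
            ke += 0.5 * rho**2 * u_e**3 * (h - delta)
        p0_ratio = (ke / mdot) / (0.5 * rho * u_e**2)
        err = demand(p0_ratio) - mdot
        if abs(err) < 1e-8 * max(1.0, mdot):
            break
        h += relax * err / (rho * u_e)   # under-relaxed height update
    return h, p0_ratio
```

The under-relaxation keeps the update stable even though the supply curve steepens as the streamtube grows out of the boundary layer into the free stream.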
At static or very low velocity, the influence of the boundary layer on the propulsion system performance is negligible due to a combination of factors. As the boundary layer thickness is a function of the air velocity in addition to the surface length before the inlet, no boundary layer builds up over the aircraft when it is at rest. In addition, during high mass flow ratio operation, such as at sea level static, the overwhelming majority of ingested air is free stream. Therefore, as the aircraft velocity decreases, the ratio h/δ tends to infinity, as h increases while δ tends to zero.
Gas Turbine Performance.
Gas turbine performance is estimated using in-house performance software. Using the software, a model of the engine can be created from a selection of modules in order to simulate the thermodynamic performance and predict the gas properties of the individual gas turbine components. This in turn allows for a detailed simulation of the overall engine performance. The engine follows the design defined by Felder et al. for the N3-X propulsion system. The engine is a two spool design with a free power turbine, which is required to provide power for the propulsor array. The pressure ratio of each compressor was selected to ensure an equal enthalpy split between the compressors. The NASA design states an aerodynamic design point of Mach 0.84 at 30,000 ft. A number of further design parameters are displayed in Table 1. For the purposes of this research, the engine is assumed to be fueled purely by kerosene.
Combined Propulsion System.
The main engine design parameters follow those defined in Table 1. The array is assumed to consist of 15 fans, each with a fan pressure ratio of 1.3. The propulsors were sized to produce the same net propulsive force. Each propulsor is subject to different local flow characteristics, and therefore has a different ratio of free stream to boundary layer air and a different local Mach number. Hence, the size and performance of each propulsor vary.
For simulation purposes, two modes of operation were assumed: a "thrust matching" and a "power matching" mode. In the former mode, a thrust requirement is established (such as thrust equal to drag at cruise). The thrust requirement must be matched by the combined propulsive force of the main engines and propulsor array, while the power requirement of the array must be met by the main engines. The array rotational speed is iterated until the two conditions are met. In the second, power matching mode, a power (or burner exit/turbine entry temperature (TET)) setting is defined for the main engines, leading to an output thrust and power. The array rotational speed is iterated until the power demanded by the array matches the power produced by the main engines.
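The power matching mode reduces to a one-dimensional root find on the array speed. In the sketch below, the cubic power-speed characteristic is a toy stand-in for the scaled fan map, and all numbers are invented for illustration.

```python
def power_match(p_available, p_array, n_lo=0.2, n_hi=1.0):
    """Power-matching mode sketch: bisect the array speed fraction N
    until the array's power demand equals the power produced by the
    main engines. `p_array(N)` is a hypothetical, monotonically
    increasing power-vs-speed characteristic."""
    for _ in range(100):
        n = 0.5 * (n_lo + n_hi)
        if p_array(n) < p_available:
            n_lo = n
        else:
            n_hi = n
    return 0.5 * (n_lo + n_hi)

# toy cubic characteristic: the array absorbs 30 MW at full speed,
# but the engines can only supply 24 MW at this flight condition
p_arr = lambda n: 30e6 * n ** 3
n_frac = power_match(24e6, p_arr)
```

The thrust matching mode would wrap a similar loop in an outer iteration on the engine TET until the combined propulsive force meets the requirement.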
Propulsion System Weight.
The weight of the propulsion system as a whole is an important aspect of the propulsion system analysis. The propulsion system may be split into three separate subsystems: the main engines, the propulsor array, and the superconducting system. Turbomachinery weight for the propulsors and main engines was estimated using an in-house weight model. The model estimates the weight of a propulsion system on a component basis, based on the performance and size of the system.
The weight of the superconducting electrical system is further split into motors, generators, and transmission lines. Liquid hydrogen cooling was assumed, and therefore there is no additional weight due to cryocoolers. In the absence of a preliminary weight modeling tool for superconducting systems, the weight of the motors and generators was estimated based on a trend relating weight to shaft horsepower, assuming that generators may be treated similarly to motors for weight purposes. While the weight estimated may be considered optimistic, it provides an initial estimate of potential technology in the 2035+ entry into service period. A flat weight of 453 kg (1000 lb) was added to account for the transmission lines, based on previous weight estimations performed for the N3-X. The electrical system was sized for the high power conditions experienced at rolling take-off (RTO) (0 ft, Mach 0.25).
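A minimal sketch of the electrical-system weight build-up is shown below. The 25 kW/kg specific power is a placeholder assumption, not the trend used in the study; only the flat 453 kg transmission allowance and the generators-treated-as-motors simplification come from the text.

```python
def electrical_system_weight(p_motor_kw, n_motors,
                             specific_power_kw_per_kg=25.0,
                             transmission_kg=453.0):
    """Superconducting system weight sketch (all machines assumed to
    share one hypothetical specific power). Generators are treated like
    motors for weight purposes, so their total weight scales with the
    total transmitted power regardless of generator count."""
    motor_weight = n_motors * p_motor_kw / specific_power_kw_per_kg
    total_power_kw = n_motors * p_motor_kw   # generators carry all of it
    gen_weight = total_power_kw / specific_power_kw_per_kg
    return motor_weight + gen_weight + transmission_kg

# e.g. 15 propulsor motors at 2 MW each
w = electrical_system_weight(2000.0, 15)
```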
Validation and Comparison
In order to validate the propulsion system performance model, the results obtained from the model were compared against those presented in previous research. In the first validation step, the inlet flow characteristics were compared against those produced from a CFD analysis of the airframe as referenced by Felder et al.  (Fig. 5). Flow characteristics resulting from the model align closely with the results used in the reference source, with a slight divergence in total pressure as h/δ decreases. However, there is a good correspondence in results for h/δ > 1, where this study is focused.
A second validation step was performed to compare the design point sizing process with the propulsion system size that was obtained in previous research. The required propulsor size for a fixed thrust requirement was determined using the previously described design procedure, neglecting the installation terms, as the referenced research sized the systems for a net thrust rather than a net propulsive force (Fig. 6). The calculation was performed for a single propulsor at the airframe centerline. A number of other variables were also set equal to the reference study, including fan efficiency, inlet aspect ratio, and thrust per propulsor. As may be expected from the similarity in inlet flow characteristics, the resulting propulsor size closely matches the propulsion system obtained in previous research (average 3.8% difference in resulting propulsor height). The comparison of results to previous research suggests that the developed models are suitable for use in the design point sizing tool.
Baseline Propulsion System Performance
The preliminary analysis focused on the performance assessment of the baseline configuration of the N3-X propulsion system. Therefore, the system is designed for an operating point at Mach 0.84 and 30,000 ft. The thrust requirement defined in the reference source sets a 119 kN net propulsive force target. For subsequent analyses, a comparison of the design point system performance to that obtained in referenced research becomes more difficult, as the present research accounts for the change in spanwise flow characteristics over the length of the array. Nonetheless, a general comparison may be made to establish whether the propulsion system meets performance requirements.
Three key operating points are presented in Table 2: the specified aerodynamic design point, RTO at sea level and Mach 0.25, and sea level static (SLS). In addition, a cruise point at 40,000 ft and Mach 0.84 was simulated. The control parameter for each of these points is the engine TET. The net propulsive force produced at take-off corresponds with the 552.3 kN value quoted in the referenced research. There is a greater deviation in net propulsive force at RTO, which is 368.9 kN in comparison to the 301.8 kN value quoted in the referenced research. However, it should be noted that the difference in force accounting and in boundary layer and airframe flow characterization will account for the difference in results obtained for the propulsion system at off-design.
Simulations suggest that at low altitude and Mach number, the limiting factor on performance is the ability of the main engines to produce power. The engines are unable to produce sufficient power to run the propulsors in the array at their full rotational speed, and the propulsors must therefore run at a reduced rotational speed to match the available power, as shown by the RTO and SLS operating points in Table 2. It should be noted that as the flight speed decreases, the free stream to boundary layer flow ratio (h/δ) increases, so the system's performance tends toward that of a free-stream propulsion system. The power consumption of the system is therefore higher than that of a system with a lower h/δ ratio.
In contrast, the array is the limiting factor at altitude, due to the assumption that the electrical motor rotational speed, and hence the fan rpm, will not exceed 100%. At altitude, the main engines can produce more power than is required to run the propulsor array at full rotational speed, and therefore run at a lower power setting to match the reduced power requirement. A full map of the array propulsive force over a range of altitudes and Mach numbers is shown in Fig. 7. The kink in the net propulsive force trend indicates the point where the power demanded by the propulsors diverges from the power available from the main engines.
Thrust Split Propulsion System
The following analysis will maintain the engine design parameters specified in Table 1. However, the amount of thrust and power produced will vary according to the thrust split term, where power production and thrust are split equally between the two engines. Similarly, many design parameters are maintained as a constant for the propulsor array, including the number of fans in the array, the fan efficiency and pressure ratio, and the inlet aspect ratio. However, the size of each propulsor and the array as a whole varies as a function of the net propulsive force it is required to produce. As the aircraft design remains unchanged, the net propulsive force required from the propulsion system as a whole remains constant. Given these design requirements, the efficiency of the system (in terms of specific fuel consumption) for a variety of thrust split values may be determined.
Two operating points were considered for each thrust split design: the design point and rolling take-off, as defined in Table 2 (Fig. 8). For both the design point and cruise operating points, a similar minimum SFC point emerges at approximately 94.1% thrust split. The resulting optimum thrust split point is the result of a number of factors. A loss in power in the transmission system means that the power that must be produced by the main engine is higher. In addition, energy is wasted in the power turbine due to the efficiency of the power conversion. Therefore, producing a small measure of thrust from the main engines rather than from the array balances the power lost in the transmission system (0.2% of the array power demand in this case) and the energy wasted in the power turbine. A decrease in thrust split will increase the thrust required from the turbojet and decrease the effective bypass ratio of the system. Assuming a fixed length array, reducing the thrust required from the array will reduce its size, hence reducing the array height and increasing the ratio of boundary layer air to free-stream air. In addition, the array avoids stretching into the high speed, thin boundary layer flow at the outer extremities of the aircraft fuselage. A thrust split that provides a minimum specific fuel consumption is therefore obtained by balancing the costs and benefits of changes to the BLI propulsor array's size and configuration. However, a turbojet is not the most efficient source of thrust. Further decreases in thrust split beyond the optimum will therefore increase the specific fuel consumption at cruise and design point. As the proportion of thrust produced by the main engines increases, it will become increasingly beneficial to apply a turbofan rather than a turbojet.
A different conclusion is apparent for the rolling take-off condition. At rolling take-off, the boundary layer thickness is negligible and the benefit of a BLI propulsion system is therefore small. Any reduction in thrust split below 100% shrinks the array and therefore reduces the thrust it can produce, with little BLI benefit in return. A propulsion system designed for the optimum thrust split at cruise will therefore not be the most efficient option at take-off conditions.
Transmission efficiency is a key driver of the optimum thrust split. Because the transmission efficiency of the superconducting system is high, the minimum SFC favors a thrust split where thrust is predominantly produced by the propulsor array. As a point of comparison, a system with a transmission efficiency of 95% has a minimum-SFC thrust split of approximately 92.8% (Fig. 9).
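The sensitivity to transmission efficiency can be illustrated with a minimal toy model. The array is the more efficient thrust source, but its power passes through the electrical transmission, while the main (turbojet) engines produce thrust directly; the array's effective efficiency is assumed to fall as it grows, since a larger array ingests proportionally more free-stream air. All efficiency coefficients below are invented for illustration and do not reproduce the study's figures; only the qualitative trend (a lossier transmission shifts the optimum toward the main engines) is the point.

```python
def relative_fuel_power(ts, eta_trans, eta_tj=0.80):
    """Relative fuel power for thrust split ts (fraction of thrust from the array)."""
    # Assumed linear decay of array effective efficiency with array size.
    eta_array = 0.92 - 0.08 * ts
    return (1.0 - ts) / eta_tj + ts / (eta_array * eta_trans)

def optimum_ts(eta_trans, steps=1000):
    """Grid search for the thrust split minimizing relative fuel power."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda ts: relative_fuel_power(ts, eta_trans))

ts_high = optimum_ts(eta_trans=0.998)  # near-lossless superconducting link
ts_low = optimum_ts(eta_trans=0.95)    # lossier conventional transmission
# A lossier transmission shifts the optimum toward the main engines:
assert ts_low < ts_high < 1.0
```

With these invented coefficients the optima fall well below the paper's 94.1%/92.8% values; the model only demonstrates why an interior optimum exists and why it moves with transmission efficiency.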
In addition to the specific fuel consumption of the system, it is also important to consider the system weight. As the thrust split is decreased, the thrust required from the array decreases, reducing the weight of the propulsors and of the electrical system required by the array. Determining the relationship between weight and thrust split for the main engines is more challenging. For a fixed thrust requirement, decreasing the auxiliary power required from the main engine would reduce the required engine size. In the thrust split system, however, a reduction in auxiliary power is accompanied by an increase in the thrust requirement: one effect would reduce the engine size while the other would increase it.
In the optimum SFC region relevant to the current research, the overall weight of the propulsion system decreases as the thrust split decreases (Fig. 10). Each of the individual propulsion system components (main engines, propulsor array, and high temperature superconducting equipment) reduces in weight as the thrust split is decreased; hence, from a weight perspective, a lower thrust split is preferable. The array and superconducting electrical system weight linearly reduce as the thrust split is reduced. However, engine weight changes nonlinearly, resulting in a nonlinear weight trend for the total system weight.
Conventional aircraft mission simulation tools are designed to support standard aircraft configurations and propulsion systems. However, there are a limited number of tools available to enable the simulation and integration of the novel propulsion system architecture of the N3-X. Therefore, a custom aircraft performance model was created for this study, in order to combine conventional aircraft simulation methods with a module to simulate the BLI propulsion system. The mission performance model applied a point mass approximation of the aircraft. Block fuel burn was estimated by splitting the aircraft mission into taxi, take-off, climb, cruise, descent, and landing segments.
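The segment-based block fuel estimate described above can be sketched as a simple sum over mission segments. The segment names follow the text; the durations and fuel flows below are placeholders, not values from the N3-X mission model.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    duration_s: float      # segment duration
    fuel_flow_kg_s: float  # mean fuel flow over the segment (placeholder)

def block_fuel(segments):
    """Total block fuel as the sum of per-segment fuel burns (point-mass model)."""
    return sum(s.duration_s * s.fuel_flow_kg_s for s in segments)

mission = [
    Segment("taxi", 600, 0.05),
    Segment("take-off", 60, 1.2),
    Segment("climb", 1500, 0.9),
    Segment("cruise", 50000, 0.45),
    Segment("descent", 1200, 0.1),
    Segment("landing", 300, 0.08),
]
total = block_fuel(mission)
```

In the actual model, each segment's fuel flow would come from the BLI propulsion system module evaluated at that segment's flight condition, with the aircraft weight updated as fuel is burned.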
Weights and dimensions for the N3-X were obtained from referenced sources [12,15,21] in combination with a three-dimensional model of the aircraft available in the public domain. In the original NASA body of research on the N3-X, no assumptions or estimations were made for the maximum take-off weight of the aircraft; the only weight values provided are the aircraft's operating empty weight and its design payload. However, in order to predict the aircraft's maximum range, an estimate of the maximum take-off weight was required. This was obtained by assuming that the ratio of maximum take-off weight to operating empty weight was the same as for the baseline aircraft, an assumption based on historical data for commercial aircraft. No design changes were made to the N3-X configuration or airframe weight in comparison to the original NASA design. For lift and drag estimation purposes, the N3-X was treated as a flying wing. An equivalent planform was therefore used, with the effective wing area approximated as the entire planform area (assuming that the fuselage acts as a lifting body).
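The maximum take-off weight assumption described above amounts to a single ratio scaling. The weights below are placeholders for illustration only, not the values used in the study.

```python
def estimate_mtow(oew_n3x, mtow_baseline, oew_baseline):
    """Estimate MTOW assuming the same MTOW/OEW ratio as the baseline aircraft."""
    return oew_n3x * (mtow_baseline / oew_baseline)

# e.g. a baseline aircraft with an MTOW/OEW ratio of 2.0 (illustrative only):
mtow = estimate_mtow(oew_n3x=120e3, mtow_baseline=350e3, oew_baseline=175e3)
```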
In order to determine the actual benefit of the novel propulsion system design on the aircraft efficiency, a full aircraft performance analysis is required. A payload range chart provides an overview of the aircraft efficiency by defining the maximum range of the aircraft. A payload range chart consists of three points: the maximum payload range (aircraft range with maximum payload), the maximum fuel range (aircraft range with maximum fuel load), and the maximum ferry range (aircraft range with maximum fuel load and zero payload). The maximum take-off weight limit of an aircraft typically precludes flying with both a full fuel and full passenger load. The payload range chart therefore demonstrates the trade-off between the aircraft payload and range.
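The three corner points follow directly from the weight build-up. A sketch of the payload/fuel trade at each corner is shown below, with placeholder weights (the range at each corner would then come from the mission model).

```python
def payload_range_corners(mtow, oew, max_payload, max_fuel):
    """Return (payload, fuel) at the three corners of the payload-range chart."""
    # 1. Maximum payload: fuel load limited by MTOW.
    p1 = (max_payload, min(max_fuel, mtow - oew - max_payload))
    # 2. Maximum fuel: payload limited by MTOW.
    p2 = (min(max_payload, mtow - oew - max_fuel), max_fuel)
    # 3. Ferry: maximum fuel, zero payload (take-off weight below MTOW).
    p3 = (0.0, max_fuel)
    return p1, p2, p3

# Placeholder weights in kg, not N3-X values:
corners = payload_range_corners(mtow=240e3, oew=120e3, max_payload=55e3, max_fuel=90e3)
```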
Simulations of the baseline N3-X configuration (N3-X with a 100% thrust split) suggest that the aircraft could reach a maximum payload range of 12,650 nautical miles, given the assumptions made for maximum take-off weight (Fig. 11). The range predicted by the performance model far exceeds the range that would be required from an aircraft operating on typical commercial routes. However, it does suggest the potential for changes to the aircraft design or mode of operation to take advantage of the high aircraft efficiency, such as by increasing the maximum payload. The high efficiency results from the high efficiency propulsion system obtained through the use of distributed propulsion and boundary layer ingestion. In addition, the blended wing body airframe has a higher lift-to-drag ratio during cruise in comparison to conventional tube and wing aircraft, resulting in a more aerodynamically efficient airframe.
Section 7 demonstrated that a more efficient propulsion system is possible through the use of a thrust split between the array and the main engines. However, weight estimation of the systems suggests that the lowest SFC system does not correspond with the lowest weight system. The most efficient system is therefore the one that minimizes the aircraft fuel consumption during the course of a flight. Each thrust split configuration was simulated to obtain the fuel consumption for the design mission of 7500 nautical miles. Combining both the propulsion system performance and weight change results in an optimum thrust split of approximately 93.6% (Fig. 12), in comparison to an optimum SFC thrust split of 94.1%. At this point, the slight decrease in SFC from the optimum thrust split is compensated for by a decrease in the aircraft weight.
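The shift of the optimum from 94.1% (SFC) to 93.6% (mission fuel) can be illustrated with a toy model: a lower thrust split costs a little SFC but saves propulsion system weight, and the mission fuel burn depends on both. The SFC curve minimum is placed at 94.1% per the text; the weight slope and the Breguet-style fuel expression use invented coefficients and do not reproduce the study's 93.6% result, only the direction of the shift.

```python
import math

def sfc_rel(ts):
    """Relative SFC, quadratic with a minimum at ts = 0.941 (per the study)."""
    return 1.0 + 4.0 * (ts - 0.941) ** 2

def weight_rel(ts):
    """Relative propulsion system weight, assumed to grow with thrust split."""
    return 1.0 + 0.05 * ts

def mission_fuel(ts):
    """Breguet-style fuel burn: grows with SFC, scaled by aircraft weight."""
    k = 0.5  # lumped range / speed / (L/D) constant (illustrative)
    return weight_rel(ts) * (math.exp(k * sfc_rel(ts)) - 1.0)

grid = [i / 1000 for i in range(1001)]
ts_sfc = min(grid, key=sfc_rel)        # SFC optimum
ts_fuel = min(grid, key=mission_fuel)  # mission fuel optimum
assert ts_fuel < ts_sfc  # weight pulls the mission optimum to a lower split
```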
This research has demonstrated the application of a performance simulation method for a boundary layer ingesting propulsion system installed on the airframe of the N3-X aircraft. It highlights the usefulness of a rapid analysis tool for BLI propulsion systems, enabling the comparison of multiple configurations and easy design space exploration for a novel propulsion system concept at any operating point.
The results may be used to draw a number of conclusions with regard to the design of a turbo-electric BLI system. The main conclusion is the value of the additional degree of freedom introduced by a thrust split parameter. Even with very high component efficiencies, such as may be found in a superconducting system, energy will be lost in the production and transmission of power. Splitting thrust between different sources can also support a more favorable configuration for a BLI system. Lower thrust requirements from a BLI propulsion system reduce its size, which reduces the free-stream flow ingested and hence improves efficiency. However, large distributed propulsion systems provide a high effective bypass ratio for the system as a whole. Reducing the size of the distributed propulsion system requires a greater proportion of thrust to be produced by the gas turbine; this is better provided by a turbofan than by a turbojet, as a turbofan produces thrust more efficiently. Weight is another key design parameter for the system, as the optimum in terms of specific fuel consumption will often not be the optimum in terms of mission fuel consumption.
A number of conclusions may be drawn with regard to the performance of a propulsion system such as that of the N3-X. The first relates to the limiting factors on propulsion system performance. Results suggest that the main engine limits propulsor array performance at take-off and static conditions, as it is unable to produce the power that would be required by the array at full rotational speed. The array speed is therefore spooled down to match the power produced by the main engines. Alternative power sources, such as battery storage, could provide the additional power needed to run the propulsion system at full power. However, despite the power limitation, the propulsion system would seem to achieve the necessary thrust for take-off, as estimated by previous research. While the outcome is represented as a power “deficit,” it is an expected aspect of operating different systems and turbomachinery components together. At off-design, the spools in a conventional turbomachine will also settle at a rotational speed that scales up or down depending on operating point and power requirements. The modeling process for the N3-X turbo-electric system highlights that the same requirement applies to systems connected through an electrical transmission system.
The thrust split parameter has been shown to be beneficial to the fuel consumption of the aircraft. Optimum thrust split for the simulated configuration lies at a thrust split of approximately 93% and offers an SFC saving of approximately 2.3% versus a propulsion system producing all thrust from the propulsor array. The optimum point occurs due to a combination of the transmission efficiency and array size effects linked to the boundary layer thickness and local flow velocity. An increase in transmission loss further favors producing a greater proportion of thrust from the main engines and decreases the optimum thrust split.
When considering performance of the aircraft as a whole, a new optimum thrust split of 93.6% may be derived, which reduces fuel consumption for the design mission by approximately 1.5%. Although cruise SFC at this point is slightly greater than the value obtained at 94.1% (the optimum for eSFC), the reduction in weight compensates for the increase in SFC. The outcome demonstrates the importance of considering the aircraft and its systems as a whole, especially in a novel aircraft such as the N3-X.
The results shown have demonstrated performance simulation for a propulsion system with a relatively fixed design focusing only on the thrust split parameter. However, the design of a propulsion system such as that of the N3-X suggests an optimization problem with a large number of parameters, as both the individual system components and the system as a whole may be designed to maximize the system efficiency.
NASA Glenn Research Center (Grant No. NNX13AI78G).
- c0 = aircraft centerline chord length
- Dnacelle = nacelle drag
- FGi =
- FG9 = gross thrust at nozzle exit
- h = inlet stream height
- boundary layer mass flow rate
- free-stream mass flow rate
- P =
- Re = Reynolds number
- u = local flow velocity
- w =
- x = axial distance from aircraft leading edge
- x0 = axial distance from aircraft nose
- y = vertical distance from aircraft surface
- δ = boundary layer thickness
- δ* = boundary layer displacement thickness
- θ = boundary layer momentum thickness
- θ* = boundary layer kinetic energy thickness
- ρ = density
- BLI = boundary layer ingestion
- DP = design point
- eBPR = effective bypass ratio
- eSFC = effective specific fuel consumption
- KEG = kinetic energy group
- LTO = landing and take-off
- MAG = mass flow group
- MOG = momentum group
- NPF = net propulsive force
- RTO = rolling take-off
- SLS = sea level static
- TET = turbine entry temperature
- TS = thrust split