Abstract

When it comes to multiphysics modeling and simulation, ever-improving advances in computational technologies have forced the user to manage higher resource complexity, while at the same time motivating the modeling of more complex systems than before. Consequently, the time for the user’s iterations within the context space characterizing all choices required for a successful computation far exceeds the time required for the runtime software execution to produce acceptable results. This paper presents metacomputing as an approach to address this issue, starting with a description of this high-dimensional context space. It then highlights the abstract process of multiphysics model generation and solution and proposes performing both top-down and bottom-up metacomputing. In the top-down approach, metacomputing is used for automating the process of generating theories, raising the semantic dimensionality of these theories into higher dimensional algebraic systems that enable simplification of the equational representation, and raising the syntactic dimensionality of the equational representation from 1D equational forms to 2D and 3D algebraic solution graphs that reduce solving to path-following. In the bottom-up approach, already existing legacy codes that have evolved over multiple decades are encapsulated at the bottom layer of a multilayer semantic framework that utilizes category-theory-based operations on specifications, enabling the user to spend time only on defining the physics of the relevant problem without having to deal with the rest of the details involved in deploying and executing its solution. Consequently, these two metacomputing approaches enable the automated generation, composition, deployment, and execution of directly computable multiphysics models.

Introduction

In the discipline of computational multiphysics, computational technologies have been evolving toward improving the efficiency, accuracy, and scope of computational models that encapsulate the representation of various physical systems. However, the great majority of the relevant approaches exploit pre-existing components that enable utilization of specific physics, and they do not address the role of the user in developing, composing, and deploying the resulting computational assemblies. Instead of only spending time to define the problem to be solved or modeled from a physics perspective, the user is also responsible for configuring the necessary input files and computational resources. Furthermore, the time for composing a theory, developing and deploying its computational implementations, and iterating in the high-dimensional context space spanned by the bases of choices that need to be made for obtaining acceptable solutions, in most cases far exceeds the execution time of the constructed computational implementation. Paradoxically, however, the great majority of innovations in hardware and software development have focused on improving the runtime execution performance of the relevant simulations and models, and not on improving the truly inefficient part of these models: their development, deployment, and reuse, which consistently take much longer for the user.

In order to address the inefficiency stemming from the user being in an iterative loop for formulating theories and their associated models, along with their composition and deployment on available computational fabrics, we have recently proposed that metacomputing approaches are needed to replace activities traditionally accomplished by the user with activities accomplished by the computer, but at a higher level of abstraction [1,2]. In this paper, we present a high-level outline of both top-down and bottom-up metacomputing efforts as an example of moving activity from the user’s brain to the available computational fabric in a manner that computes what needs to be computed. In the top-down context, theories and their models that in some cases took decades to develop can be derived in minutes or seconds if proper metacomputing approaches are applied. Furthermore, if the user considers their embedding in spaces of higher semantic and syntactic dimensionality, then they reduce to simpler and more powerful representations. More specifically, when the semantic dimensionality of a model is raised from one where all field quantities are defined in the algebra of reals to one defined over hypercomplex algebras (HAs), the problem to be solved can turn out to be a much simpler one; in some cases, instead of solving partial differential equations (PDEs), it requires just algebraic evaluation. For example, systems of coupled PDEs describing the spatiotemporal evolution of fields to be determined require numerical solution when the algebra of reals is used. If, alternatively, the theory is projected to the semantics of complex or quaternion algebras, then the problem may no longer require numerical solution of PDEs; instead, it may require just a straightforward evaluation of a system of algebraic equations expressed in terms of complex or quaternionic potentials.
On the syntactic dimensionality side, it can be demonstrated that instead of writing equations from left to right in the usual one-dimensional paradigm, their transformation to a directed acyclic graph (DAG) has the potential to generate algebraic solution graphs (ASGs) where the unknown quantities can be evaluated by simple path traversal between known and unknown quantities. Examples of developing the metacomputing infrastructure that automates the semantic and syntactic dimensionality raising are presented. In the bottom-up context, it is assumed that legacy codes developed over several decades are available and need to be composed and deployed in a manner that takes the user’s iterative adjustment role out of this incremental, inefficient, and painful (for the user) loop. The potential of using resource and problem specifications under the context of category theory is demonstrated as a way to implement metacomputing on specifications. In this manner, the user only needs to spend time to define the problem to be solved or modeled once, and let the proper infrastructure obtain a directly computable model of it. A discussion of the future directions of these efforts closes this paper.

The multiphysics modeling context space will be described briefly to expose the complexity of the decisions a user has to make in order to generate and/or solve a computational multiphysics model (CMM). Then a high-level description of the workflow required to develop and compute multiphysics models will follow. To demonstrate the feasibility and effectiveness of top-down metacomputing, three metacomputing software modules will be described. They are the “computational multiphysics equation builder” (CMEB) for deriving theories and their equational models; the “reals to hypercomplex” (R2hC) transformer module, which raises the semantic dimensionality of an equational model by mapping the state variables participating in the relevant theories that are defined over the field of reals to new ones defined over higher dimensional algebras such as the complex numbers (for 2D problems) and the quaternions (for 3D problems); and the “equations to graphs” (e2g) module, which embeds 1D equational systems into directed acyclic graphs termed algebraic solution graphs. To demonstrate the feasibility and effectiveness of the bottom-up metacomputing approach, a multi-meta-level architecture of a computational framework is outlined and its first computational implementation is presented. Finally, the present work ends with conclusions.

Multiphysics Modeling Context Space.

Equation-based CMMs are traditionally described in terms of state variables along with an analytical description that represents a logical conjunction of equations being true, forming an equational theory. Furthermore, the computational form of a CMM encapsulates the numerical form in a software representation to be executed on a hardware infrastructure, as reflected by the labels of the context space bases axes of Fig. 1. These four bases of “physics model specification,” “numerical model specification,” “software model specification,” and “hardware model specification” are themselves aggregated subspaces of collapsed bases, as shown in Fig. 1. The groupings of these sub-bases are depicted in the corresponding groups outlined with distinct rectangles. While the “physics model” contained in the aggregation is termed the “physics specification of the analytic model,” the numerical, software, and hardware instances of the model are all grouped to form the “computational model.” Each of these four bases is itself a subspace, spanned by several context sub-bases, described in detail elsewhere [1,2].

Fig. 1
Four-folded bases of the context space where multiphysics models are embedded along with their respective lists of sub-bases. The analytic physics model specification consists of 12 sub-bases, while the computational model is spanned from three bases, the numerical, the software, and the hardware specification bases each of which consists of 6, 12, and 5 sub-bases respectively, for a total of 35 sub-bases.

It is important to mention here that these four-folded or 35 unfolded bases of the context space depicted in Fig. 1 are not meant to be exhaustively all-inclusive, nor are they fixed in time as computational technologies evolve.

The purpose of describing the context space of an arbitrary CMM here is to draw attention to the fact that the user must make decisions reflecting a succession of points forming a trajectory in this (at least 35-dimensional) context space. If the results of such decisions are not adequate, the user must continuously iterate and follow a helicoidal path in this space that hopefully converges to an acceptable terminal outcome. This is a very time-consuming process, often unlikely to yield satisfactory results within the desired time requirements.

The last point raised in the previous section can be considered the main motivation for an alternate approach based on metacomputing. Our experience over multiple decades of modeling and simulation exercises has revealed that incrementally searching this context space for problem-solution requirements is dramatically more expensive than the actual execution time of the model algorithmics. This is because the user must manually identify, implement, and manage a high-dimensional problem specification without assistance, relying entirely on the user’s prior experience and knowledge, while the computer is used strictly for executing the symbolic or numeric processing associated with the computational implementation of the CMM. More specifically, our findings across multiple instances of developing and executing CMMs suggest that the user spends 75–98% of the total wall-clock time required for the effort, while the remainder is spent in the computational fabric. Clearly, this situation motivates the development of a methodology that enables users to shift the burden of implementation from their own reasoning to the computational fabric as much as possible. Additional motivation for developing a metacomputing infrastructure stems from the fact that it may improve accuracy, precision, length-scale bridging, and other desirable performance metrics.

A way to further identify the roles of the user and the computer in the derivation and evaluation or solution of CMMs is to consider the abstract workflow of CMM generation activities depicted in Fig. 2 [3]. Individual rectangles denote each major activity, while the thick arrows denote the succession and data flow from source to target. The first activity, instantiated on the far left of Fig. 2, is the definition by the researcher of the state space variables representing the fields of interest for a particular system. These are usually given in terms of conjugate state variables, the product of which has units of energy density. Then, the researcher has to invoke and formulate the specific form of the conservation laws applicable to the system in their local or global form. Following this, the researcher needs to develop the constitutive laws required to make the conservation laws algebraically closed, because the number of conservation laws is half the number of field variables characterizing the state of the system. Furthermore, the researcher may want to use formulation axioms like those associated with the constitutive theories, or axioms that enforce certain properties like reference frame invariance, equipresence, neighborhood, etc. A more detailed description of this process will be given later in the present work.

Fig. 2
Activities workflow for the formation and solution of equation-based multiphysics models and the respective background computing execution embedding

The combination of the last three activities enables the derivation of the governing equations (usually PDEs) represented by the next node to the right. After a boundary value problem that represents the CMM of the physical system is defined, the derived PDEs must be solved and the results visualized. The “person” and “computer monitor” icons of Fig. 2 denote that, almost exclusively, the solution and visualization of the PDEs require the use of numerical computing and the presence of the user to handle all the necessary dependencies. All other activities, in their great majority, have been handled by the individual researcher using pen-and-paper computing. Some small exceptions involving symbolic computing have appeared in the past, for the derivation and analytical solution of the PDEs and the associated formulation of the relevant boundary value problems (BVPs). It is implicit that all such activities are valid only if the researcher is aware of the meta-theoretic procedures necessary for making this an admissible workflow. These involve the belief that the conservation laws and thermodynamics are valid and should not be ignored. Furthermore, some implementation meta-axioms may be relevant here as well. Examples include whether infinitesimal or finite strain tensors should be used, or whether the material properties will be considered constant or dependent on some of the state variables such as temperature. Although the majority of the focus for CMMs during recent decades has been invested in the solution of the PDEs representing the CMM at hand, a significant effort is required to derive these PDEs, as indicated by the nodes to the left of the “solve the governing PDEs” activity in Fig. 2.

An alternate way to focus on the CMM generation process is to look at the hierarchical modeling structure depicting the various incarnations of a model attempting to mimic the behavior of a physical system as shown in Fig. 3. The two major approaches for implementing both the workflow and the individual activities associated with the derivation and solution of CMMs as depicted in Fig. 2, and in the CMM hierarchy as depicted in Fig. 3, are the user-defined Top-Down Architecture Approach (TDAA) and the semi-automatically defined Bottom-Up Architecture Approach (BUAA). Both of these approaches have been explored and initial steps have been demonstrated recently in Refs. [1,2]. An outline of them will be provided here to demonstrate their feasibility and benefits as a motivational opportunity for the future.

Fig. 3
Modeling hierarchy for a physical system and associated focus on the analytical and computational models

TDAA for Metacomputing.

Prior to describing the TDAA, we outline the opportunities associated with the semantic and syntactic spaces of CMMs in the context of the TDAA.

Semantic Space.

Within the semantic context, the variables and coefficients participating in a CMM representation can be instantiated not just in the field of real numbers, as has been done in the great majority of the available bibliography. HAs, being products of the Cayley–Dickson construction, members of the set of Clifford algebras that provide the grammar for geometric calculus, and quantizations of exterior algebras, offer the capability for a higher semantic dimensionality representation of equational theories. Their benefits in enabling lower equational complexity are largely under-recognized and under-utilized.

It is important to highlight that the key idea enabling the utilization of HAs, and specifically complex and quaternion algebras, for expressing 2D and 3D continuum multiphysics problems is based on a key observation: if the algebra of complex numbers is defined as C = {z = z1 + iz2 | (z1, z2) ∈ R^2} and the quaternion algebra is defined as H = {q = q1 + iq2 + jq3 + kq4 | (q1, q2, q3, q4) ∈ R^4}, and if {e1, e2} is the orthonormal basis of R^2 and {e1, e2, e3, e4} is the orthonormal basis of R^4, then to every vector z = z1e1 + z2e2 ∈ R^2 there corresponds the complex number z = z1 + iz2 ∈ C, and to every vector q = q1e1 + q2e2 + q3e3 + q4e4 ∈ R^4 there corresponds the quaternion q = q1 + iq2 + jq3 + kq4 ∈ H. Therefore, the dimensional equivalence between C and R^2 and between H and R^4 permits the identifications C ≅ R^2 and H ≅ R^4, respectively.
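
The correspondence above can be sketched in a few lines of code; this is an illustrative toy, with the quaternion represented as a plain 4-tuple ordered scalar, i, j, k:

```python
# Sketch of the R^2 <-> C and R^4 <-> H correspondences described above.
# Vectors in R^2 map to Python's built-in complex type; quaternions are
# represented as 4-tuples (q1, q2, q3, q4) with the Hamilton product.

def vec_to_complex(v):
    """Map a vector (z1, z2) in R^2 to z1 + i*z2 in C."""
    z1, z2 = v
    return complex(z1, z2)

def complex_to_vec(z):
    """Inverse map: z in C back to (Re z, Im z) in R^2."""
    return (z.real, z.imag)

def qmul(p, q):
    """Hamilton product of quaternions p and q, components ordered 1, i, j, k."""
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 - p4*q4,
            p1*q2 + p2*q1 + p3*q4 - p4*q3,
            p1*q3 - p2*q4 + p3*q1 + p4*q2,
            p1*q4 + p2*q3 - p3*q2 + p4*q1)

# Vector addition in R^2 corresponds to complex addition in C
u, v = (1.0, 2.0), (3.0, -1.0)
assert complex_to_vec(vec_to_complex(u) + vec_to_complex(v)) == (4.0, 1.0)

# The defining relations of H: i*i = -1 and i*j = k
assert qmul((0, 1, 0, 0), (0, 1, 0, 0)) == (-1, 0, 0, 0)
assert qmul((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)
```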

Limited efforts to exploit complex-valued potential functions defined over complex variables (i.e., approaches involving the first HA above the reals) have been identified in the past for 2D problems in elasticity [4–10], thermoelasticity [11,12], hygrothermoelasticity [13], fluid mechanics [14], etc. For all these cases, the problem of solving a single PDE or a set of PDEs is reduced to the problem of evaluating algebraic equations expressed in terms of complex potential functions, thus drastically simplifying the original problem. Similarly, during the last two decades, some work has been published involving quaternion algebra for solving 3D problems in elasticity [15–28], fluid mechanics [29,25,30], multiphysics problems such as electromagnetism [31–40], and thermoelasticity [41,42,20]. In all these cases as well, solving the PDEs reduces to evaluating algebraic equations, now expressed in terms of quaternionic potentials.
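
A classical instance of this reduction for 2D fluid mechanics may make the point concrete: ideal flow past a circular cylinder of radius a has the well-known complex potential w(z) = U(z + a^2/z), so instead of solving Laplace's equation numerically, one merely evaluates the derivative dw/dz = U(1 − a^2/z^2) to obtain the conjugate velocity u − iv at any point (U and a below are assumed illustrative values):

```python
import cmath

U, a = 1.0, 1.0  # free-stream speed and cylinder radius (illustrative values)

def conj_velocity(z):
    """Conjugate velocity u - i*v = dw/dz for flow past a cylinder, |z| > a."""
    return U * (1.0 - a**2 / z**2)

# At the top of the cylinder, z = a*e^{i*pi/2}, the flow speed attains the
# classical maximum of 2*U -- obtained purely by algebraic evaluation,
# with no PDE solver involved.
z_top = a * cmath.exp(1j * cmath.pi / 2)
assert abs(abs(conj_velocity(z_top)) - 2.0 * U) < 1e-12
```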

Consequently, it appears that allowing an equational representation to be redefined in a higher dimensionality semantic space can simplify a problem to the point where it nearly ceases to be one.

Syntactic Space.

Similarly, in the syntactic space context, it is often forgotten that the traditional equational representation is a string sequence of equational terms read from left to right, that is, a representation embedded in a 1D syntactic space. It is also often forgotten that the solution of equations via algebraically enabled rewriting methods is restricted by our insistence on writing equations in this 1D string formalism. The historical dominance of sentential representation systems in modern logic has obscured several important facts about diagrammatic systems. One of them is that several well-known diagrammatic systems were available as heuristic tools before the era of modern logic and algebra. Euler circles, Venn diagrams, and Lewis Carroll’s squares have been widely used for certain types of syllogistic reasoning [43–45]. Another not well-known story is that a founder of modern symbolic logic, Peirce, not only revised Venn diagrams but also invented a graphical system called “existential graphs,” which has been proven to be equivalent to a predicate language [46–49]. In the 1930s, Gentzen introduced the 2D expression called a “sequent” and demonstrated its efficiency for performing logical expression evaluation [50,51]. Furthermore, Brown [52] introduced another 2D diagrammatic system that was also focused on Boolean logic and propositional calculus. Subsequently, a few investigators in the area of reasoning and logic extended the syntactic dimensionality to 2D, motivated by the advent of computational progress in 2D graphical user interfaces [53–55]. However, these graphical representation systems have not been used in formal contexts such as proofs because they are considered to be unreliable, or they often cannot capture general cases.
Responding to the need for tools that can reconcile the apparently opposing issues of formal rigor and intuitive understanding, Buchberger proposed the new concept of logographic symbols [56,57] and implemented them in the theorem prover “Theorema” [58]. Nevertheless, none of these 2D diagrammatic and symbolic efforts addressed the equational aspects associated with theories of continua in terms of graph representations involving nodes and arrows connecting them.

To the authors’ knowledge, the first attempt to utilize graphs (weighted and undirected) for describing circuit networks was made by Kron [59]. Subsequently, equational representations of dynamical systems in the form of Bond graphs were introduced by Paynter [60]. Then, DAGs were introduced for the first time as ASGs by Mast [61–63] for representing and solving elasticity problems expressed over tensor quantities defined in the algebra of complex numbers, and in Ref. [9] it was proposed to extend them to the multiphysics of continua. Independently and unaware of the ASG efforts, Tonti introduced DAGs for equational theory representation [64–66], with all quantities labeling the DAG nodes and edges defined over the field of reals to denote equational theories. In the 1980s, Deschamps [67] utilized DAGs with nodes representing scalar and vector quantities to represent Maxwellian electromagnetics. Finally, “Formal” graphs were introduced [69] for electromagnetics, where the nodes and edges were labeled by scalar- and vector-based equational components.

It should be underlined here that although in form the ASGs are reminiscent of Tonti [68], Bond [60], Kron [59], Deschamps [67], and Formal [69] graphs (TBKDF graphs hereafter), these graphs are functionally very different from ASGs due to two critical features:

  1. Although TBKDF graphs enable compositionality, they are not endowed with direct computability because they do not contain an isomorphic syntactic and semantic evaluation capability. The typing of the arrow labels involves operators that do not allow the syntactic operator of concatenation to act consistently for all arrows as signifying tensor product or function application, as it does in ASGs.

  2. TBKDF graphs are not capable of expressing algebraic theories in terms of the two required fundamental operators where one distributes over the other (e.g., as tensor multiplication distributes over tensor addition) to enable algebraic semantics of tensor operations. Therefore, path-finding algorithms cannot be implemented for constructing the compositional operations implementing transitivity for direct evaluation, as in the case of ASGs.
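
The path-following evaluation that distinguishes ASGs can be illustrated with a deliberately minimal sketch (this is not the authors' ASG implementation, which also encodes the operator typing discussed above): nodes hold physical quantities, each directed edge carries the operator mapping source to target, and an unknown is evaluated by composing edge operators along a path from a known node. The quantities and coefficients below are illustrative:

```python
# Minimal path-following sketch over a DAG of quantities: each edge stores
# the operator that maps its source quantity to its target quantity.
edges = {
    # (source, target) -> operator applied to the source's value
    ("strain", "stress"): lambda e: 200e9 * e,   # 1D Hooke's law, E = 200 GPa
    ("stress", "force"):  lambda s: s * 1e-4,    # multiply by area A = 1e-4 m^2
}

def follow(graph, start, goal, value):
    """Depth-first traversal from start to goal, composing edge operators."""
    if start == goal:
        return value
    for (src, dst), op in graph.items():
        if src == start:
            result = follow(graph, dst, goal, op(value))
            if result is not None:
                return result
    return None  # no path: the unknown is not reachable from this known node

# Evaluate the unknown "force" from the known "strain" by path traversal alone
force = follow(edges, "strain", "force", 1e-3)   # imposed strain of 0.1%
assert force == 200e9 * 1e-3 * 1e-4
```

The composition of the two edge operators along the path realizes the transitivity mentioned in point 2: evaluation reduces to graph search plus operator application.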

TDAA Approach Outline.

To automate as many of the workflow activities depicted in Fig. 2 as possible and introduce the benefits of semantic and syntactic dimensionality raising, we have expanded and somewhat refactored the workflow, resulting in Fig. 4, as described in Ref. [1]. It should be noted that two new activity nodes have been inserted to represent the semantic and syntactic dimensionality raising of the models derived from the previous activities. The computational embeddings of the activity nodes are represented by the background rectangles. These signify both the pre-existing state-of-the-art and the new one described later in this work.

Fig. 4
Proposed activity workflow, refactoring, and expanding for the formation and solution of equation-based multiphysics models. The respective background computational embedding, along with computational modules (dashed lines) that implement the corresponding functionalities, are also shown.

The software modules developed for automating these activities are represented in Fig. 4 by the dashed outlined rectangles with rounded corners, and they contain the activities they implement. The CMEB enables the activities of (1) defining the state variable pairs describing the state of a multiphysics system, (2) formulating the conservation laws, (3) achieving their algebraic closure through the development of constitutive relations, (4) enriching them with various axioms of the classical constitutive theory as well as other assumptions, and finally (5) generating the respective algebraically closed system of PDEs governing the spatiotemporal behavior of the system of interest. The R2hC module is responsible for converting the PDEs developed by the previous module from equations in variables defined over the field of real numbers R to a set of equations defined over the field of complex numbers C or the quaternion algebra H. This effectively raises the semantic dimensionality of the models and reduces their representational complexity. The e2g module is responsible for converting the equational form of the model developed by the previous module to an algebraic solution graph. This module effectively raises the syntactic embedding dimensionality to two or three dimensions and reduces both the representational and solution complexity.
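
At a high level, the three modules form a pipeline in which each stage transforms the previous stage's output. The following sketch models that composition only; the function names mirror the module names, but their bodies and the dictionary keys are placeholders, not the actual module APIs:

```python
# Hypothetical sketch of the CMEB -> R2hC -> e2g pipeline as a composition
# of transformations (stage outputs are placeholder dictionaries).

def cmeb(spec):
    """Derive the governing PDEs over R from a problem specification."""
    return {"pdes_over_R": f"PDEs for {spec}"}

def r2hc(model):
    """Raise semantic dimensionality: re-express the model over C or H."""
    return {**model, "pdes_over_H": "hypercomplex form"}

def e2g(model):
    """Raise syntactic dimensionality: embed the equations in an ASG (a DAG)."""
    return {**model, "asg": "algebraic solution graph"}

model = e2g(r2hc(cmeb("thermoelastic plate")))
assert set(model) == {"pdes_over_R", "pdes_over_H", "asg"}
```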

Continuum Multiphysics Equation Builder.

The CMEB system is a computational infrastructure for the automated derivation, composition, and deployment of models. It enables a user to be responsible only for specifying the physics/engineering problem to be simulated and frees them from the tasks of manually selecting the details of a simulation tool, its connectivity, and associated data files and other dependencies. The entire algorithmic development of the system is based on the following meta-axioms for open and closed continuum multiphysics systems:

  1. Conservation laws for mass, linear and angular momentum, and energy hold [3].

  2. The electromagnetic principles of electric charge conservation, Gauss’s law, Gauss’s law for magnetism, Faraday’s law of induction (Maxwell–Faraday law), and Ampère’s circuital law (with Maxwell’s addition) hold. The last four laws, representing Maxwell’s PDEs of electrodynamics, can also be represented by the two conservation laws of the magnetic vector and electric scalar potentials [3].

  3. The laws of thermodynamics of continua [70] hold.

  4. Among all possible formalisms available, the particular formalism utilized for implementing thermodynamics in CMEB is that of the theory of irreversible processes near equilibrium [70].

  5. Neumann’s principle, or the principle of symmetry [71], states that “if a crystal is invariant with respect to certain symmetry operations, any of its physical properties must also be invariant with respect to the same symmetry operations.” This principle is taken to hold for any (not necessarily crystalline) material that exhibits symmetry of any kind (e.g., laminated composite materials homogenized at either the lamina or laminate level).

The CMEB framework establishes constitutive relations, employs the relevant conservation laws, and introduces formulation axioms in order to automatically derive constitutive and governing equations for continua exposed to multiple fields. It is evident that the computational and prototyping resource for implementing CMEB should be capable of symbolic manipulation, which is the main activity performed by the various investigators on the topic. The Mathematica [72] symbolic algebra and programming environment was chosen for these tasks mainly because of its rich set of features and maturity.

The CMEB infrastructure applies intelligent automation through a design mainly consisting of an architecture involving four sub-modules, outlined in Fig. 5. These sub-modules are described as follows:

Fig. 5
Outline of the workflow implemented by the CMEB module for generating CMMs defined over the field of real numbers R

Part I—User Input: Through an interrogative custom-generated graphical user interface (GUI), the user is asked to provide only six selections to generate the PDEs that represent and govern the CMM. The user also has the option to declaratively incorporate additional characteristics of the continuum theory of interest.

The following user input options are available for each selection widget of the GUI:

  1. Physics: elastic, thermal, chemical, electrical, and magnetic.

  2. Independent variables: (strain or stress), (entropy density or temperature), (chemical species concentration or chemical potential per unit volume), (electric displacement field or electric field), and (magnetic field or magnetic flux density field). The remaining variable of each pair is the dependent one.

  3. Number of charged and uncharged species given by a positive integer.

  4. Material symmetry class: anisotropic, monoclinic, orthotropic, transversely isotropic, cubic, or isotropic material symmetry.

  5. Order of constitutive laws: linear, quadratic, cubic, etc.

  6. Spatial dimension: 2D or 3D.
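
The six selections above amount to a declarative problem specification. The following is a hypothetical sketch (not CMEB's actual interface) of how such a specification could be captured as a plain data structure:

```python
# Hypothetical specification object mirroring the six GUI selections;
# the field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class ModelSpec:
    physics: tuple            # e.g., ("elastic", "thermal")
    independent_vars: tuple   # one member chosen from each conjugate pair
    n_species: int            # number of charged and uncharged species
    symmetry: str             # e.g., "orthotropic"
    constitutive_order: int   # 1 = linear, 2 = quadratic, ...
    spatial_dim: int          # 2 or 3

spec = ModelSpec(physics=("elastic", "thermal"),
                 independent_vars=("strain", "temperature"),
                 n_species=0, symmetry="isotropic",
                 constitutive_order=1, spatial_dim=3)
assert spec.spatial_dim in (2, 3)
```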

Part II—Derivation of Coupled Constitutive Laws From a Thermodynamic Potential: As alluded to in the previous sub-module description, the state of the system defined by the input section can generally be described by four pairs of conjugate field variables: one each for structural mechanics, thermal physics (heat transport), electric physics, and magnetic physics. In addition, there are n pairs for mass transport physics for n diffusing chemical species, making a possible maximum total of 4 + n conjugate pairs. The user’s selections of independent variables implicitly define which thermodynamic potential will be used. However, the user has the alternative option to directly select one of the possible thermodynamic potentials (corresponding to the choice, for each conjugate pair, of which component will be the independent and which the dependent variable) for introduction into the first and second laws of thermodynamics. For the case of a single chemical species, there are five independent variable fields, and therefore 2^5 = 32 possible thermodynamic potentials. Any of the 32 is possible; however, the GUI provides the user with the option to select one of the named, well-known functions: the internal energy density U(ε,η,C,D,H), the Helmholtz free energy density ψ(ε,T,C,D,B), the generalized enthalpy density Ω(σ,η,C,E,B), the Gibbs free energy density Γ(σ,T,C,E,B), or the Landau (or grand) potential density Λ(ε,T,μ,E,B), where σ, ε, T, η, C, μ, D, E, H, B represent the stress and strain second-order tensors, the temperature, the entropy density, the species concentration in the continuum, the chemical potential, and the electric displacement, electric field, magnetic field, and magnetic flux density vectors, respectively.
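
The count of 2^5 = 32 potentials follows directly from choosing, for each of the five conjugate pairs, which member serves as the independent variable. A sketch of that enumeration (the pair labels are illustrative shorthand):

```python
# Enumerate the 2^5 = 32 thermodynamic potentials for a single chemical
# species: one independent variable is chosen from each conjugate pair.
from itertools import product

conjugate_pairs = [
    ("strain", "stress"),
    ("entropy", "temperature"),
    ("concentration", "chemical potential"),
    ("electric displacement", "electric field"),
    ("magnetic field", "magnetic flux density"),
]

potentials = list(product(*conjugate_pairs))
assert len(potentials) == 32

# e.g., the choice of independent variables corresponding to a
# Helmholtz-like free energy density psi(strain, T, C, D, B):
helmholtz_like = ("strain", "temperature", "concentration",
                  "electric displacement", "magnetic flux density")
assert helmholtz_like in potentials
```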

It should be noted that the nine constitutive axioms of causality, determinism, local action, material frame indifference, dissipation, equipresence, time reversal, memory, and admissibility [73] participate in the declarative part of the implementation as specified by the user.

Subsequently, this sub-module of CMEB performs a multivariate Taylor series expansion of the selected thermodynamic potential about the origin up to order m + 1, where m is the order of the desired constitutive law, also defined by the input sub-module. Finally, as dictated by the second law of thermodynamics, the constitutive laws of the system for the dependent variables are determined by symbolically evaluating the first-order derivatives of the Taylor series expansion with respect to the respective conjugate (independent) variables.
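The derivative-taking step just described can be sketched numerically as follows (an illustrative Python sketch; CMEB itself is symbolic and implemented in Mathematica). For a quadratic, i.e., second-order Taylor, potential P(x) = (1/2) x·H·x, the constitutive laws are the first derivatives, i.e., the linear map x → H x. The 2 × 2 matrix H is a hypothetical stand-in for the matrix of material constants.

```python
# Hypothetical 2x2 Hessian of the potential at the origin (material constants).
H = [[3.0, 0.5],
     [0.5, 2.0]]

def P(x):
    # Quadratic potential P(x) = (1/2) x.H.x
    return 0.5 * sum(x[i]*H[i][j]*x[j] for i in range(2) for j in range(2))

def grad_fd(x, h=1e-6):
    # Central-finite-difference gradient of P (numerical stand-in for the
    # symbolic first derivatives that CMEB evaluates).
    g = []
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((P(xp) - P(xm)) / (2.0*h))
    return g

x = [1.0, 2.0]
# "Constitutive law": the analytic first derivative is the linear map H x.
analytic = [sum(H[i][j]*x[j] for j in range(2)) for i in range(2)]
print(analytic)  # [4.0, 4.5]
```

The finite-difference gradient agrees with the analytic linear law, mirroring the fact that a second-order expansion of the potential yields linear constitutive equations.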

Part III—Incorporate Selected Symmetry in Constitutive Forms: Neumann’s principle is applied to simplify the form of the constitutive equations when certain material symmetries are known. Specifically, for each symmetry class, physics, and order of constitutive laws selected by the user via the input module, a set of matrices is generated based on an internal algorithm that follows Neumann’s principle, or principle of symmetry [74,75]. This principle was originally derived for crystalline substances, but here it is postulated, by the fourth meta-axiom above, to hold for any material that exhibits symmetries due to its constituent makeup (such as laminated composites). Accordingly, each symmetry class is described by a well-known material symmetry group G (a set of tensors).

In this sub-module, an algorithm was written to solve for the elements of the relevant tensors C of various orders such that C = Q ⋆ C for every symmetry transformation Q ∈ G, where ⋆ denotes the Rayleigh product. The Rayleigh product is defined for tensors of any rank; therefore, this algorithm generates constitutive material laws for any physics and any order.
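A minimal numerical illustration of this symmetry step (a sketch, not the paper's general any-rank symbolic algorithm): for a symmetric second-rank tensor in 2D and a four-fold rotation group, averaging Q K Qᵀ over the group produces a tensor satisfying the invariance condition for every Q, which forces the isotropic form.

```python
import math

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def rot(th):
    # Rotation matrix through angle th
    return [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]

G = [rot(n*math.pi/2) for n in range(4)]   # four-fold rotation group
K = [[3.0, 1.2], [1.2, 2.0]]               # generic symmetric second-rank tensor

# Group average: K_inv = (1/|G|) sum_Q  Q K Q^T  lies in the invariant subspace.
K_inv = [[sum(matmul(matmul(Q, K), transpose(Q))[i][j] for Q in G)/len(G)
          for j in range(2)] for i in range(2)]
print(K_inv)  # ~[[2.5, 0], [0, 2.5]]: the isotropic form is enforced
```

The off-diagonal entries vanish and the diagonal entries coincide, which is exactly the restriction Neumann's principle imposes for this symmetry group.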

Examples of tensors obtained by this algorithm are (a) stiffness tensor (effects of symmetry on fourth-rank tensors), (b) piezo-electric tensor (effects of symmetry on third-rank tensors), (c) dielectric permeability, electromagnetic and magnetic-permeability tensors, electric and thermal conductivity, thermal expansion, and hygroscopic expansion (effects of symmetry on second-rank tensors).

Part IV—Automated Derivation of 2D or 3D Field Governing Equations: Utilizing the local form of the conservation laws of mass, momentum, and energy, this module derives a set of PDEs that correspond to the physics selected. However, these PDEs are algebraically not closed, as both dependent and independent variables participate, clearly indicating that there are fewer equations than unknowns. To generate an algebraically closed system of PDEs, this sub-module applies the constitutive equations derived by the previous sub-module along with the additional relationships required. That is, it applies the gradient equations of material kinematics connecting the displacement vector with the strain tensor, and then the constitutive law connecting the stress tensor with all additional fields (if present). It also applies the Fourier constitutive law of heat conduction, Fick’s first law of diffusion, and Gauss’s law for magnetism. The resulting algebraically closed system of governing equations can be written in either 2D or 3D based on the user’s selection in the input sub-module. This system includes a mass transport PDE for each of the participating species (originating from mass conservation), the structural equilibrium PDEs (originating from the conservation of momentum), the coupled heat conduction equation (originating from energy conservation), and Maxwell’s equations for electromagnetic fields in their original form or in their magnetic vector and electric scalar potential forms.

Finally, this sub-module of CMEB has a rewriting engine to generate the derived equations in LaTeX [76] and PDF format for publication purposes.

A Three Field Computational Multiphysics Model Example: Linear Anisotropic Hygro-Thermo-Elasticity.

When the user requests the derivation of the CMM that encapsulates 3D linear anisotropic hygro-thermo-elasticity for one species, the following outputs are produced by CMEB. The first output is the Taylor expansion of the Helmholtz free energy density function up to second order in the neighborhood of the natural state P(0, T0, C10) representing the origin of the system:
$$P(\varepsilon_{ij},T,C_1)=\tfrac{1}{2}\,\varepsilon_{ij}\varepsilon_{kl}\,\frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial\varepsilon_{kl}}\bigg|_{T,C_1}+\tfrac{1}{2}(T-T_0)^2\,\frac{\partial^2 P}{\partial T^2}\bigg|_{\varepsilon_{ij},C_1}+\tfrac{1}{2}(C_1-C_{10})^2\,\frac{\partial^2 P}{\partial C_1^{2}}\bigg|_{\varepsilon_{ij},T}+\varepsilon_{ij}(T-T_0)\,\frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial T}\bigg|_{C_1}+\varepsilon_{ij}(C_1-C_{10})\,\frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial C_1}\bigg|_{T}+(T-T_0)(C_1-C_{10})\,\frac{\partial^2 P}{\partial T\,\partial C_1}\bigg|_{\varepsilon_{ij}}$$
(1)
where εij, T, T0, C1, C10 represent respectively the components of the second-order strain tensor ε, the temperature, the initial temperature in the continuum of interest, the relative concentration of species “1”, and the initial concentration of that species in the continuum of interest (i.e., the independent state field variables of the system under consideration). In the above expression, and for the rest of the paper, the summation convention for repeated indices is used. As stated above, the reference state is assumed to be stress/strain free, therefore εij0 does not appear in the above equation.
The second output is the definition of the materials constants of the system
$$\frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial\varepsilon_{kl}}\bigg|_{T,C_1}=s_{ijkl},\ s_{ijkl}\in\mathbb{S};\quad \frac{\partial^2 P}{\partial T^2}\bigg|_{\varepsilon_{ij},C_1}=\frac{\rho_0 c_v}{T_0};\quad \frac{\partial^2 P}{\partial C_1^{2}}\bigg|_{\varepsilon_{ij},T}=\frac{b}{C_{10}};$$
(2)
$$\frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial T}\bigg|_{C_1}=\tilde{\alpha}_{ij},\ \tilde{\alpha}_{ij}\in\tilde{\boldsymbol{\alpha}};\quad \frac{\partial^2 P}{\partial\varepsilon_{ij}\,\partial C_1}\bigg|_{T}=\tilde{\beta}_{ij},\ \tilde{\beta}_{ij}\in\tilde{\boldsymbol{\beta}};\quad \frac{\partial^2 P}{\partial T\,\partial C_1}\bigg|_{\varepsilon_{ij}}=\chi$$
(3)
where $s_{ijkl}$, $\mathbb{S}$, $\rho_0$, $c_v$, $b$, $\chi$ represent respectively the components of the fourth-order Hooke’s tensor, the Hooke’s tensor, the initial material density, the heat capacity or specific heat per unit mass, the material constant coupling the chemical potential to the species concentration, and the coupling coefficient between temperature and species concentration. In the above, $\tilde{\alpha}_{ij}$ are the components of $\tilde{\boldsymbol{\alpha}}=\mathbb{S}:\boldsymbol{\alpha}$, with $\boldsymbol{\alpha}$ being the second-order thermal expansion tensor, and $\tilde{\beta}_{ij}$ are the components of $\tilde{\boldsymbol{\beta}}=\mathbb{S}:\boldsymbol{\beta}$, with $\boldsymbol{\beta}$ being the second-order moisture expansion tensor.

Also, the vertical line signifies that this is the value of the partial derivative of the quantity to the left of it, while the quantities in the subscripts are constant in a manner consistent with the theory of thermodynamics of irreversible processes.

The third output provided by CMEB is the constitutive equations
$$\boldsymbol{\sigma}(\mathbf{x},t)=\mathbb{S}:\left[\boldsymbol{\varepsilon}(\mathbf{x},t)+\boldsymbol{\alpha}\,(T(\mathbf{x},t)-T_0)+\boldsymbol{\beta}\,(C_1(\mathbf{x},t)-C_{10})\right]$$
(4a)
$$\eta(\mathbf{x},t)-\eta_0=\tilde{\boldsymbol{\alpha}}:\boldsymbol{\varepsilon}(\mathbf{x},t)+\frac{\rho_0 c_v}{T_0}(T(\mathbf{x},t)-T_0)+\chi\,(C_1(\mathbf{x},t)-C_{10})$$
(4b)
$$\mu_1(\mathbf{x},t)-\mu_{10}=\tilde{\boldsymbol{\beta}}:\boldsymbol{\varepsilon}(\mathbf{x},t)+\chi\,(T(\mathbf{x},t)-T_0)+\frac{b}{C_{10}}(C_1(\mathbf{x},t)-C_{10})$$
(4c)
where σ,x,t,η,η0,μ1,μ10 represent respectively the second-order Cauchy stress tensor, the position point {x, y, z} of the Cartesian frame of reference, the time, the entropy density, the initial entropy density, the chemical potential of the species, and the initial chemical potential of the species involved in mass transport.
The final output is the algebraically closed system of field PDEs
$$\rho\ddot{\mathbf{u}}-\nabla\cdot\left(\mathbb{S}:\left[\tfrac{1}{2}\left((\nabla\mathbf{u})^{T}+\nabla\mathbf{u}\right)+\boldsymbol{\alpha}\,(T-T_0)+\boldsymbol{\beta}\,(C_1-C_{10})\right]\right)=\mathbf{f}$$
(5a)
$$T_0\tilde{\boldsymbol{\alpha}}:\tfrac{1}{2}\left[(\nabla\dot{\mathbf{u}})^{T}+\nabla\dot{\mathbf{u}}\right]+\rho_0 c_v\dot{T}+T_0\chi\dot{C}_1-\nabla\cdot(\mathbf{k}\nabla T)=\rho_0 W_T$$
(5b)
$$C_{10}\tilde{\boldsymbol{\beta}}:\tfrac{1}{2}\left[(\nabla\dot{\mathbf{u}})^{T}+\nabla\dot{\mathbf{u}}\right]+C_{10}\chi\dot{T}+b\dot{C}_1-\nabla\cdot(\mathbf{d}_1\nabla C_1)=\rho_0 W_M$$
(5c)
where f, k, WT, d1, WM represent respectively the body forces vector, the second-order thermal conductivity tensor, the heat transport source term, the mass transport equivalent conductivity, and the mass transport source term. It is important to underscore here that the derived equations are in agreement with the classical literature on the topic [12].

It should be noted that in the nomenclature above and for the rest of the present work, regular italic symbols represent scalar quantities, bold italic symbols represent vectors, bold regular symbols represent second-order tensors, and double-lined symbols represent tensors of order higher than second.

The final product of CMEB is the governing PDEs, and therefore they can either be solved numerically by following any of the available discretization methods (i.e., finite differences, finite volumes, finite elements, lattice Boltzmann method, etc.) or alternatively, be transformed to simpler formalisms (involving algebraic equations or simpler PDEs) by the “R2hC” module described in the next section.

Semantic Equation Dimensionality Raiser From R to hyperComplex (C or H) Algebras Framework

R2hC Architecture and Functionality Description.

The benefits of algebraic dimensionality raising by rewriting theoretical formalisms developed by using variables defined over the field of reals R to those defined over the field of complex numbers C have been recognized since the beginning of the twentieth century, as described in the Introduction. In this regard, the R2hC module is responsible for converting the PDEs developed by CMEB in terms of variables defined over the field of real numbers R to a set of algebraic equations defined over the field of complex numbers C or the algebra of quaternions H. This effectively raises the semantic dimensionality of the models and reduces their representational complexity. The Mathematica [72] symbolic algebra and programming environment was also chosen for its implementation.

The R2hC infrastructure applies intelligent automation through a design implementation mainly consisting of an architecture involving the sub-modules depicted in Fig. 6. These sub-modules are described as follows:

Fig. 6
Architectural outline of the R2hC module workflow for generating CMMs defined over the field of complex numbers C or quaternions H

Part I—User Input: Similar to the case of CMEB, the user is asked to provide only four selections needed by R2hC through an interrogative GUI. The user also has the option to make declarative incorporation of additional characteristics of the continuum theory of interest. The input options are as follows:

  1. Physics: elastic, thermal, chemical, electrical, and magnetic.

  2. Spatial dimension: 2D or 3D.

  3. Material symmetry class: anisotropic, monoclinic, orthotropic, transversely isotropic, cubic, or isotropic.

  4. Problem domain/topology: Finite simply-connected (i.e., without inclusions/holes), finite with inclusions, or infinite with inclusions.

Part II—Field Governing Equations: This module takes the PDE output of the CMEB framework and prepares it for solving various problems based on the user’s selections in the previous sub-module. Alternatively, it allows the user to define the PDE of interest. The dimensionality of the problem domain specified in the previous sub-module is used here to enable the branching transformation from the algebra of R onto the algebra of C or H. It also prepares the PDE forms for the appropriate transformations to be implemented in the following sub-modules.

Part III—R→C Mapping for 2D Problems: The following steps are implemented in this sub-module to enable solving 2D problems defined over the algebra of C:

  1. Select proper PDE formalism based on isotropic or anisotropic material symmetries.

  2. Select the appropriate methodology for utilizing complex potentials. The options vary from the single-physics ones described in Refs. [4,5,77,78] to the multiphysics ones utilizing the Papkovish–Neuber representation [79,80] or the Galerkin–Westergaard representation of potential functions [81–83]. Although some of these have been developed for 3D problems, they can be applied to 2D problems as well. In the current implementation of R2hC, only the Kolosov–Muskhelishvili (K–M) approach involving holomorphic complex potential functions has been implemented.

  3. Formulate the isotropic Airy’s biharmonic PDE and the associated function [84,85], along with the Navier PDE for displacements enhanced by additional physics, in terms of Neuber, Papkovish, or Galerkin potentials for isotropic solids. Alternatively, for anisotropic bodies, formulate the anisotropic Airy’s PDE as introduced by Lekhnitskii [7,86].

  4. Perform the Airy and Navier mapping R→C by introducing the commutative complex-number algebra $\mathbb{C}=\{z=x+iy,\ \bar{z}=x-iy,\ (x,y)\in\mathbb{R}^2\}$, where $z$ and $\bar{z}$ are the complex variable and its conjugate and $i$ is the imaginary unit, and then apply the chain rule of differentiation and the Cauchy–Riemann conditions.

  5. Perform algebraic reduction in terms of the involved potentials, which are now all functions defined as $f(z,\bar{z}):\mathbb{C}\to\mathbb{C}$.

  6. Select a particular domain topology to distinguish among simply connected, multiply connected, finite or infinite domains of applicability.

  7. Define boundary and/or continuity conditions to select proper complex potential functions such that they are satisfied by the selected general forms of the complex potentials.

  8. Formulate Riemann–Hilbert problem for multiply-connected domains or power series with collocation method for simply-connected domains according to the methods described in [4,5,78].

  9. Determine required admissible complex potentials.
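Step 4 above hinges on the Cauchy–Riemann conditions, which are what allow the chain rule to collapse the real-variable derivatives into derivatives with respect to z and z̄. A small finite-difference check (illustrative Python, not part of R2hC, which performs this step symbolically) confirms them for the holomorphic choice φ(z) = z², i.e., u = x² − y² and v = 2xy:

```python
def u(x, y):
    # Real part of z**2
    return x*x - y*y

def v(x, y):
    # Imaginary part of z**2
    return 2.0*x*y

def d(f, x, y, wrt, h=1e-6):
    # Central finite difference of f with respect to x or y
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2.0*h)
    return (f(x, y + h) - f(x, y - h)) / (2.0*h)

x0, y0 = 0.7, -1.3
print(abs(d(u, x0, y0, 'x') - d(v, x0, y0, 'y')) < 1e-6)  # True: u_x = v_y
print(abs(d(u, x0, y0, 'y') + d(v, x0, y0, 'x')) < 1e-6)  # True: u_y = -v_x
```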

A typical illustrative example that demonstrates the benefits of semantic dimensionality raising from R to C is shown for the case of isotropic and anisotropic elasticity in Fig. 7, where the case of quasi-static elasticity without body forces is considered. In R, the solution comes either from solving the three partial differential equations of equilibrium or the biharmonic Airy PDE. However, in C only a system of five algebraic equations (called the K–M relations [4,5] for the isotropic case) needs to be evaluated in terms of two complex potential functions. For the anisotropic case, the equations are evaluated in terms of six complex potential functions [7,86]. Therefore, the problem of solving the appropriate PDEs in R has been reduced to the problem of determining the complex potentials required to evaluate algebraic equations.
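This evaluation step can be sketched concretely with the classical K–M stress relations (an illustrative Python sketch; R2hC itself is symbolic, in Mathematica). The potentials below are the standard textbook choice for uniform uniaxial tension s_inf along x, an assumed example rather than CMEB/R2hC output:

```python
# Classical Kolosov-Muskhelishvili relations:
#   sxx + syy            = 4 Re phi'(z)
#   (syy - sxx) + 2i sxy = 2 (conj(z) phi''(z) + psi'(z))
s_inf = 10.0
phi_p  = lambda z: complex(s_inf / 4.0)    # phi'(z) for uniaxial tension
phi_pp = lambda z: 0j                      # phi''(z)
psi_p  = lambda z: complex(-s_inf / 2.0)   # psi'(z)

def km_stresses(z):
    # Stresses follow by plain algebraic evaluation: no PDE solving.
    s_sum  = 4.0 * phi_p(z).real
    s_diff = 2.0 * (z.conjugate() * phi_pp(z) + psi_p(z))
    sxx = (s_sum - s_diff.real) / 2.0
    syy = (s_sum + s_diff.real) / 2.0
    sxy = s_diff.imag / 2.0
    return sxx, syy, sxy

print(km_stresses(1.0 + 2.0j))  # (10.0, 0.0, 0.0)
```

At every point the uniform uniaxial stress state is recovered by evaluation alone, which is precisely the reduction that Fig. 7 illustrates.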

Fig. 7
Schematic description of the effects that dimensionality raising from R to C has for the case of the isotropic and anisotropic elasticity. Instead of solving PDEs, we only need to identify complex potentials that, when replaced into an algebraic system of equations, the unknown quantities (stress and displacement components) can be determined by simple evaluation.

Part IV—R→H Mapping for 3D Problems: The following steps are required to solve a 3D problem in a manner analogous to that described earlier for the 2D problems case:

  1. Select proper PDE formalism based on isotropic or anisotropic material symmetries.

  2. Select the appropriate methodology for utilizing quaternion-based potentials. The options vary from the single-physics ones, as in elasticity [15–28] and fluid mechanics [25,29,30], to the multiphysics ones, such as electromagnetism [31–34,36,37,39,40] and thermoelasticity [20,41,42], utilizing the Papkovish–Neuber representation [79,80] of potential functions in terms of quaternion potential functions. In the current implementation of R2hC, the 3D extension of the K–M relations according to Ref. [20] has been initiated.

  3. Formulate the isotropic Airy’s biharmonic and Navier’s PDEs or their extended forms (for additional physics) and the associated functions [84,85] enhanced by additional physics in terms of Neuber, Papkovish, or Galerkin potentials for isotropic solids. The case of 3D anisotropic bodies does not have a Navier extension yet; therefore, it has not been considered for this sub-module.

  4. Perform the mapping R→H by introducing the non-commutative quaternion algebra $\mathbb{H}=\{q=q_1+iq_2+jq_3+kq_4,\ (q_1,q_2,q_3,q_4)\in\mathbb{R}^4\}$, where $i$, $j$, $k$ are imaginary units satisfying the multiplication rules $i^2=j^2=k^2=-1$; $ij=-ji=k$; $jk=-kj=i$; $ki=-ik=j$, and where $q$, $Sc[q]=q_1$, $Vec[q]=iq_2+jq_3+kq_4$, $\bar{q}=q_1-iq_2-jq_3-kq_4$, $\hat{q}=-kqk=q_1-iq_2-jq_3+kq_4$, $|q|=\sqrt{q\bar{q}}=\sqrt{q_1^2+q_2^2+q_3^2+q_4^2}$, and $q^{-1}=\bar{q}/|q|^2$ are respectively the quaternion variable, its scalar part, its vector part, its conjugate, its k-involute, its norm, and its inverse (when $q\neq 0$). Then apply the chain rule of differentiation and the generalized Cauchy–Riemann conditions for $\mathbb{H}$ [42].

  5. Perform algebraic reduction in terms of the involved potentials, which are now all functions defined as $f_q(q,\bar{q},\hat{q},Sc[q],Vec[q]):\mathbb{H}\to\mathbb{H}$.

  6. Select a particular domain topology to distinguish among simply connected, multiply connected, finite or infinite domains of applicability.

  7. Define boundary and/or continuity conditions in order to select proper quaternionic potential functions such that they are satisfied by the selected general forms of the potentials.

  8. Formulate the problem for multiply-connected domains or the collocation method for simply-connected domains according to the methods described in any of Refs. [15–28].

  9. Determine required admissible quaternion potentials.
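The quaternion algebra introduced in step 4 can be made concrete with a minimal implementation (an illustrative Python sketch; the paper's R2hC module is implemented in Mathematica, and the sign conventions below simply follow the multiplication rules stated above):

```python
class Q:
    """Minimal quaternion q = a + b*i + c*j + d*k."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        # Hamilton product using i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j
        a, b, c, d = self.a, self.b, self.c, self.d
        e, f, g, h = o.a, o.b, o.c, o.d
        return Q(a*e - b*f - c*g - d*h,
                 a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f,
                 a*h + b*g - c*f + d*e)

    def conj(self):
        # q-bar: negate the vector part
        return Q(self.a, -self.b, -self.c, -self.d)

    def norm2(self):
        # |q|^2 = q * q-bar
        return self.a**2 + self.b**2 + self.c**2 + self.d**2

    def inv(self):
        # q^{-1} = q-bar / |q|^2 (q != 0)
        n, cj = self.norm2(), self.conj()
        return Q(cj.a/n, cj.b/n, cj.c/n, cj.d/n)

    def tup(self):
        return (self.a, self.b, self.c, self.d)

i, j, k = Q(0, 1, 0, 0), Q(0, 0, 1, 0), Q(0, 0, 0, 1)
print((i*j).tup())  # (0, 0, 0, 1), i.e., i*j = k

# k-involute: q-hat = -k*q*k flips the signs of the i and j components only.
q = Q(1, 2, 3, 4)
m = k * q * k
print((-m.a, -m.b, -m.c, -m.d))  # (1, -2, -3, 4)
```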

A typical illustrative example that demonstrates the benefits of semantic dimensionality raising from R to H is shown for the case of isotropic thermo-elasticity (without heat conduction) in Fig. 8, where the case of quasi-static elasticity with body forces is considered under constant temperature throughout the domain. In R, the solution comes from solving the three partial differential equations of equilibrium folded into the momentum equation. However, in H only a system of five algebraic equations (called the generalized K–M relations [20] for the isotropic case) needs to be evaluated in terms of three quaternionic potential functions.

Fig. 8
Schematic description of the effects that dimensionality raising from R to H has for the case of the isotropic thermoelasticity without conduction. Instead of solving PDEs, we only need to identify quaternionic potentials that, when replaced into an algebraic system of equations, the unknown quantities (stress and displacement components) can be determined by simple evaluation.

Syntactic Embedding Dimensionality Raising Via the Equations to Graphs Framework

Equations to Graphs Architecture and Functionality Description.

The benefits of raising the syntactic embedding dimensionality of equations have been recognized since the middle of the twentieth century—as described in the Introduction. This necessitates rewriting theoretical formalisms originally developed using variables defined over R in the form of equational theories expressed in 1D space. These may be rewritten in the form of 2D or 3D ASGs, where the labels of both the nodes and the edges are defined over hypercomplex algebras. For achieving this automated embedding dimensionality raising from equations to graphs, the module e2g was designed and developed within the Mathematica symbolic algebra system [72]. In general, the e2g module is a multi-modal interactive tool for equational parsing, ASG-DAG building, manipulation, and computation. More specifically, the e2g system implements and executes the following functionalities:

  • Parses a set of equations describing a particular problem in continuum mechanics provided as an input in LaTeX format.

  • Rewrites these equations in the form of a 2D or 3D DAG based on user preferences.

  • Enables utilization of graph embedding algorithms for generating alternate 2D or 3D representations. Embeddings that apply a physical parameterization (e.g., elastic-spring physics on the edges and Coulomb electrostatic fields on the nodes) to control the graph layout are particularly effective.

  • Enables successive composition of sub-problem sub-graphs into the full ASG to prevent cognitive overload.

  • To endow this DAG with the ability for direct composition and computability, it expresses all symbols related to edge and node labels in terms of quantities defined over the appropriate hypercomplex algebra (C for 2D problems and H for 3D problems).

  • Utilizes edge typing to distinguish between tensor multiplication and addition to enable tensor polynomial expressions in the context of the ASG.

  • Allows expression synthesis based on the selection of “target” and “source” nodes in the ASG. These user-selected nodes correspond, respectively, to what needs to be computed and to the known quantities from which it is computed.

  • Facilitates and implements a path-finding algorithm for connecting the selected source and target nodes to enable symbolic and numerical computing based on ASG traversal.

  • Provides the capability of automatic expression term substitution and simplification to produce optimized human-parsable output.

  • Enables capability for isolation and display of individual sub-graphs of interest.

  • Provides an interactive output expression manipulator.

  • Enables interactive 3D visualization (rotation, panning, scaling of graphs) and computing functionality to gain an understanding of the underlying problem structure easily.

  • Provides first-person-view ASG-DAG visualization intended for immersive virtual-reality environments.

  • Provides the user with preferences for sizing, styling, and placing elements of the resulting ASG-DAGs.

Semantics and Operational Outline of Algebraic Solution Graphs.

The basic ASG semantics implemented by e2g are as follows. Each directed edge of the graph connects an in-node to an out-node in that direction only. If a directed edge between two nodes is of type 0 (denoted by a solid line), to obtain the quantity signified by the out-node label, the value of the edge label must be applied to the value of the in-node label. Here “apply” means either tensor multiplication or function application. If the arrow between two nodes is of type 1 (a dashed line), then the value of the out-node label is calculated by summing the quantities obtained by applying the edge labels of all type 1 edges incident to the out-node in question to the values of their respective in-node labels.

ASGs enable computability over graph paths (thus permitting the name “solution” graphs) in addition to their abilities for relational representation of relevant quantities. This is because the node labels correspond to tensors, and the edge labels correspond to function application of three types: tensor product, tensor sum, or substitution. When this is considered from the perspective of the possibility that the tensors are defined over C (for 2D problems) or over H (for 3D problems), formulations by the use of e2g lead to compact and invariant tensor formulations as described separately in Ref. [1].
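A toy evaluator makes these edge-type semantics concrete (illustrative Python, not e2g itself; scalar node values stand in for tensors, and "apply" is plain multiplication):

```python
values = {'x1': 2.0, 'x2': 3.0}       # known (source) node values
edges = [                              # (in_node, out_node, label, edge_type)
    ('x1', 'y', 5.0, 1),               # type 1 (dashed): summed into 'y'
    ('x2', 'y', 4.0, 1),               # type 1 (dashed): summed into 'y'
    ('y',  'z', 0.5, 0),               # type 0 (solid): label applied to 'y'
]

def evaluate(node):
    # Recursively evaluate a node by following its incident edges.
    if node in values:
        return values[node]
    incident = [e for e in edges if e[1] == node]
    if incident[0][3] == 1:            # type 1: sum over all incident edges
        return sum(lab * evaluate(src) for src, _, lab, _ in incident)
    src, _, lab, _ = incident[0]       # type 0: single application
    return lab * evaluate(src)

print(evaluate('z'))                   # 0.5 * (5*2 + 4*3) = 11.0
```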

An illustrative example demonstrating the benefits of syntactic embedding dimensionality raising, from traditional equations in 1D form to an ASG-DAG form via utilization of the e2g module, is displayed in Fig. 9. This figure shows the graph representation of quasi-static anisotropic elasticity without body forces for both the 2D and 3D visualization spaces implemented by e2g. As shown in detail in Ref. [1], the 1D equational form of this anisotropic elasticity CMM is given by
$V_u(z_n)=T_{up}(z_n)\circ\vert_p$
(6)
$V_u(z_n)=\hat{0}$
(7)
$E_{st}(z_n)=T_{up}(z_n)\circ S_{upst}$
(8)
$T_{up}(z_n)=E_{st}(z_n)\circ C_{upst}$
(9)
$\phi(z_n)=E_{st}(z_n)\circ\vert_s\vert_t$
(10)
$\phi(z_n)=\hat{0}$
(11)
$E_{st}(z_n)=D_t(z_n)\circ\vert_s+D_s(z_n)\circ\vert_t$
(12)
$E_{ss}(z_n)=D_s(z_n)\circ\vert_s$
(13)
$D_s(t)=D_s(z_n)$
(14)
with all quantities defined properly in the Appendix of Ref. [1].
Fig. 9
ASG of differential constraints for anisotropic elasticity in (a) 2D and (b) 3D spaces

Note the idiomatic use of the operators $=$, $+$, and $\circ$ to define each equation. Each equation has the form $y = x_1\circ f_1 + x_2\circ f_2 + \cdots$, where $y$ and the $x_i$ are expressions corresponding to solution graph vertices, and the $f_i$ correspond to solution graph edges. An equation of the form $y=\hat{0}$ indicates that the node $y$ is to be taken as the appropriate zero-valued scalar, vector, or tensor. In most regular cases, no symbol is used for the operator $\circ$. However, here this symbol had to be introduced to enable the parser of e2g to consume the respective token and disambiguate it from empty space.

It should be noted that the highlighted paths in Fig. 9 are computed by e2g when a user selects a target and a source node to denote what needs to be computed (target node) from what is known (source node). In the example of Fig. 9, the user requested the evaluation of the stress tensor Tup(zn) from the displacements Ds(zn). The e2g tool automatically invokes the application of Dt(zn) that is necessary for computing the intermediate node Est(zn) and highlights the path connecting the origin Ds(zn) with the target Tup(zn). The 1D equational form equivalent to this path is displayed in the bottom-left corner of the visualization.
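The path computation just described reduces to a graph search. The sketch below (illustrative Python, not e2g; the node names and edge labels loosely mirror Eqs. (6)–(14)) recovers the same displacement-to-stress route via breadth-first search:

```python
from collections import deque

graph = {                                          # node -> [(edge_label, next_node)]
    'D_s':  [('symmetrized gradient', 'E_st')],    # cf. Eq. (12)
    'E_st': [('C_upst', 'T_up'), ('|_s |_t', 'phi')],  # cf. Eqs. (9), (10)
    'T_up': [('|_p', 'V_u'), ('S_upst', 'E_st')],      # cf. Eqs. (6), (8)
}

def find_path(src, dst):
    """Return a list of nodes from src to dst, or None if unreachable."""
    seen, queue = {src}, deque([(src, [src])])
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for _, nxt in graph.get(node, []):
            if nxt not in seen:          # cycles (Eqs. (8)-(9)) are handled
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

print(find_path('D_s', 'T_up'))          # ['D_s', 'E_st', 'T_up']
```

The path from the displacements to the stresses passes through the intermediate strain node, matching the highlighted route in Fig. 9.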

Equations to Graphs Utilization Example for Anisotropic Elastic Media.

The challenge problem selected to demonstrate the power of the proposed approach and the utilization of e2g is that of two semi-infinite bonded anisotropic elastic domains with partial cracks along the interface, loaded at infinity with a tensile load inclined to the global frame of reference. This problem was selected because, in the general case, it is associated with many bimaterial applications involving multiphysics of temperature, moisture, and electric field excitation in addition to mechanical excitation, and it is relevant to composite materials, additively manufactured parts, piezo-ceramics, sensors, actuators, etc. The particular version of the problem focusing on the continuum mechanics aspects was selected to ensure verification based on previous work [62].

This problem is described by a composition of various sub-problems involving 123 tensor equations that, in turn, involve tensorial quantities and operators defined over the field of the complex numbers C. In the process of exploiting the current version of e2g, several sub-problems were constructed in the form of respective sub-graphs. Specifically, the Hilbert Arc formalism associated with this problem is shown in Fig. 10. Similarly, Fig. 11 shows the boundary displacement vector sub-graph, the differential constraints sub-graph, and the boundary stress vector sub-graph.

Fig. 10
Hilbert arc subgraph for the anisotropic half-planes with interfacial cracks problem, as produced by e2g
Fig. 11
Boundary displacement vector, differential constraints, and boundary stress vector sub-graphs for anisotropic half-planes with semi-infinite interfacial cracks loaded at infinity problem, as produced by e2g

Since e2g implements automatic composition of these sub-graphs, the ASG formalism of the complete problem can now be constructed automatically, by providing the LaTeX source of the 123 algebraic tensor equations representing the formulation of the problem as presented in Refs. [61–63], and is shown in Fig. 12. Clearly, the complexity of such ASGs may be cognitively overwhelming. For this reason, e2g includes a first-person-view capability within a virtual or augmented reality user interface.

Fig. 12
View of the complete ASG for the problem of the bonded anisotropic media with semi-infinite cracks on the interface loaded at infinity, as produced by e2g

Bottom-Up Metacomputing

Objectives for the Bottom-Up Architecture Approach.

The goal of what is proposed herein is to remove the user from the helicoidal loop of creating, composing, deploying, and executing a CMM based on multiphysics modeling functionality encapsulated in legacy codes, as shown in Fig. 1. Therefore, the two objectives of our effort are to:

  • Identify which human activities and processes are amenable to abstraction and transition to a computational fabric that executes metacomputing operations, in the spirit that the computer generates at the meta-level what is to be computed at the actual computational level associated with traditional computing activities, in a manner that addresses the issues mentioned earlier.

  • Implement a prototype of this meta-computing infrastructure and verify its functionality and efficiency.

Prior Work and State of the Art.

The BUAA is being pursued in an effort to address the many challenges associated with the specialized-knowledge overload required for manually deriving, deploying, and executing physics models. Many investigators and teams have developed systems automating various aspects of computational model generation, composition, and execution for the associated problem solving. These efforts can be classified based on the granularity of the functionality encapsulations, the ancillary utilities, and the mode of usage and abstraction. The six main categories are unstructured libraries, domain-specific libraries, legacy-driven computer-aided engineering systems that evolved to become CMM-capable, dedicated PDE solver systems, multiphysics environments focusing on the solution of systems of PDEs, and, finally, problem solving environments. They have all been described in detail elsewhere [2].

The most common limitation of these systems is that they do not address holistically all bases of the CMM context space as defined in Ref. [2]. Another major limitation is that these systems require intensive participation of the user, not only in defining the problem to be solved but also in all the other details associated with the optimal deployment of the software on the available computational and networking infrastructure.

To address the issues associated with the user’s burden of composing and deploying directly computable CMMs from pre-existing computational units, in a manner that does not require the user to know all the details pertaining to the choices involved in the CMM context space as described in Ref. [2], we propose an approach based on performing operations on specifications of computational units, with the operations defined in the context of category theory.

The following section describes the approach followed for the software design and development needed to perform the required metacomputing operations.

BUAA Approach Outline

BUAA Technical Issues.

The two most important technical issues associated with the ability to implement a BUAA are as follows:

  1. Lack of context-free symbolic constructs capable of representing the semantics of the functionality of existing multiphysics computational modules: Efforts have been made to demonstrate metaprogramming, program synthesis, and automated software generation via various methodologies (mainly for business applications) by enabling computing over the specifications of the respective software modules. However, to the proposers’ knowledge, there has never been an effort to express the specification of existing multiphysics aware modules in a manner that captures their composability with others. An abstraction is required to capture both the external and the internal functional specification for each computational module. It has to be context-free so that computation over the relevant expressions can be consumed unambiguously by a metacomputing framework.

  2. Lack of ability to perform metacomputing on specification entities representing functional modules, problem descriptions, and computational resources: Although compositional frameworks for developing applications have been demonstrated when specifications of the available modules are provided, they have been limited to non-multiphysics applications and do not generate provably correct constructs [87–90]. The realization that the co-limit operator in category theory (CT) enables composition/integration was introduced by the software development community two decades ago [91–95]. The suitability and feasibility of utilizing categorical operators for engineering applications have been demonstrated for the case of designing a composite panel by the PI in Ref. [96].

Overview of the Approach.

The proposed approach involves the consideration of multiple specification meta-levels and computation over specifications. Two main constructs are proposed to be developed and integrated for addressing the above-mentioned issues.

To address the first issue, we defined and adopted an abstract atomic construct directly encapsulating the composition of the tuples associated with the discrete event system specification (DEVS) and the differential equation system specification (DESS), as defined in the theory of modeling and simulation [97] and shown in Fig. 13. The DEVS specification is defined [97] as the tuple
DEVS = ⟨X, S, Y, δ_int, δ_ext, λ, ta⟩
(15)
where X, S, Y, δ_int, δ_ext, λ, and ta represent respectively the set of inputs, the set of states, the set of outputs, the internal transition function δ_int: S → S, the external transition function δ_ext: Q × X → S (where Q = {(s, e) | s ∈ S, 0 ≤ e ≤ ta(s)} is the total set of states, with e being the elapsed time since the last transition), the output function λ: S → Y, and the time advance function ta: S → R⁺₀, mapping into the set of non-negative reals. The DESS specification is defined as the tuple
DESS = ⟨X, Y, Q, f, λ⟩
(16)
where X, Y, Q, f, and λ are respectively the set of continuous inputs, the set of continuous outputs, the set of states, the rate of change function f: Q × X → Q, and the output function, either of Moore type λ: Q → Y or of Mealy type λ: Q × X → Y. The composition of the DEVS and DESS constructs displayed in Fig. 13 is capable of accepting vectors of both discrete and continuous inputs and of providing both discrete and continuous outputs. This enables the atomic composition to contain the semantics for representing the processes associated with computational module abstractions and implementations.
Fig. 13
Graphical depiction of the atomic specification to be considered as a composition of the DEVS and the DESS specifications for each process within the meta-computing layers

To address the second issue, we considered the instances of the atomic entities encapsulating functional specifications as objects in the category of specifications, and then utilized compositional software (such as SPECWARE [93]) implementing categorical operators, such as the “limit” and “colimit” constructs, to generate provably correct deployments of modular assemblies of CMMs that effectively function as Directly Computable Multiphysics Models (DCMMs), as shown in Fig. 14. Intrinsic to CT-based composition is that it ensures provably correct deployments, effectively acting as a CMM compiler for generating DCMMs. This approach is anticipated to automatically produce the code needed to enable the underlying composability required. The proposed CT-based approach will enable automated generation of missing files or scripts that implement the composition of pre-existing modules, as well as of data transformation modules ensuring compatibility of the domains and co-domains associated with the outputs and inputs of communicating modules.

Fig. 14
(a) Typical representation of the Colimit operation within CT, (b) Colimit application for the design of complex object T4 from simpler ones (T1, T2, T3) via commutative diagrams of the functors si, and (c) Colimit application for developing a composition of three models, one referring to the aerothermostructural model encapsulated in CMSoft AERO-S [33], and another for the fluid domain CMM encapsulated in CMSoft AERO-F [34] and the computable interface model needed for passing data from one code to the other

Architecture Outline.

The outline of the architecture of the proposed computational infrastructure from the user perspective is depicted in Fig. 15. The core functional process of this architecture is the node “meta-computing synthesis and composition,” which serves as a model compiler that takes the four specification descriptions (of the physics problem, the numerical problem, the software resources, and the hardware resources) and produces the respective DCMM. It should be noted that the system is designed such that the user has access through the GUI to the meta-level computing control widgets expressing the physics encapsulation and relevant workflows, as well as to the results produced when the DCMM is executed. The specifications and descriptions are captured in a representation that has both a user-accessible form and an internal form to be consumed by the meta-computing synthesis and composition process.

Fig. 15
Outline of initial high-level computational infrastructure from the user’s perspective

The challenges associated with the bottom-up approach have been identified to be the following:

  1. Non-existence of a unique manner for encoding computable specifications.

  2. Non-existent specifications of computational resources (numerical, software, hardware).

  3. No access to legacy code architects/developers regarding the structure and intent of internal modules in legacy codes.

To address these challenges, we have developed a plan that focused on:

  1. Exploring specification-capturing methodologies and selecting one to implement for encoding computable specifications.

  2. Utilizing the selected methodology to capture the specifications of computational resources (numerical, software, hardware).

  3. Obtaining access to the AERO suite architects/developers to capture the specifications of the structure and intent of internal modules.

Furthermore, as part of the effort to identify and develop the initial metacomputing infrastructure associated with the bottom-up approach, the following considerations have been taken into account. In addition to the categorical specification semantics layer and the DEVS & DESS specification layer, the “L3: Work/Data Flow Specific to Legacy Codes” layer has been added to represent the workflow required to be followed among the modules of the application runtime layer in the “Computing” layer in the bottom partitioning of Fig. 16.

Fig. 16
Outline block diagram of the internal architecture of the metacomputing synthesis and composition node of Fig. 15, depicting the internal form of the meta-computing and computing layers of the proposed framework along with the associated transformation meta-processes

It is important to highlight here that the main computing infrastructure in the “application runtime layer” will be the modules of the AERO suite of codes developed by CMSoft Inc. A more detailed description of the AERO suite that justifies its selection is presented in Ref. [2].

Consequently, the internal architecture of the metacomputing synthesis and composition node of Fig. 15 is shown in Fig. 16. The left side of this diagram represents the internal metacomputing representation of the overall semantic specification computing, where two metacomputing levels are shown: the required specifications for the physics composition involved (level L1) and the corresponding computational resources specification captured via the DEVS and DESS representations. The right side of the diagram in Fig. 16 shows the user-exposed syntactic form of the physics specification widgets, both in terms of the level L2 layer of the workflow entities as perceived by the user and in terms of the level L3 workflow as expected by the legacy code, which corresponds to that of level L2. At the bottom of the right side of Fig. 16, the computing layer is depicted, which contains the scripts generated by the meta-computing layers along with all input files required to run the composition of all executables associated with the legacy code that represent the embodiments of the DCMM. The bottom left of Fig. 16 presents the legends associated with the L0 and L1 metacomputing layers and the computing layer.

Important Implementation Details.

The view presented in the right side of Fig. 16 splits the representation plane into the computing and specification/metacomputing layers. The top level of the specification and metacomputing layer enables the user to assemble the meta-specification of a physics problem of interest in terms of physics-relevant computational abstractions. This is achieved by using a meta-meta integrated development environment (m2IDE) capable of capturing the Work Flow for the Physics Formulation Specification (WFPFS) in the form of a 2D graph capturing the workflow between physics-specific entities that a multiphysics domain expert is already aware of. We will therefore be calling this tool the WFPFS-m2IDE.

The output of this facility, when passed through the “Graph Transformer 1,” will be a lower-level meta-specification representation of the interconnectivity between the various files and modules of the legacy code (in our case the AERO suite) entities. This representation will also have a graph view that may be inspected by the user in graphical form. We will be calling this facility the Legacy Code Workflow Specification meta-IDE (LCWS-mIDE).

The output of these resources will generate all the Bash (or other shell) scripts required to automate the binding of all data files and legacy code modules for producing the solution of the defined problem. We will be referring to this script as the runtime implementation script.

Finally, this script spawns the executables of the legacy code modules and performs the necessary file system input/output, while it will be endowed with facilities to handle the idiosyncrasies of the networking and runtime fabrics, thus implementing the respective DCMM.

Work Flow for Physics Formulation Specification m2IDE and Legacy Code Workflow Specification mIDE.

The need for the agile development of the work flow for physics formulation specification meta-meta-IDE and the LCWS-mIDE environments led to a comparative evaluation of available infrastructures for code development.

The implementation language for all metacomputing infrastructure resources was selected to be C++, due to its maturity and the optimization flexibility of the generated binaries across the widest range of computing architectures.

Based on the software development criteria described in Ref. [2], we also selected the Qt Creator integrated development environment, given our familiarity and prior experience with it.

A critical component of the required toolchain is the library enabling the development of 2D graphs, as they are necessary for the graphical specification of the multiphysics problem in the WFPFS-m2IDE and the legacy code workflow specification in the LCWS-mIDE. Our final choice was the “nodeeditor” library, which already had built-in Qt support.

The core functionality of both the WFPFS-m2IDE and the LCWS-mIDE is common, and for the sake of space we will only describe it from the perspective of the WFPFS-m2IDE, where the user can define the physics problem in terms of the physics specification of a problem of interest.

The view of the graphical user interface of the WFPFS-m2IDE is presented in Fig. 17, where the toolbar, the graph, and the text pane areas are shown. The user uses the widgets of the toolbar to create, open, and save files containing the graphs created or modified in the graph pane. The toolbar also contains node editing widgets for copying and pasting nodes, as well as zoom-to-fit, style management, and undo and redo controls. The graph pane is used to create nodes with input and output ports, and click-and-drag gestures create port connections depicted as arrows representing the flow of data. Finally, the text pane was added to help the user see the dynamic creation of the input files required for implementing the runtime configuration governing the execution of the directly computable CMM that corresponds to the physics problem created in the graph pane.

Fig. 17
Graphical user interface of the WFPFS-m2IDE for enabling capture of the physics problem specification, depicting the toolbar, graph, and text panes

Figure 17 also shows the specific physics workflow specification of the Conjugate Heat Transfer (CHT) problem discussed in Ref. [2]. The corresponding module connectivity and data flow architecture of the legacy AERO suite modules that is created by the WFPFS-m2IDE for the LCWS-mIDE level is presented in Fig. 18.

Fig. 18
AERO suite modules data flow reflecting the LCWS-mIDE level, corresponding to a hypersonic CHT problem

Conclusions

From a top-down architecture approach perspective, the work presented here attempts to demonstrate that the media we select to develop and express CMMs can limit our ability to solve problems. These selections also limit our ability to see a problem from other semantic and syntactic perspectives that may enable much easier problem solutions and enable direct computability. Consequently, this effort focused primarily on describing the architecture, development, demonstration, and performance evaluation of a metacomputing framework from both a top-down and a bottom-up perspective.

The top-down framework generates the forms of directly computable CMMs that, in turn, can address problem solutions in continuum multiphysics at the computing level. This framework comprises three metacomputational modules. The prototyping of the metacomputing and computing layers associated with these modules was implemented in the Mathematica symbolic environment.

The first metacomputing module is the CMEB, which derives the constitutive and field equations to be solved for particular continuum multiphysics problems at the computing level. The quantities (state variables and relevant operators) are defined within the algebra of reals, R.

The second metacomputing module is the R2hC projector, which maps field equations derived initially in terms of state variables defined within the algebra of reals R to field PDEs and/or algebraic equations with variables defined in the algebra of complex numbers C for 2D problems and of quaternions H for 3D problems. Thus, this module effectively increases the semantic dimensionality of the applicable formulation algebra and reduces complex forms to simpler ones within the scope of the hypercomplex algebras. This module also has a computing layer invokable by the user, if desired, for solving specific problems, and it demonstrates directly computable CMM capability. This module has been demonstrated for some 2D problems and needs to be extended further to address 3D problems.

The third metacomputing module is the e2g, which converts equational theories expressed as conjunctions of equations written in the traditional 1D form into ASGs, which are DAGs with embedded computability. Solving a problem has been mapped to the operation of following a path between a source node representing the known quantities and a target node representing the unknown quantity to be computed. In this manner, e2g enables directly computable CMMs and effectively increases the syntactic dimensionality of model representations from 1D to 2D and 3D. This module needs to be extended further for 3D problems involving quaternionic quantities to achieve its originally intended functionality.

From a bottom-up architecture approach perspective, the work presented here attempts to demonstrate that when legacy codes are desired for CMM computing, categorical metacomputing over specifications, along with properly designed integrated development environments (IDEs), can enable metacomputing that takes the user out of the iterative role in the context space of CMM computing. In particular, a multilayer metacomputing architecture was proposed that enables the user to utilize the specifications of the available resources to generate directly computable CMMs with the help of the work flow for physics formulation specification and legacy code workflow specification meta-IDEs.

All the metacomputing facilities presented here are characterized by the unique feature of taking the user out of the loop for constructing, composing, and deploying CMMs, and they do so such that CMMs appear to be directly computable at a fraction of the time required if these technologies had not been utilized in the first place. This recent experience indicates that the opportunities for exploiting metacomputing have just begun, and they can only be generalized and refined further to enable a new dimension of utilization of computational resources and user experience.


Acknowledgment

The authors would like to acknowledge the support of this effort by the Defense Advanced Research Projects Agency under solicitation PA-19-02 via MIPR HR0011046726, and by the Office of Naval Research via the core funding of the US Naval Research Laboratory. JGM would like to express his deep gratitude to Dr. P. W. Mast (NRL, retired), whose unparalleled insight and vision in the 1990s both fed and inspired the thirst for discovering and acting on ideas related to the role the media of mathematics can have in limiting or benefiting research and development activities associated with CMM representation. Finally, JGM would like to also express his deep appreciation to Dr. R. Badaliance (NRL, retired) for enabling and encouraging JGM’s professional focus on the topics related to the present work.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1.
Michopoulos
,
J. G.
,
Apetre
,
N. A.
,
Steuben
,
J. C.
, and
Iliopoulos
,
A. P.
,
2023
, “
Top-Down Metacomputing With Algebraic Dimensionality Raising for Automating Theory-Building to Enable Directly Computable Multiphysics Models
,”
J. Comput. Sci.
Accepted
.
2.
Michopoulos
,
J. G.
,
Iliopoulos
,
A. P.
,
Avery
,
P.
,
Daeninck
,
G.
,
Farhat
,
C.
,
Steuben
,
J. C.
, and
Apetre
,
N. A.
,
2023
, “
Bottom-Up Hierarchical and Catergorical Metacomputing for Automating Composition and Deployment of Directly Computable Multiphysics Models
,”
J. Comput. Sci.
In Review
.
3.
Michopoulos
,
J. G.
,
Farhat
,
C.
, and
Fish
,
J.
,
2005
, “
Modeling and Simulation of Multiphysics Systems
,”
ASME J. Comput. Inf. Sci. Eng.
,
5
(
3
), pp.
198
213
.
4.
Kolosov
,
G. V.
,
1909
, “
On a Application of Complex Function Theory to a Plane Problem of the Mathematical Theory of Elasticity
,” (In Russian), Ph.D. Thesis, Dorpat (Yuriev) University, Tartu, Estonia.
5.
Muskhelishvili
,
N. I.
,
1953
,
Some Basic Problems of the Mathematical Theory of Elasticity
,
Vol. 15
,
Noordhoff
,
Groningen, The Netherlands
, 3rd Revised and Augmented ed. Moscow, 1949.
6.
Pearson
,
C.
,
1959
,
Theoretical Elasticity
,
Harvard University Press
,
Cambridge, MA
.
7.
Green
,
A. E.
, and
Zerna
,
W.
,
1968
,
Theoretical Elasticity
, 2nd ed.,
Clarendon Press
,
Oxford, UK
.
8.
England
,
A.
,
2003
,
Complex Variable Methods in Elasticity
,
Dover Publications Inc.
,
New York
.
9.
Michopoulos
,
J.
,
2004
, “
Pathology of High Performance Computing
,” Keynote Presentation at the 2004 International Conference on Computational Science, June.
10.
Sadd
,
M. H.
,
2021
,
Elasticity: Chapter 10 - Complex Variable Methods
, 4th ed.,
Academic Press
,
Cambridge, MA
.
11.
Parkus
,
H.
,
1976
,
Thermoelasticity
, 2nd ed.,
Springer-Verlag
,
Wien
.
12.
Nowacki
,
W.
,
1986
,
Thermoelasticity
, 2nd ed.,
Pergamon Press
,
Oxford, UK
.
13.
Sih
,
G. C.
,
Michopoulos
,
J. G.
, and
Chou
,
S. C.
,
1986
,
Hygrothermoelasticity
,
Springer
,
Netherlands
.
14.
Kundu
,
P. K.
,
Cohen
,
I. M.
, and
Dowling
,
D. R.
,
2012
,
Fluid Mechanics: Chapter 6 – Ideal Flow
, 5th ed.,
Academic Press
,
Boston, MA
.
15.
Pimenov
,
A. A.
, and
Pushkarev
,
V. I.
,
1991
, “
The Use of Quaternions to Generalize the Kolosov-Muskhelishvili Method to Three-Dimensional Problems of the Theory of Elasticity
,”
J. Appl. Math. Mech.
,
55
(
3
), pp.
343
347
.
16.
Fokas
,
A. S.
, and
Pinotsis
,
D. A.
,
2007
, “
Quaternions, Evaluation of Integrals and Boundary Value Problems
,”
Comput. Meth. Funct. Theory
,
7
(
2
), pp.
443
476
.
17.
Pinotsis
,
D. A.
,
2010
, “
Quaternionic Analysis, Elliptic Problems and a Physical Application of the Dbar Formalism
,”
Adv. Appl. Clifford Algebras
,
20
(
3–4
), pp.
819
836
.
18.
Okay
,
F.
,
2010
, “
A New Model in Stress Analysis: Quaternions
,”
Sci. Res. Essays
,
5
(
23
), pp.
3711
3718
.
19.
Pinotsis
,
D. A.
,
2012
, “
Commutative Quaternions, Spectral Analysis and Boundary Value Problems
,”
Compl. Variab. Ellip. Equ.
,
57
(
9
), pp.
953
966
.
20.
Weisz-Patrault
,
D.
,
Bock
,
S.
, and
Gürlebeck
,
K.
,
2014
, “
Three-Dimensional Elasticity Based on Quaternion-Valued Potentials
,”
Int. J. Solids Struct.
,
51
(
19
), pp.
3422
3430
.
21.
Liu
,
L. W.
, and
Hong
,
H. K.
,
2014
, “
A Clifford Algebra Formulation of Navier-Cauchy Equation
,”
Procedia Eng.
,
79
(1st ICM), pp.
184
188
.
22.
Grigoriev
,
Y.
,
2015
, “
Radial Integration Method in Quaternion Function Theory and Its Applications
,”
Int. Conf. on Numerical Analysis and Applied Msthematics 2014 (ICNAAM-2014)
,
Rhodes, Greece
,
Sept. 22–28
, p.
440003
.
23.
Gürlebeck
,
K.
, and
Nguyen
,
H. M.
,
2015
, “
Ψ -Hyperholomorphic Functions and an Application to Elasticity Problems
,”
AIP Conf. Proc.
,
1648
.
24.
Grigor’Ev
,
Y.
,
Gürlebeck
,
K.
, and
Legatiuk
,
D.
,
2018
, “
Quaternionic Formulation of a Cauchy Problem for the Lamé Equation
,”
AIP Conf. Proc.
,
1978
, pp.
1
5
.
25.
Grigor’Ev
,
Y. M.
,
2018
, “
Quaternionic Functions and Their Applications in Mechanics of Continua
,”
AIP Conf. Proc.
,
2041
.
26.
Gürlebeck
,
K.
, and
Legatiuk
,
D.
,
2019
, “
Quaternionic Operator Calculus for Boundary Value Problems of Micropolar Elasticity
,” In
Topics in Clifford Analysis
,
S.
Bernstein
, ed., Vol.
10
in Trends in Mathematics, Birkhauser, pp.
221
234
.
27.
Yakovlev
,
A.
, and
Grigor’Ev
,
Y.
,
2020
, “
Three-Dimensional Quaternionic Kolosov-Muskhelishvili Formulae in Infinite Space With a Cavity
,”
AIP Conf. Proc.
,
2293
.
28.
Danielewski
,
M.
, and
Sapa
,
L.
,
2020
, “
Quaternions and Cauchy Classical Theory of Elasticity
,”
Adv. Manuf. Sci. Technol.
,
44
(
2
), pp.
67
70
.
29.
Grigor’Ev
,
Y.
,
2018
, “
Quaternionic Functions and Their Applications in a Viscous Fluid Flow
,”
Complex Anal. Oper. Theory
,
12
(
2
), pp.
491
508
.
30.
Grigor’Ev
,
Y.
,
Gürlebeck
,
K.
,
Legatiuk
,
D.
, and
Yakovlev
,
A.
,
2019
, “
On Quaternionic Functions for the Solution of an Ill-Posed Cauchy Problem for a Viscous Fluid
,”
AIP Conf. Proc.
,
2116
, pp.
1
5
.
31.
Singh
,
A.
,
1981
, “
Quaternionic Form of the Electromagnetic-Current Equations With Magnetic Monopoles
,”
Lettere Al Nuovo Cimento Series 2
,
31
(
5
), pp.
145
148
.
32.
Waser
,
A.
,
2000
, “
Quaternions in Electrodynamics
,” Self Published Online: http://www.aw-verlag, pp.
1
14
.
33.
Jack
,
P. M.
,
2003
, “
Physical Space as a Quaternion Structure, I: Maxwell Equations. A Brief Note
,” arXiv:math-ph/0307038(5), pp.
1
6
.
34.
Acevedo
,
M.
,
López-Bonilla
,
M. J.
, and
Sánchez-Meraz
,
M.
,
2005
, “
Quaternions, Maxwell Equations and Lorentz Transformations
,”
Apeiron
,
12
(
4
), pp.
371
384
.
35.
Sweetser
,
D. B.
,
2005
, Doing Physics With Quaternions. Self-Published.
36.
Smarandache
,
F.
, and
Christianto
,
V.
,
2010
, “
A Derivation of Maxwell Equations in Quaternion Space
,”
Prog. Phys.
,
2
(
6
), pp.
23
27
.
37.
Christianto
,
V.
,
2010
, “
A Derivation of Maxwell Equations in Quaternion Space
,”
Prog. Phys.
,
2
, pp.
23
27
.
38.
Rawat
,
A. S.
,
2017
, “
Quaternionic Reformulation of Massive Electrodynamics
,”
Int. J. Pure Appl. Phys.
,
13
(
1
), pp.
1
8
.
39.
Hong
,
I. K.
, and
Kim
,
C. S.
,
2019
, “
Quaternion Electromagnetism and the Relation With Two-Spinor Formalism
,”
Universe
,
5
(
6
), pp.
135
155
.
40.
Giardino
,
S.
,
2020
, “
Quaternionic Electrodynamics
,”
Mod. Phys. Lett. A
,
35
(
39
), p.
2050327
.
41.
Tsalik
,
A.
,
1995
, “
Quaternionic Representation of the 3D Elastic and Thermoelastic Boundary Problems
,”
Math. Meth. Appl. Sci.
,
18
(
9
), pp.
697
708
.
42.
Gurlebeck
,
K.
, and
Sprossig
,
W.
,
1997
,
Quarternionic and Clifford Calculus for Physicists and Engineers
,
John Wiley and Sons
,
New York
.
43.
Euler
,
L.
,
2003
,
Lettres à une princesse d’Allemagne: sur divers sujets de physique & de philosophie
,
PPUR Presses Polytechniques
,
Lausanne
.
44.
Venn
,
J.
,
1881
,
Symbolic Logic
,
Macmillan
,
London
.
45.
Carroll
,
L.
,
1897
,
Symbolic Logic
,
Macmillan
,
London
.
46.
Peirce
,
C.
,
1933
,
Collected Papers
,
Harvard University Press
,
Cambridge, MA
.
47.
Roberts
,
D. D.
,
1973
,
The Existential Graphs of Charles S. Peirce
,
The Hague
,
Mouton, France
.
48.
Zeman
,
J.
,
1964
, “
The Graphical Logic of C. S. Peirce
,” Ph.D. thesis, University of Chicago, Chicago, IL.
49.
Zeman
,
J.
,
1997
, “
Peirce’s Graphs
,”
Conceptual Structures: Fulfilling Peirce’s Dream
,
D.
Lukose
,
H.
Delugach
,
M.
Keeler
,
L.
Searle
, and
J.
Sowa
, eds.,
Springer Berlin Heidelberg
, pp.
12
24
.
50.
Gentzen
,
G.
,
1934
, “
Untersuchungen über das logische schließen. I
,”
Math. Zeitschrift
,
39
(
1
), pp.
176
210
.
51.
Gentzen
,
G.
,
1935
, “
Untersuchungen über das logische schließen. II
,”
Math. Zeitschrift
,
39
(
1
), pp.
405
431
.
52.
Brown
,
G. S.
,
1972
,
Laws of Form
,
Julian Press
,
New York
.
53.
Barwise
,
J.
, and
Etchemendy
,
J.
,
1994
, “
Hyperproof
”. In
CSLI Lecture Notes, 216
, Vol.
216
in CSLI Lecture Notes.
Stanford University
.
54.
Barwise
,
J.
, and
Etchemendy
,
J.
,
1996
, “
Heterogeneous Logic
,” In
Logical Reasoning With Diagrams
,
G.
Allwein
and
J.
Barwise
, eds.
Oxford University Press
.
55.
Allwein
,
G.
, and
Barwise
,
J.
eds,
1996
,
Logical Reasoning With Diagrams
(
Studies in Logic and Computation
),
Oxford University Press
,
Oxford, England
.
56.
Buchberger
,
B.
,
2000
, “
Logicographic Symbols: Some Examples of Their Use in Formal Proofs
,”
Manuscript
.
57.
Buchberger
,
B.
,
2001
, “
Logicographic Symbols: A New Feature in Theorema
,”
Symbolic Computation - New Horizons (Proceedings of the 4th International Mathematica Symposium)
,
Y.
Tazawa
, ed., Copyright:
Tokyo Denki University Press
, pp.
23
30
.
58.
Buchberger
,
B.
,
Dupre
,
C.
,
Jebelean
,
T.
,
Kriftner
,
F.
, and
Nakagawa
,
K.
,
2000
, “
The Theorema Project: A Progress Report
,” In
Symbolic Computation and Automated Reasoning: The Calculemus-2000 Symposium
,
D.
asaru
and
W.
Windsteiger
., eds., pp.
98
113
.
59.
Kron
,
G.
,
1939
,
Tensor Analysis of Networks
,
John Wiley and Sons
,
Hoboken, NJ
.
60.
Paynter
,
H.
,
1961
,
Analysis and Design of Engineering Systems
,
MIT Press
,
Cambridge, MA
.
61.
Mast
,
P. W.
,
1972
, “
Graphs and Tensor Manupulations in Complex Coordinates as Problem Solving Techniques in Plane-Problems of Plane Anisotropic Elasticity
,” Ph.D. thesis,
North Carolina State University
,
Raleigh, NC
.
62.
Mast
,
P. W.
,
1973
,
Tensor Manipulations in Complex Coordinates With Applications to the Mechanics of Materials
. Technical Report NRL/7537, US Naval Research Laboratory, May.
63.
Mast
,
P. W.
,
1973
, Solution Graphs: Simple Algebraic Structures for Problems in Linear Anisotropic Elasticity. Technical Report NRL/7577, US Naval Research Laboratory.
64.
Tonti
,
E.
,
1972
, “
On the Mathematical Structure of a Large Class of Physical Theories
,”
Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat.
,
52
(
8
), pp.
48
56
.
65.
Tonti
,
E.
,
1972
, “
A Mathematical Model for Physical Theories. Nota I
,”
Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Rendiconti
,
52
(
2
), pp.
175
181
.
66.
Tonti
,
E.
,
1972
, “
A Mathematical Model for Physical Theories. Nota II
,”
Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Rendiconti
,
52
(
2
), pp.
350
356
.
67.
Deschamps
,
G.
,
1981
, “
Electromagnetics and Differential Forms
,”
Proc. IEEE
,
69
(
6
), pp.
676
696
.
68.
Tonti
,
E.
,
2013
,
The Mathematical Structure of Classical and Relativistic Physics
,
Springer
,
New York
.
69.
Vieil
,
E.
,
2007
, “
Introduction to Formal Graphs, a New Approach to the Classical Formalism
,”
Phys. Chem. Chem. Phys.
,
9
(
29
), p.
3877
.
70.
Maugin
,
G. A.
,
1999
,
The Thermomechanics of Nonlinear Irreversible Behaviors: An Introduction
,
World Scientific
,
Hackensack, NJ
.
71.
Neumann
,
F.
, and
Meyer
,
O.
,
1885
, Vorlesungen über die Theorie der Elasticität der festen Körper und des Lichtäthers. Nineteenth Century Collections Online (NCCO): Science, Technology, and Medicine: 1780-1925. B. G. Teubner.
72.
Inc., W. R., 2022. Mathematica, Version 13.2. Champaign, IL.
73.
Eringen
,
A. C.
, and
Maugin
,
G. A.
,
1990
,
Electrodynamics of Continua I. Foundations and Solid Media
,
Springer-Verlag
,
New York, NY
.
74.
Tinder
,
R. F.
,
2007
, “
Tensor Properties of Solids, Part Two: Transport Properties of Solids
,”
Synth. Lect. Eng.
,
2
(
1
), pp.
145
236
.
75.
Clayton
,
J. D.
,
2010
,
Nonlinear Mechanics of Crystals
, Vol.
177
,
Springer
,
Dordrecht
.
76.
Lamport
,
L.
,
1986
,
LATE X: A Document Preparation System
,
Addison-Wesley
,
Reading, MA
.
77.
Westergaard
,
H. M.
,
2021
, “
Bearing Pressures and Cracks: Bearing Pressures Through a Slightly Waved Surface or Through a Nearly Flat Part of a Cylinder, and Related Problems of Cracks
,”
ASME J. Appl. Mech.
,
6
(
2
), pp.
A49
A53
.
78.
England
,
A.
,
1971
,
Complex Variable Methods in Elasticity
,
Wiley-Interscience
,
London
.
79.
Papkovish
,
P.
,
1932
, “
Solution générale des équations differentielles fondamentales d’élasticité exprimée par trois fonctions harmoniques
,”
Compt. Rend. Acad. Sci
,
195
, pp.
513
515
.
80.
Neuber
,
H.
,
1934
, “
Ein neuer ansatz zur lösung räumlicher probleme der elastizitätstheorie. der hohlkegel unter einzellast als beispiel
,”
ZAMM - J. Appl. Math. Mech. / Zeitschrift für Angewandte Mathematik und Mechanik
,
14
(
4
), pp.
203
212
.
81.
Galerkin
,
B.
,
1930
, “
Contribution a la solution generale du probleme de la theorie de l’elasticite dans le cas de trois dimensions
,”
Comptes Rendus de l'academie des sciences
,
190
, pp.
1047
1048
.
82.
Westergaard
,
H.
,
1952
,
Theory of Elasticity and Plasticity
(Vol.
3
,
Harvard Monographs In Applied Science
),
Harvard University Press
,
Cambridge, MA
, p.
12
.
83.
Barber
,
J. R.
,
2010
,
Displacement Function Solutions
,
Springer Netherlands
,
Dordrecht
, pp.
321
332
.
84.
Airy
,
G. B.
,
1862
, “
On the Strains in the Interior of Beams
,”
Thirty-Second Meeting of the British Association for the Advancement of Science
,
Cambridge, UK
,
October
, British Association for the Advancement of Science, pp.
82
86
.
85.
Airy
,
G. B.
,
1863
, “
Iv. on the Strains in the Interior of Beams
,”
Philos. Trans. R. Soc. Lond.
,
153
(
31 December 1863
), pp.
49
79
.
86. Lekhnitskii, S. G., 1963, Theory of Elasticity of an Anisotropic Elastic Body, Holden-Day, San Francisco, CA.
87. Rosen, D. W., and Peters, T. J., 1996, "The Role of Topology in Engineering Design Research," Res. Eng. Des., 8(2), pp. 81–98.
88. Braha, D., and Maimon, O., 1998, A Mathematical Theory of Design: Foundations, Algorithms and Applications, Springer, New York, NY.
89. Braha, D., and Reich, Y., 2003, "Topological Structures for Modeling Engineering Design Processes," Res. Eng. Des., 14(4), pp. 185–199.
90. Le Masson, P., and McMahon, C., 2016, "Armand Hatchuel et Benoit Weil: La théorie C-K, un fondement formel aux théories de l'innovation," Les Grands Auteurs du Management de l'Innovation et de la Créativité, Editions Management et Société, pp. 588–613.
91. Diskin, Z., and Maibaum, T., 2012, "Category Theory and Model-Driven Engineering: From Formal Semantics to Design Patterns and Beyond," Electron. Proc. Theor. Comput. Sci., 93, pp. 1–21.
92. Giesa, T., Spivak, D. I., and Buehler, M. J., 2012, "Category Theory Based Solution for the Building Block Replacement Problem in Materials Design," Adv. Eng. Mater., 14(9), pp. 810–817.
93. McDonald, J., and Anton, J., 2001, SPECWARE - Producing Software Correct by Construction.
94. Williamson, K., Healy, M., and Barker, R., 2001, "Industrial Applications of Software Synthesis Via Category Theory - Case Studies Using Specware," Automat. Softw. Eng., 8(1), pp. 7–30.
95. Williamson, K., 2001, Systems Synthesis: Towards a New Paradigm and Discipline for Knowledge, Software, and System Development and Maintenance, Report, Mathematics and Computing Technology, Boeing Phantom Works.
96. Michopoulos, J. G., Lambrakos, S. G., and Iliopoulos, A., 2010, "On a Data and Requirements Driven Multi-Scale Framework Linking Performance to Materials," ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Montreal, Quebec, Canada, August 15–18, pp. 197–210.
97. Zeigler, B., 2000, Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems, Academic Press, San Diego, CA.