Abstract
When it comes to multiphysics modeling and simulation, the ever-improving advances of computational technologies have forced the user to manage higher resource complexity while simultaneously motivating the modeling of more complex systems than before. Consequently, the time for the user’s iterations within the context space characterizing all choices required for a successful computation far exceeds the time required for the runtime software execution to produce acceptable results. This paper presents metacomputing as an approach to address this issue, starting with a description of this high-dimensional context space. It then highlights the abstract process of multiphysics model generation/solution and proposes performing top-down and bottom-up metacomputing. In the top-down approach, metacomputing is used for automating the process of generating theories, raising the semantic dimensionality of these theories into higher dimensional algebraic systems that enable simplification of the equational representation, and raising the syntactic dimensionality of the equational representation from 1D equational forms to 2D and 3D algebraic solution graphs that reduce solving to path-following. In the bottom-up approach, already existing legacy codes that have evolved over multiple decades are encapsulated at the bottom layer of a multilayer semantic framework that utilizes category theory based operations on specifications, enabling the user to spend time only on defining the physics of the relevant problem without having to deal with the rest of the details involved in deploying and executing the solution of the problem at hand. Consequently, these two metacomputing approaches enable the automated generation, composition, deployment, and execution of directly computable multiphysics models.
Introduction
In the discipline of computational multiphysics, computational technologies have been evolving toward improving the efficiency, accuracy, and scope of computational models that encapsulate the computational representation of various physical systems. However, the great majority of the relevant approaches exploit pre-existing components that enable utilization of specific physics, and they do not address the role of the user in developing, composing, and deploying the resulting computational assemblies. Instead of only spending time to define the problem to be solved or modeled from a physics perspective, the user is also responsible for configuring the necessary input files and computational resources required. Furthermore, the time for composing a theory and developing and deploying its computational implementations is spent iterating in a high-dimensional context space spanned by bases involving choices that need to be made for obtaining acceptable solutions. In most cases, these activities take much longer than the execution of the constructed computational implementation. Paradoxically, however, the great majority of innovations in hardware and software development have focused on improving the runtime execution performance of the relevant simulations and models, and not on improving the truly inefficient part of these models: their development, deployment, and reuse, which consistently takes the user much longer.
In order to address the inefficiency stemming from the user being in an iterative loop for formulating theories and their associated models along with their composition and deployment on available computational fabrics, we have recently proposed that metacomputing approaches are needed to replace activities traditionally accomplished by the user with activities accomplished by the computer, but at a higher level of abstraction [1,2]. In this paper, we present a high-level outline of both top-down and bottom-up metacomputing efforts as an example of moving activity from the user’s brain to the available computational fabric in a manner that computes what needs to be computed. In the top-down context, theories and their models that in some cases take decades to be developed can be derived in minutes or seconds if proper metacomputing approaches are applied. Furthermore, if the user considers their embedding in spaces of higher semantic and syntactic dimensionality, then they reduce to simpler and more powerful representations. More specifically, when the semantic dimensionality of a model is increased from one where all field quantities are defined in the algebra of reals to that of hypercomplex algebras (HAs), the problem to be solved can turn out to be a much simpler one; in some cases, instead of solving the partial differential equations (PDEs), it requires just algebraic evaluation. For example, systems of coupled PDEs describing the spatiotemporal evolution of fields to be determined require numerical solution when the algebra of reals is used. If, alternatively, the theory is projected onto the semantics of complex or quaternion algebras, then the problem may no longer require numerical solution of PDEs; instead, it may require just a straightforward evaluation of a system of algebraic equations expressed in terms of complex or quaternionic potentials. On the syntactic dimensionality side, it can be demonstrated that instead of writing equations from left to right in the usual one-dimensional paradigm, their transformation to a directed acyclic graph (DAG) has the potential to generate algebraic solution graphs (ASGs) where the unknown quantities can be evaluated by simple path traversal between known and unknown quantities. Examples of developing the metacomputing infrastructure that automates the semantic and syntactic dimensionality raising are presented. In the bottom-up context, it is assumed that legacy codes developed over several decades are available and need to be composed and deployed in a manner that takes the user’s iterative adjustment role out of this incremental, inefficient, and painful (for the user) loop. The potential of using resource and problem specifications in the context of category theory is demonstrated as a way to implement metacomputing on specifications. In this manner, the user only needs to spend time defining the problem to be solved or modeled once and lets the proper infrastructure obtain a directly computable model of it. A discussion of the future directions of these efforts closes this paper.
The multiphysics modeling context space will be described briefly to expose the complexity of the decisions a user has to make in order to generate and/or solve a computational multiphysics model (CMM). Then a high-level description of the workflow required to develop and compute multiphysics modeling activities will follow. To demonstrate the feasibility and effectiveness of top-down metacomputing, three metacomputing software modules will be described. They are the “computational multiphysics equation builder” (CMEB) for deriving theories and their equational models; the “reals to hypercomplex” (R2hC) transformer module that raises the semantic dimensionality of an equational model by mapping the state variables participating in the relevant theories, originally defined over the field of reals, to new ones defined over higher dimensional algebras such as the complex numbers (for 2D problems) and the quaternions (for 3D problems); and the “equations to graphs” (e2g) module that embeds 1D equational systems into directed acyclic graphs termed algebraic solution graphs. To demonstrate the feasibility and effectiveness of the bottom-up metacomputing approach, a multi-meta-level architecture of a computational framework is outlined and its first computational implementation is presented. Finally, the present work ends with conclusions.
Multiphysics Modeling Context Space.
Equation-based CMMs are traditionally described in terms of state variables along with an analytical description that represents a logical conjunction of equations being true and forming an equational theory. Furthermore, its computational form encapsulates the numerical form in a software representation to be executed on a hardware infrastructure, as reflected by the labels of the context space bases axes of Fig. 1. These four bases of “physics model specification,” “numerical model specification,” “software model specification,” and “hardware model specification” are themselves aggregated subspaces of collapsed bases, as shown in Fig. 1. The groupings of these sub-bases are depicted in the corresponding groups outlined with distinct rectangles. While the “physics model” contained in the aggregation is termed the “physics specification of analytic model,” the numerical, software, and hardware instances of the model are all grouped to form the “computational model.” Each of these four bases is itself a subspace, spanned by several context sub-bases, and is described in detail elsewhere [1,2].

Four-folded bases of the context space where multiphysics models are embedded along with their respective lists of sub-bases. The analytic physics model specification consists of 12 sub-bases, while the computational model is spanned from three bases, the numerical, the software, and the hardware specification bases each of which consists of 6, 12, and 5 sub-bases respectively, for a total of 35 sub-bases.

It is important to mention here that these four-folded or 35 unfolded bases of the context space depicted in Fig. 1 are not meant to be exhaustively all-inclusive, nor are they fixed in time as computational technologies evolve.
The purpose of describing the context space of an arbitrary CMM here is to draw attention to the fact that the user must make decisions reflecting a succession of points forming a trajectory in this (at least 35-dimensional) context space. If the results of such decisions are not adequate, the user must then continuously iterate and follow a helicoidal path in this space that hopefully converges to an acceptable terminal outcome. This is a very time-consuming process, often unlikely to yield satisfactory results within the desired time requirements.
The last point raised in the previous section can be considered as the main motivation for an alternate approach based on metacomputing. Our experiences over multiple decades of modeling and simulation exercises have revealed that incrementally searching this context space for problem-solution requirements is dramatically more expensive than the actual execution time of the model algorithmics. This is because the user must manually identify, implement, and manage a high-dimensional problem specification without assistance, relying entirely on the user’s prior experience and knowledge. Meanwhile, the computer is used strictly for executing the symbolic or numeric processing associated with the computational implementation of the CMM. More specifically, our findings across multiple instances of developing and executing CMMs suggest that the user spends 75–98% of the total wall clock time required for the effort, while the remainder is spent in the computational fabric. Clearly, this situation motivates the development of a methodology that enables the users to shift the burden of implementation from their own reasoning to the computational fabric as much as possible. Additional motivation for developing a metacomputing infrastructure stems from the fact that it may improve accuracy, precision, length scale bridging, and other desirable performance metrics.
A way to further identify the roles of the user and the computer in the derivation and evaluation or solution of CMMs is to consider the abstract workflow of CMM generation activities depicted in Fig. 2 [3]. Individual rectangles denote each major activity, while the thick arrows denote the succession and data flow from source to target. The first activity to be instantiated, on the far left of Fig. 2, is the definition by the researcher of the state space variables representing the fields of interest for a particular system. These are usually given in terms of conjugate state variables, the product of which has units of energy density. Then, the researcher has to invoke and formulate the specific form of the conservation laws applicable to the system in their local or global form. Following this, the researcher needs to develop the constitutive laws required to make the conservation laws algebraically closed, because the number of conservation laws is half the number of field variables characterizing the state of the system. Furthermore, the researcher may want to use formulation axioms like those associated with the constitutive theories, or axioms that enforce certain properties like frame of reference invariance, equipresence, neighborhood, etc. A more detailed description of this process will be given later in the present work.

Activities workflow for the formation and solution of equation-based multiphysics models and the respective background computing execution embedding
The combination of the last three activities enables the derivation of the governing equations (usually PDEs) represented by the next node to the right. After a boundary value problem that represents the CMM of the physical system is defined, the derived PDEs must be solved and the results visualized. The “person” and “computer monitor” icons of Fig. 2 denote that, almost exclusively, the solution and visualization of the PDEs require the use of numerical computing and the presence of the user to handle all the necessary dependencies. All other activities have, in their great majority, been handled by the individual researcher using pen-and-paper computing. Some small exceptions involving symbolic computing have appeared in the past, for the derivation and analytical solution of the PDEs and the associated formulation of the relevant boundary value problems (BVPs). It is implicit that all such activities are valid only if the researcher is aware of the meta-theoretic procedures necessary for making this an admissible workflow. These involve the belief that the conservation laws and thermodynamics are valid and should not be ignored. Furthermore, some implementation meta-axioms may be relevant here as well. Examples include whether infinitesimal strain tensors or finite strain tensors should be used, or whether the material properties will be considered constants or dependent on some of the state variables such as temperature. Although the majority of the focus for CMMs during recent decades has been invested in the solution of the PDEs representing the CMM at hand, a significant effort is required to derive these PDEs, as indicated by the nodes to the left of the “solve the governing PDEs” activity in Fig. 2.
An alternate way to focus on the CMM generation process is to look at the hierarchical modeling structure depicting the various incarnations of a model attempting to mimic the behavior of a physical system as shown in Fig. 3. The two major approaches for implementing both the workflow and the individual activities associated with the derivation and solution of CMMs as depicted in Fig. 2, and in the CMM hierarchy as depicted in Fig. 3, are the user-defined Top-Down Architecture Approach (TDAA) and the semi-automatically defined Bottom-Up Architecture Approach (BUAA). Both of these approaches have been explored and initial steps have been demonstrated recently in Refs. [1,2]. An outline of them will be provided here to demonstrate their feasibility and benefits as a motivational opportunity for the future.

Modeling hierarchy for a physical system and associated focus on the analytical and computational models
TDAA for Metacomputing.
Prior to describing the TDAA itself, we describe here the opportunities offered by the semantic and syntactic spaces associated with CMMs in the context of the TDAA.
Semantic Space.
Within the semantic context, the variables and coefficients participating in a CMM representation can be instantiated not just in the field of real numbers, as has been done in the great majority of the available literature. HAs, as products of the Cayley–Dickson construction, as members of the set of Clifford algebras that provide the grammar for geometric calculus, and as quantizations of exterior algebra, offer the capability for a higher semantic dimensionality representation of equational theories. Their benefits in enabling lower equational complexity are largely under-recognized and under-utilized.
It is important to highlight that the key idea enabling the utilization of HAs, and specifically the complex and quaternion algebras, for expressing 2D and 3D continuum multiphysics problems is based on a key observation: If the algebra of complex numbers is defined as $\mathbb{C} = \{a + b\,i \mid a, b \in \mathbb{R}\}$ and the quaternion algebra is defined as $\mathbb{H} = \{a + b\,i + c\,j + d\,k \mid a, b, c, d \in \mathbb{R}\}$, and if $\{\mathbf{e}_1, \mathbf{e}_2\}$ is the orthonormal basis of $\mathbb{R}^2$ and $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ is the orthonormal basis of $\mathbb{R}^3$, then to every vector $\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 \in \mathbb{R}^2$ there corresponds the complex number $z = v_1 + v_2\,i$, and to every vector $\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3 \in \mathbb{R}^3$ there corresponds the quaternion $q = v_1\,i + v_2\,j + v_3\,k$. Therefore, the dimensional equivalence between $\mathbb{R}^2$ and $\mathbb{C}$, and between $\mathbb{R}^3$ and the purely imaginary quaternions, permits the identifications $\mathbb{R}^2 \cong \mathbb{C}$ and $\mathbb{R}^3 \cong \operatorname{Im}\mathbb{H}$, respectively.
Limited efforts to exploit complex-valued potential functions defined over complex variables (i.e., approaches involving the first HA above the reals) have been identified in the past for 2D problems in elasticity [4–10], thermoelasticity [11,12], hygrothermoelasticity [13], fluid mechanics [14], etc. For all these cases, the problem of solving a single or a set of PDEs is reduced to the problem of evaluating algebraic equations expressed in terms of complex potential functions, thus drastically simplifying the original problem. Similarly, during the last two decades, some work has been published involving quaternion algebra for solving 3D problems in elasticity [15–28], fluid mechanics [29,25,30], multiphysics problems such as electromagnetism [31–40], and thermoelasticity [41,42,20]. For all these cases, the problem of solving a single or a set of PDEs is reduced to the problem of evaluating algebraic equations expressed in terms of quaternionic potentials, thus drastically simplifying the original problem.
Consequently, it appears that allowing an equational representation to be redefined in a higher dimensionality semantic space largely dissolves the original problem: the task of solving PDEs is replaced by the much simpler task of evaluating algebraic expressions.
Syntactic Space.
Similarly, in the syntactic space context, it is often forgotten that the traditional equational representation is a string sequence of equational terms read from left to right, which is actually embedded in a 1D syntactic space. It is also often forgotten that the solution of equations via algebraically enabled rewriting methods is restricted by our insistence on writing equations in this 1D string formalism. The dominance of sentential representation systems in the history of modern logic has obscured several important facts about diagrammatic systems. One of them is that several well-known diagrammatic systems were available as heuristic tools before the era of modern logic and algebra. Euler circles, Venn diagrams, and Lewis Carroll’s squares have been widely used for certain types of syllogistic reasoning [43–45]. Another not well-known story is that a founder of modern symbolic logic, Peirce, not only revised Venn diagrams but also invented a graphical system called “existential graphs,” which has been proven to be equivalent to a predicate language [46–49]. In the 1930s, Gentzen introduced the 2D expression called a “sequent” and demonstrated its efficiency for performing logical expression evaluation [50,51]. Furthermore, Brown [52] introduced another 2D diagrammatic system that was also focused on Boolean logic and propositional calculus. Subsequently, a few investigators in the area of reasoning and logic extended the syntactic dimensionality to 2D, motivated by the advent of computational progress in 2D graphical user interfaces [53–55]. However, these graphical representation systems have not been used in formal contexts such as proofs because they are considered to be unreliable or because they often cannot capture general cases. Responding to the need for tools that can reconcile the apparently opposing issues of formal rigor and intuitive understanding, Buchberger proposed the new concept of logographic symbols [56,57] and implemented them in the theorem prover “Theorema” [58]. Nevertheless, none of these 2D diagrammatic and symbolic efforts addressed the equational aspects associated with theories of continua in terms of graph representations involving nodes and arrows connecting them.
To the authors’ knowledge, the first attempt to utilize graphs (weighted and undirected) for describing circuit networks was made by Kron [59]. Subsequently, equational representations of dynamical systems in the form of Bond graphs were introduced by Paynter [60]. DAGs were then introduced for the first time as ASGs by Mast [61–63] for representing and solving elasticity problems expressed over tensor quantities defined in the algebra of complex numbers, and in Ref. [9] it was proposed to extend them to the multiphysics of continua. Independently, and unaware of the ASG efforts, Tonti introduced DAGs for representing equational theories [64–66], with all quantities labeling the nodes and edges of the graphs defined over the field of reals. In the 1980s, Deschamps utilized DAGs with nodes representing scalar and vector quantities to represent Maxwellian electromagnetics [67]. Finally, “Formal” graphs were introduced for electromagnetics [69], where the nodes and edges are labeled by scalar and vector-based equational components.
It should be underlined here that although in form the ASGs are reminiscent of the Tonti [68], Bond [60], Kron [59], Deschamps [67], and Formal [69] graphs (TBKDF graphs hereafter), the TBKDF graphs are functionally very different from ASGs due to two critical features:
Although TBKDF graphs enable compositionality, they are not endowed with direct computability because they do not contain an isomorphic syntactic and semantic evaluation capability. The typing of their arrow labels involves operators that do not allow the syntactic operator of concatenation to act consistently for all arrows as signifying either tensor product or function application, as it does for ASGs.
TBKDF graphs are not capable of expressing algebraic theories in terms of the two required fundamental operators, where one distributes over the other (e.g., as tensor multiplication distributes over tensor addition), to enable algebraic semantics of tensor operations. Therefore, path-finding algorithms cannot be implemented for constructing the compositional operations implementing transitivity for direct evaluation, as in the case of ASGs.
TDAA Approach Outline.
To automate as many as possible of the workflow activities depicted in Fig. 2 and to introduce the benefits of semantic and syntactic dimensionality raising, we have expanded and somewhat re-factored this workflow, resulting in Fig. 4 as described in Ref. [1]. It should be noted that two new activity nodes have been inserted to represent the semantic and the syntactic dimensionality raising of the models derived from the previous activities. The computational embeddings of the activity nodes are represented by the background rectangles. These signify both the pre-existing state of the art and the new one described later in this work.

Proposed activity workflow, refactoring, and expanding for the formation and solution of equation-based multiphysics models. The respective background computational embedding, along with computational modules (dashed lines) that implement the corresponding functionalities, are also shown.
The software modules developed for automating these activities are represented in Fig. 4 by the dashed outlined rectangles with curved corners, and they contain the activities they implement. The CMEB enables the activities of (1) defining the state variable pairs describing the state of a multiphysics system, (2) formulating the conservation laws, (3) achieving their algebraic closure through the development of constitutive relations, (4) enriching them with various axioms of the classical constitutive theory as well as other assumptions, and finally (5) generating the respective algebraically closed system of PDEs governing the spatiotemporal behavior of the system of interest. The R2hC module is responsible for converting the PDEs developed by the previous module, expressed in terms of variables defined over the field of real numbers ℝ, to a set of equations defined over the field of complex numbers ℂ or the quaternion algebra ℍ. This effectively raises the semantic dimensionality of the models and reduces their representational complexity. The e2g module is responsible for converting the equational form of the model developed by the previous module to an algebraic solution graph. This module effectively raises the syntactic embedding dimensionality to two or three dimensions and reduces both the representational and the solution complexity.
Continuum Multiphysics Equation Builder.
The CMEB system is a computational infrastructure for the automated derivation, composition, and deployment of models. It enables a user to be responsible only for specifying the physics/engineering problem to be simulated and frees them from the tasks of manually selecting the details of a simulation tool, its connectivity, and associated data files and other dependencies. The entire algorithmic development of the system is based on the following meta-axioms for open and closed continuum multiphysics systems:
Conservation laws for mass, linear and angular momentum, and energy hold [3].
The electromagnetic principles of electric charge conservation, Gauss's law, Gauss's law for magnetism, Faraday's law of induction (Maxwell–Faraday law), and Ampère's circuital law (with Maxwell's addition) hold. The last four laws, representing Maxwell's PDEs of electrodynamics, can also be represented by the two conservation laws of the magnetic vector and electric scalar potentials [3].
The laws of thermodynamics of continua [70] hold.
Among all possible formalisms available, the particular formalism utilized for implementing thermodynamics in CMEB is that of the theory of irreversible processes near equilibrium [70].
Neumann’s principle, or the principle of symmetry [71], states that “if a crystal is invariant with respect to certain symmetry operations, any of its physical properties must also be invariant with respect to the same symmetry operations.” This principle is taken to hold for any (not necessarily crystalline) material that exhibits symmetry of any kind (e.g., laminated composite materials homogenized at either the lamina or laminate levels).
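In mathematical terms (using a common notation; the symbol ⊛ for the Rayleigh product is ours), Neumann's principle requires every material property tensor K of a material with symmetry group G to satisfy

\[
\mathbf{K} = Q \circledast \mathbf{K}, \qquad
(Q \circledast \mathbf{K})_{i_1 i_2 \cdots i_n} = Q_{i_1 j_1} Q_{i_2 j_2} \cdots Q_{i_n j_n} K_{j_1 j_2 \cdots j_n},
\qquad \forall\, Q \in G .
\]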
The CMEB framework establishes constitutive relations, employs the relevant conservation laws, and introduces formulation axioms in order to automatically derive constitutive and governing equations for continua exposed to multiple fields. It is evident that the computational and prototyping resource for implementing CMEB should be capable of symbolic manipulation, which is the main activity performed by the various investigators on the topic. The Mathematica [72] symbolic algebra and programming environment was chosen for these tasks mainly because of its rich set of features and maturity.
The CMEB infrastructure applies intelligent automation through a design implementation mainly consisting of an architecture involving four sub-modules. Its outline is depicted in Fig. 5. These sub-modules are described as follows:

Outline of the workflow implemented by the CMEB module for generating CMMs defined over the field of real numbers
Part I—User Input: Through an interrogative custom-generated graphical user interface (GUI), the user is asked to provide only six selections to generate the PDEs that represent and govern the CMM. The user also has the option to make declarative incorporation of additional characteristics of the continuum theory of interest.
The following user input options are available for each selection widget of the GUI:
Physics: elastic, thermal, chemical, electrical, and magnetic.
Independent variables: strain or stress, entropy density or temperature, chemical species concentration or chemical potential per unit volume, electric displacement field or electric vector field, and magnetic displacement field or magnetic flux density field. The remaining variables of each pair are the dependent ones.
Number of charged and uncharged species given by a positive integer.
Material symmetry class: anisotropic, monoclinic, orthotropic, transversely isotropic, cubic, or isotropic material symmetry.
Order of constitutive laws: linear, quadratic, or cubic, etc.
Spatial dimension: 2D or 3D.
Part II—Derivation of Coupled Constitutive Laws From a Thermodynamic Potential: As alluded to by the previous sub-module description, the state of the system defined by the input section can generally be described by four pairs of conjugate field variables: the stress–strain pair for structural mechanics, the temperature–entropy pair for thermal physics (heat transport), the electric field–electric displacement pair for electric physics, and the magnetic field–magnetic flux density pair for magnetic physics. In addition, there are n chemical potential–concentration pairs for the mass transport physics of n diffusing chemical species, making a possible maximum total of 4 + n conjugate pairs. The user selections for independent variables implicitly define which thermodynamic potential will be used. However, the user has the alternative option to directly select one of many thermodynamic potentials (corresponding to the choice, for each conjugate pair, of which component will be the independent and which the dependent variable) for introduction into the first and second laws of thermodynamics. For the case of a single chemical species, there are five independent variable fields; therefore, there are $2^5 = 32$ possible thermodynamic potentials. Among these 32 possible thermodynamic potential functions, the framework enables the selection of the one that can be constructed for a given selection of independent variables representing the system’s state. Any of the 32 choices is possible; however, the GUI provides the user with the option to select one of the named or well-known functions: the internal energy density, the Helmholtz free energy density, the generalized enthalpy density, the Gibbs free energy density, or the Landau (or grand) potential density, all expressed in terms of the stress and strain second-order tensors, the temperature, entropy density, species concentration, and chemical potential scalars, and the electric displacement, electric field, magnetic field, and magnetic flux density vectors.
It should be noted that the nine constitutive axioms of causality, determinism, local action, material frame indifference, dissipation, equipresence, time reversal, memory, and admissibility [73] are participating in the declarative part of the implementation as specified by the user.
Subsequently, this sub-module of CMEB performs a multivariate Taylor series expansion of the selected thermodynamic potential about the origin up to order m + 1, where m is the order of the desired constitutive law, also defined by the input sub-module. Finally, due to the second law of thermodynamics, the constitutive laws of the system for the dependent variables are determined by symbolically evaluating the first-order derivatives of the Taylor series expansion with respect to the respective conjugate (independent) variables.
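As an illustration of this step only (the actual CMEB implementation is written in Mathematica and handles the full multiphysics case), the following minimal sympy sketch expands a hypothetical one-dimensional thermoelastic free-energy density to second order and recovers the linear constitutive laws as its first derivatives; all symbol names are illustrative:

```python
import sympy as sp

# Minimal sketch (not the CMEB implementation): a quadratic free-energy density
# psi(eps, theta) for a 1D thermoelastic continuum; symbols are illustrative.
eps, theta = sp.symbols('varepsilon theta')   # strain and temperature change
C, beta, c0 = sp.symbols('C beta c0')         # stiffness, coupling, heat-capacity coefficients

psi = sp.Rational(1, 2)*C*eps**2 - beta*eps*theta - sp.Rational(1, 2)*c0*theta**2

sigma = sp.diff(psi, eps)     # stress   = d(psi)/d(eps)    -> C*eps - beta*theta
eta = -sp.diff(psi, theta)    # entropy  = -d(psi)/d(theta) -> beta*eps + c0*theta
print(sigma, eta)
```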
Part III—Incorporate Selected Symmetry in Constitutive Forms: Neumann’s principle is applied to simplify the form of the constitutive equations when certain material symmetries are known. Specifically, for each selected (by the user via the input module) symmetry class, physics, and order of constitutive laws, a set of matrices is generated based on an internal algorithm that follows Neumann’s principle, or principle of symmetry [74,75]. This principle was originally derived for crystalline substances, but here it is taken to hold for any material that exhibits symmetries due to its constituent makeup (such as laminated composites), by the fourth meta-axiom above. Therefore, each symmetry class is described by a well-known material symmetry group G (a set of tensors).
In this sub-module, an algorithm was written to solve for the elements of the relevant tensors C of various orders such that C = Q ⊛ C for all symmetry transformations Q ∈ G, where ⊛ denotes the Rayleigh product. The Rayleigh product is defined for any tensor rank; therefore, this algorithm generates constitutive material laws for any physics and any order.
Examples of tensors obtained by this algorithm are (a) stiffness tensor (effects of symmetry on fourth-rank tensors), (b) piezo-electric tensor (effects of symmetry on third-rank tensors), (c) dielectric permeability, electromagnetic and magnetic-permeability tensors, electric and thermal conductivity, thermal expansion, and hygroscopic expansion (effects of symmetry on second-rank tensors).
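A minimal, language-agnostic sketch of this symmetry-enforcement idea (the production algorithm is implemented in Mathematica and handles arbitrary tensor rank) is shown below for a second-rank property tensor under orthotropic symmetry; the symbol names are illustrative:

```python
import sympy as sp

# Minimal sketch (not the CMEB/Mathematica implementation): enforce a material
# symmetry on a second-rank property tensor K (e.g., thermal conductivity) by
# requiring invariance K = Q K Q^T for every Q in the symmetry group G.
K = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'K{i + 1}{j + 1}'))

# Orthotropic symmetry: two-fold rotations about the three coordinate axes.
G = [sp.diag(1, -1, -1), sp.diag(-1, 1, -1), sp.diag(-1, -1, 1)]

constraints = []
for Q in G:
    constraints += list(Q * K * Q.T - K)   # element-wise invariance conditions

sol = sp.solve(constraints, list(K.free_symbols), dict=True)[0]
print(K.subs(sol))   # off-diagonal entries vanish; the three diagonal entries remain free
```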
Part IV—Automated Derivation of 2D or 3D Field Governing Equations: Utilizing the local form of the conservation laws of mass, momentum, and energy, this module derives a set of PDEs that correspond to the physics selected. However, these PDEs are not algebraically closed, as both dependent and independent variables participate, clearly indicating that there are fewer equations than unknowns. To generate an algebraically closed system of PDEs, this sub-module applies the constitutive equations derived by the previous sub-module along with the additional relationships required. That is to say, it applies the gradient equations of material kinematics connecting the displacement vector with the strain tensor, and then the constitutive law connecting the stress tensor with all additional fields (if present). It also applies the Fourier constitutive law of heat conduction, Fick’s first law of diffusion, and Gauss’s law for magnetism. The derived and algebraically closed system of governing equations can be written in both 2D and 3D, based on the user’s selection in the input sub-module. This system includes a mass transport PDE for each of the participating species (originating from mass conservation), the structural equilibrium PDEs (originating from the conservation of momentum), the coupled heat conduction equation (originating from energy conservation), and Maxwell’s equations for the electromagnetic fields in their original or magnetic vector and electric scalar potential forms.
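For illustration only (this is the standard textbook form of such a closed system, not literal CMEB output), the coupled governing equations of linear isotropic thermoelasticity read

\[
\mu\,\nabla^2 \mathbf{u} + (\lambda + \mu)\,\nabla(\nabla\!\cdot\!\mathbf{u}) - (3\lambda + 2\mu)\,\alpha\,\nabla T + \mathbf{f} = \rho\,\ddot{\mathbf{u}},
\]
\[
k\,\nabla^2 T = \rho\, c_{\varepsilon}\,\dot{T} + (3\lambda + 2\mu)\,\alpha\,T_0\,\nabla\!\cdot\!\dot{\mathbf{u}},
\]

where $\lambda$ and $\mu$ are the Lamé constants, $\alpha$ the coefficient of thermal expansion, $k$ the thermal conductivity, $c_{\varepsilon}$ the specific heat at constant strain, and $T_0$ the reference temperature.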
Finally, this sub-module of CMEB has a rewriting engine to generate the derived equations in LaTeX [76] and PDF format for publication purposes.
A Three Field Computational Multiphysics Model Example: Linear Anisotropic Hygro-Thermo-Elasticity.
Also, the vertical line signifies that this is the value of the partial derivative of the quantity to the left of it, while the quantities in the subscripts are constant in a manner consistent with the theory of thermodynamics of irreversible processes.
It should be noted that in the nomenclature above and for the rest of the present work, regular italic symbols represent scalar quantities, bold italic symbols represent vectors, bold upright symbols represent second-order tensors, and double-lined symbols represent tensors of order higher than second.
The final product of CMEB is the governing PDEs, which can either be solved numerically by following any of the available discretization methods (e.g., finite differences, finite volumes, finite elements, the lattice Boltzmann method, etc.) or, alternatively, be transformed to simpler formalisms (involving algebraic equations or simpler PDEs) by the R2hC module described in the next section.
Semantic Equation Dimensionality Raiser From Reals (ℝ) to Hypercomplex (ℂ or ℍ) Algebras Framework
R2hC Architecture and Functionality Description.
The benefits of algebraic dimensionality raising by rewriting theoretical formalisms developed using variables defined over the field of reals into formalisms defined over the field of complex numbers have been recognized since the beginning of the twentieth century, as described in the Introduction. In this regard, the R2hC module is responsible for converting the PDEs developed by CMEB in terms of variables defined over the field of real numbers ℝ to a set of algebraic equations defined over the field of complex numbers ℂ or the quaternion algebra ℍ. This effectively raises the semantic dimensionality of the models and reduces their representational complexity. The Mathematica [72] symbolic algebra and programming environment was also chosen for its implementation.
The R2hC infrastructure applies intelligent automation through a design implementation mainly consisting of an architecture involving the sub-modules depicted in Fig. 6. These sub-modules are described as follows:

Architectural outline of the R2hC module workflow for generating CMMs defined over the field of complex numbers or quaternions
Part I—User Input: Similar to the case of CMEB, the user is asked to provide only four selections needed by R2hC through an interrogative GUI. The user also has the option to make declarative incorporation of additional characteristics of the continuum theory of interest. The input options are as follows:
Physics: elastic, thermal, chemical, electrical, and magnetic.
Spatial dimension: 2D or 3D.
Material symmetry class: anisotropic, monoclinic, orthotropic, transversely isotropic, cubic, or isotropic.
Problem domain/topology: Finite simply-connected (i.e., without inclusions/holes), finite with inclusions, or infinite with inclusions.
Part II—Field Governing Equations: This module takes the PDE output of the CMEB framework and prepares it for solving various problems based on the user’s selections in the previous sub-module. Alternatively, it allows the user to define the PDE of interest. The dimensionality of the problem domain specified in the previous sub-module is used here to enable the branching transformation from the algebra of ℝ onto the algebra of ℂ or ℍ. It also prepares the PDE forms for the appropriate transformations to be implemented in the following sub-modules.
Part III—ℝ to ℂ Mapping for 2D Problems: The following steps are implemented in this sub-module to enable solving 2D problems defined over the algebra of ℂ:
Select proper PDE formalism based on isotropic or anisotropic material symmetries.
Select the appropriate methodology for utilizing complex potentials. The options vary from the single-physics ones described in Refs. [4,77,5,78] to the multiphysics ones utilizing the Papkovich–Neuber representation [79,80] or the Galerkin–Westergaard representation of potential functions [81–83]. Although some of these have been developed for 3D problems, they can be applied to 2D problems as well. In the current implementation of R2hC, only the Kolosov–Muskhelishvili (K–M) approach involving holomorphic complex potential functions has been implemented.
Formulate the isotropic Airy biharmonic PDE and the associated Airy function [84,85], along with the Navier PDE for displacements enhanced by additional physics in terms of Neuber, Papkovich, or Galerkin potentials for isotropic solids. Alternatively, for anisotropic bodies, formulate the anisotropic Airy PDE as introduced by Lekhnitskii [7,86].
Perform the Airy and Navier mapping by introducing the commutative algebra of complex numbers ℂ, where $z = x + i y$ and $\bar{z} = x - i y$ are the complex variable and its conjugate and $i$ is the imaginary unit, and then apply the chain rule of differentiation and the Cauchy–Riemann conditions (see the sketch following this list).
Perform algebraic reduction in terms of the involved potentials, which are now all defined as functions of the complex variable $z$ (and its conjugate $\bar{z}$).
Select a particular domain topology to distinguish among simply connected, multiply connected, finite or infinite domains of applicability.
Define boundary and/or continuity conditions to select proper complex potential functions such that the conditions are satisfied by the selected general forms of the complex potentials.
Formulate Riemann–Hilbert problem for multiply-connected domains or power series with collocation method for simply-connected domains according to the methods described in [4,5,78].
Determine required admissible complex potentials.
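The essence of step 4 above can be sketched with a few lines of symbolic computation (a minimal sympy illustration rather than the Mathematica-based R2hC module; the helper names dx and dy are ours):

```python
import sympy as sp

# Treat z and its conjugate zb as independent variables and express the Cartesian
# derivatives through the chain rule; the biharmonic operator then collapses to
# 16 * d^4/(dz^2 dzb^2), which is the key to the complex-potential representation.
z, zb = sp.symbols('z zb')
U = sp.Function('U')(z, zb)

def dx(expr):   # d/dx = d/dz + d/dzb
    return sp.diff(expr, z) + sp.diff(expr, zb)

def dy(expr):   # d/dy = i*(d/dz - d/dzb)
    return sp.I*(sp.diff(expr, z) - sp.diff(expr, zb))

laplacian = lambda e: dx(dx(e)) + dy(dy(e))
biharmonic = sp.expand(laplacian(laplacian(U)))

print(sp.simplify(biharmonic - 16*sp.diff(U, z, 2, zb, 2)))   # -> 0
```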
A typical illustrative example that demonstrates the benefits of semantic dimensionality raising from ℝ to ℂ is shown for the case of isotropic and anisotropic elasticity in Fig. 7, where the case of quasi-static elasticity without body forces is considered. In ℝ, the solution comes either from solving the three partial differential equations of equilibrium or the biharmonic Airy PDE. However, in ℂ, only a system of five algebraic equations needs to be evaluated (called the K–M relations [4,5] for the isotropic case) in terms of two complex potential functions. For the anisotropic case, the equations are evaluated in terms of six complex potential functions [7,86]. Therefore, the problem of solving the appropriate PDEs in ℝ has now been reduced to the problem of determining the complex potentials required to evaluate algebraic equations.
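For reference, the isotropic K–M relations have the classical form (quoted here in a common textbook notation, which may differ from the symbols used inside R2hC):

\[
\sigma_{xx} + \sigma_{yy} = 2\!\left[\varphi'(z) + \overline{\varphi'(z)}\right] = 4\,\operatorname{Re}\varphi'(z),
\qquad
\sigma_{yy} - \sigma_{xx} + 2 i\,\sigma_{xy} = 2\!\left[\bar{z}\,\varphi''(z) + \psi'(z)\right],
\]
\[
2\mu\,(u + i v) = \kappa\,\varphi(z) - z\,\overline{\varphi'(z)} - \overline{\psi(z)},
\]

where $\varphi(z)$ and $\psi(z)$ are the two holomorphic potentials, $\mu$ is the shear modulus, and $\kappa = 3 - 4\nu$ for plane strain or $\kappa = (3-\nu)/(1+\nu)$ for plane stress.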

Schematic description of the effects that dimensionality raising from ℝ to ℂ has for the case of isotropic and anisotropic elasticity. Instead of solving PDEs, we only need to identify complex potentials that, when substituted into an algebraic system of equations, allow the unknown quantities (stress and displacement components) to be determined by simple evaluation.

Part IV—ℝ to ℍ Mapping for 3D Problems: The following steps are required to solve a 3D problem in a manner analogous to that described earlier for the 2D problems case:
Select proper PDE formalism based on isotropic or anisotropic material symmetries.
Select the appropriate methodology for utilizing quaternion-based potentials. The options vary from the single-physics ones, as in elasticity [15–28] and fluid mechanics [29,25,30], to the multiphysics ones, such as electromagnetism [31–34,36,37,39,40] and thermoelasticity [41,42,20], utilizing the Papkovich–Neuber representation [79,80] of potential functions in terms of quaternionic potential functions. In the current implementation of R2hC, the 3D extension of the K–M relations according to Ref. [20] has been initiated.
Formulate the isotropic Airy biharmonic and Navier PDEs, or their extended forms (for additional physics), and the associated functions [84,85], enhanced by additional physics in terms of Neuber, Papkovich, or Galerkin potentials for isotropic solids. The case of 3D anisotropic bodies does not yet have a Navier extension; therefore, it has not been considered for this sub-module.
Perform the ℝ to ℍ mapping by introducing the non-commutative quaternion algebra ℍ, where $i$, $j$, $k$ are imaginary units satisfying the multiplication rules $i^2 = j^2 = k^2 = -1$, $ij = -ji = k$, $jk = -kj = i$, $ki = -ik = j$, and where $q = q_0 + q_1 i + q_2 j + q_3 k$, $\operatorname{Sc}(q) = q_0$, $\operatorname{Vec}(q) = q_1 i + q_2 j + q_3 k$, $\bar{q} = q_0 - q_1 i - q_2 j - q_3 k$, $\hat{q} = -k\,q\,k$, $|q| = \sqrt{q\bar{q}}$, and $q^{-1} = \bar{q}/|q|^2$ are, respectively, the quaternion variable, its scalar part, its vector part, its conjugate, its k-involute, its norm, and its inverse (when $q \neq 0$). Then apply the chain rule of differentiation and the generalized Cauchy–Riemann conditions for ℍ [42] (a minimal numerical sketch of this algebra follows this list).
Perform algebraic reduction in terms of the involved potentials, which are now all defined as functions of the quaternionic variable $q$.
Select a particular domain topology to distinguish among simply connected, multiply connected, finite or infinite domains of applicability.
Define boundary and/or continuity conditions in order to select proper quaternionic potential functions such that the conditions are satisfied by the selected general forms of the potentials.
Formulate the problem for multiply-connected domains or collocation method for simply-connected domains according to the methods described in any of Refs. [15–28].
Determine required admissible quaternion potentials.
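As a minimal, self-contained sketch of the quaternion arithmetic underlying these steps (the actual R2hC module is implemented in Mathematica; this toy class is only for illustration):

```python
class Q:
    """Toy quaternion q = a + b*i + c*j + d*k (illustration only, not R2hC)."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):  # Hamilton product (non-commutative)
        a, b, c, d = self.a, self.b, self.c, self.d
        e, f, g, h = o.a, o.b, o.c, o.d
        return Q(a*e - b*f - c*g - d*h,
                 a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f,
                 a*h + b*g - c*f + d*e)

    def conj(self):        # quaternionic conjugate
        return Q(self.a, -self.b, -self.c, -self.d)

    def __repr__(self):
        return f'{self.a} + {self.b}i + {self.c}j + {self.d}k'

i, j, k = Q(0, 1, 0, 0), Q(0, 0, 1, 0), Q(0, 0, 0, 1)
print(i*j, j*k, k*i)   # -> k, i, j   (multiplication rules ij = k, jk = i, ki = j)
print(i*i)             # -> -1        (i^2 = -1)
```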
A typical illustrative example that demonstrates the benefits of semantic dimensionality raising from ℝ to ℍ is shown for the case of isotropic thermo-elasticity (without heat conduction) in Fig. 8, where the case of quasi-static elasticity with body forces is considered for constant temperature throughout the domain. In ℝ, the solution comes from solving the three partial differential equations of equilibrium folded into the momentum equation. However, in ℍ, only a system of five algebraic equations needs to be evaluated (called the generalized K–M relations [20] for the isotropic case) in terms of three quaternionic potential functions.

Schematic description of the effects that dimensionality raising from ℝ to ℍ has for the case of isotropic thermoelasticity without conduction. Instead of solving PDEs, we only need to identify quaternionic potentials that, when substituted into an algebraic system of equations, allow the unknown quantities (stress and displacement components) to be determined by simple evaluation.

Syntactic Embedding Dimensionality Raising Via the Equations to Graphs Framework
Equations to Graphs Architecture and Functionality Description.
The benefits of raising the syntactic embedding dimensionality of equations have been recognized since the middle of the twentieth century, as described in the Introduction. This necessitates rewriting theoretical formalisms originally developed using variables defined over ℝ in the form of equational theories expressed in 1D space. These may be rewritten in the form of 2D or 3D ASGs, where the labels of both the nodes and the edges are defined over hypercomplex algebras. To achieve this automated embedding dimensionality raising from equations to graphs, the e2g module was designed and developed within the Mathematica symbolic algebra system [72]. In general, the e2g module is a multi-modal interactive tool for equational parsing, ASG-DAG building, manipulation, and computation. More specifically, the e2g system implements and executes the following functionalities:
Parses a set of equations describing a particular problem in continuum mechanics provided as an input in LaTeX format.
Rewrites these equations in the form of a 2D or 3D DAG based on user preferences.
Enables utilization of graph embedding algorithms for generating alternate 2D or 3D representations. Embeddings that apply a physical parameterization (e.g., elastic spring physics on the edges and Coulomb electrostatic fields on the nodes) to control the graph layout are particularly effective (a brief layout sketch follows this list).
Enables successive composition of sub-problem sub-graphs into the full ASG to prevent cognitive overload.
To endow this DAG with the ability for direct composition and computability, it expresses all symbols related to edge and node labels in terms of quantities defined over the appropriate hypercomplex algebra (ℂ for 2D problems and ℍ for 3D problems).
Utilizes edge typing to distinguish between tensor multiplication and addition to enable tensor polynomial expressions in the context of the ASG.
Allows expression synthesis based on the selection of “target” and “source” nodes in the ASG. These user-selected nodes correspond to what needs to be computed and from which known quantities, respectively.
Facilitates and implements a path-finding algorithm for connecting the selected source and target nodes to enable symbolic and numerical computing based on ASG traversal.
Provides the capability of automatic expression term substitution and simplification to produce optimized human-parsable output.
Enables capability for isolation and display of individual sub-graphs of interest.
Provides an interactive output expression manipulator.
Enables interactive 3D visualization (rotation, panning, scaling of graphs) and computing functionality to gain an understanding of the underlying problem structure easily.
Provides first-person-view ASG-DAG visualization intended for immersive virtual-reality environments.
Provides the user with preferences for sizing, styling, and placing elements of the resulting ASG-DAGs
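A force-directed layout of the kind described in the third bullet above can be illustrated with networkx's spring_layout (Fruchterman–Reingold), which models edges as springs and nodes as mutually repelling charges; this is only an illustration, not the e2g/Mathematica implementation:

```python
import networkx as nx

# Illustrative force-directed embedding: edges behave as springs, nodes repel.
g = nx.gnp_random_graph(12, 0.3, seed=1, directed=True)
pos2d = nx.spring_layout(g, dim=2, seed=42)   # 2D embedding
pos3d = nx.spring_layout(g, dim=3, seed=42)   # 3D embedding
print(pos2d[0], pos3d[0])                     # coordinates assigned to node 0
```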
Semantics and Operational Outline of Algebraic Solution Graphs.
The basic ASG semantics implemented by e2g are as follows. Each directed edge of the graph connects an in-node to an out-node in that direction only. If a directed edge between two nodes is of type 0 (denoted by a solid line), then, to obtain the quantity signified by the out-vertex label, the value of the edge label must be applied to the value of the in-vertex label. Here “apply” means either tensor multiplication or function application. If the arrow between two nodes is of type 1 (a dashed line), then the value of the edge’s out-vertex label is calculated by summing the quantities given by applying the edge labels of all type 1 edges incident to the out-node in question to the values given by their respective in-node labels.
ASGs enable computability over graph paths (thus permitting the name “solution” graphs) in addition to their abilities for relational representation of relevant quantities. This is because the node labels correspond to tensors, and the edge labels correspond to function application of three types: tensor product, tensor sum, or substitution. When this is considered from the perspective of the possibility that the tensors are defined over ℂ (for 2D problems) or over ℍ (for 3D problems), formulations by the use of e2g lead to compact and invariant tensor formulations as described separately in Ref. [1].
Note the idiomatic use of the operators $=$, $+$, and $\circ$ to define each equation. Each equation has the form $y = f_1 \circ x_1 + f_2 \circ x_2 + \cdots + f_n \circ x_n$, where $y$ and the $x_i$ are expressions corresponding to solution graph vertices, and the $f_i$ correspond to solution graph edges. The equation format $y = 0$ indicates that the node $y$ is to be taken as the appropriate zero-valued scalar, vector, or tensor. In most regular cases, no symbol is used for the operator $\circ$. However, here this symbol had to be introduced to enable the parser of e2g to consume the respective token and disambiguate it from empty space.
It should be noted that the highlighted paths in Fig. 9 are computed by e2g when a user selects a target and a source node to denote what needs to be computed (target node) from what is known (source node). In the example of Fig. 9, the user requested the evaluation of the stress tensor Tup(zn) from the displacements Ds(zn). The e2g tool automatically invokes the application of Dt(zn) that is necessary for computing the intermediate node Est(zn) and highlights the path connecting the origin Ds(zn) with the target Tup(zn). The 1D equational form equivalent to this path is displayed in the bottom-left corner of the visualization.
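The path-based evaluation idea can be illustrated with a toy three-node graph (a networkx sketch with made-up labels loosely mirroring the Ds → Est → Tup chain of Fig. 9; the real ASGs are built and traversed by e2g in Mathematica):

```python
import networkx as nx

# Toy ASG sketch (illustrative labels only, not the actual e2g graph): a type-0
# chain Ds --Dt--> Est --C--> Tup, i.e., strain from displacements, stress from strain.
asg = nx.DiGraph()
asg.add_edge('Ds', 'Est', op='Dt')   # Est = Dt o Ds
asg.add_edge('Est', 'Tup', op='C')   # Tup = C  o Est

path = nx.shortest_path(asg, source='Ds', target='Tup')
ops = [asg.edges[u, v]['op'] for u, v in zip(path, path[1:])]

# Compose the edge operators along the path into the equivalent 1D equational form.
expr = path[0]
for op in ops:
    expr = f'{op} o ({expr})'
print(f'Tup = {expr}')   # -> Tup = C o (Dt o (Ds))
```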
Equations to Graphs Utilization Example for Anisotropic Elastic Media.
The challenge problem selected to demonstrate the power of the proposed approach and the utilization of e2g is that of two semi-infinite bonded anisotropic elastic domains with partial cracks along the interface, loaded at infinity with a tensile load inclined to the global frame of reference. This problem was selected because, in the general case, it is associated with many bimaterial applications involving the multiphysics of temperature, moisture, and electric field excitation in addition to mechanical excitation, and it is related to composite materials, additively manufactured parts, piezo-ceramics, sensors, actuators, etc. The particular version of the problem focusing on the continuum mechanics aspects was selected to ensure verification based on previous work [62].
This problem is described by a composition of various sub-problems involving 123 tensor equations that, in turn, involve tensorial quantities and operators defined over the field of complex numbers ℂ. In the process of exploiting the current version of e2g, several sub-problems were constructed in the form of respective sub-graphs. Specifically, the Hilbert Arc formalism associated with this problem is shown in Fig. 10. Similarly, Fig. 11 shows the boundary displacement vector sub-graph, the differential constraints sub-graph, and the boundary stress vector sub-graph.

Hilbert arc subgraph for the anisotropic half-planes with interfacial cracks problem, as produced by e2g

Boundary displacement vector, differential constraints, and boundary stress vector sub-graphs for anisotropic half-planes with semi-infinite interfacial cracks loaded at infinity problem, as produced by e2g
Since e2g implements automatic composition of these sub-graphs, the ASG formalism of the complete problem can now be automatically constructed, as shown in Fig. 12, by providing the LaTeX source of the 123 algebraic tensor equations representing the formulation of the problem as presented in Refs. [61,62,63]. Clearly, the complexity of such ASGs may be cognitively overwhelming. For this reason, e2g includes a first-person-view capability within a virtual or augmented reality user interface.

View of the complete ASG for the problem of the bonded anisotropic media with semi-infinite cracks on the interface loaded at infinity, as produced by e2g
Bottom-Up Metacomputing
Objectives for the Bottom-Up Architecture Approach.
The goal of what is proposed herein is to remove the user from the helicoidal loop of creating, composing, deploying, and executing a CMM based on multiphysics modeling functionality encapsulated in legacy codes, as shown in Fig. 1. Therefore, the two objectives of our effort are to:
Identify which human activities and processes are amenable to abstraction and transition to a computational fabric that executes metacomputing operations, in the spirit that the computer generates at the meta-level what is to be computed at the actual computational level associated with traditional computing activities, in a manner that addresses the issues mentioned earlier.
Implement a prototype of this meta-computing infrastructure and verify its functionality and efficiency.
Prior Work and State of the Art.
The BUAA is being pursued in an effort to address the many challenges associated with the specialized knowledge overload required for manually deriving, deploying, and executing physics models. Many investigators and teams have developed systems automating various aspects of computational model generation, composition, and execution for the associated problem solving. These efforts can be classified based on the granularity of the functionality encapsulations, the ancillary utilities, and the mode of usage and abstraction. The six main categories are the unstructured libraries, the domain-specific libraries, the legacy-driven computer-aided engineering systems that evolved to become CMM capable, the dedicated PDE solver systems, the multiphysics environments focusing on the solution of systems of PDEs, and finally the problem solving environments. They have all been described in detail elsewhere [2].
The most common limitation of these systems is their failure to holistically address all bases of the CMM context space as defined in Ref. [2]. Another major limitation is that these systems require intensive participation of the user, not only in defining the problem to be solved but also in all other details associated with the optimal deployment of the software on the available computational and networking infrastructure.
To address the issues associated with the burden placed on the user to compose and deploy directly computable CMMs from pre-existing computational units, in a manner that does not require the user to know all the details pertaining to the choices involved in the CMM context space described in Ref. [2], we propose an approach based on performing operations acting on specifications of computational units that are defined in the context of category theory, as described in the following sections.
The following section describes the approach followed for the software design and development needed to perform the required meta-computing operations.
BUAA Approach Outline
BUAA Technical Issues.
The two most important technical issues associated with the ability to implement a BUAA are as follows:
Lack of context-free symbolic constructs capable of representing the semantics of the functionality of existing multiphysics computational modules: Efforts have been made to demonstrate metaprogramming, program synthesis, and automated software generation via various methodologies (mainly for business applications) by enabling computing over the specifications of the respective software modules. However, to the authors' knowledge, there has been no effort to express the specifications of existing multiphysics-aware modules in a manner that captures their composability with others. An abstraction is required to capture both the external and the internal functional specification of each computational module. It has to be context-free so that the relevant expressions can be consumed unambiguously by a metacomputing framework.
Lack of ability to perform metacomputing on specification entities representing functional modules, problem descriptions, and computational resources: Although compositional frameworks for developing applications have been demonstrated when specifications of the available modules are provided, they have been limited to non-multiphysics applications and do not generate provably correct constructs [87–90]. The realization that the colimit operator in category theory (CT) enables composition/integration was introduced by the software development community two decades ago [91–95]. The suitability and feasibility of utilizing categorical operators for engineering applications have been demonstrated for the case of designing a composite panel in Ref. [96].
Overview of the Approach.
The proposed approach involves the consideration of multiple specification meta-levels and computation over specifications. Two main constructs are proposed to be developed and integrated for addressing the above-mentioned issues. To address the first issue, the functionality of each process within the metacomputing layers is captured by an atomic specification formed as a composition of the Discrete Event System Specification (DEVS) and the Differential Equation System Specification (DESS), as depicted in Fig. 13.

Graphical depiction of the atomic specification to be considered as a composition of the DEVS and the DESS specifications for each process within the meta-computing layers
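As a concretization of this atomic specification, the following is a minimal sketch under the assumption that each process can be described by a discrete-event view and a continuous (differential equation) view; the type and field names are hypothetical and do not correspond to the actual framework classes.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Discrete Event System Specification (DEVS) view of a process.
struct DevsSpec {
    std::vector<std::string> inputPorts;
    std::vector<std::string> outputPorts;
    std::vector<std::string> states;
    // Transition table keyed by (current state, incoming event) -> next state.
    std::map<std::pair<std::string, std::string>, std::string> transitions;
};

// Differential Equation System Specification (DESS) view of a process.
struct DessSpec {
    std::vector<std::string> continuousStates;  // e.g., field degrees of freedom
    std::function<std::vector<double>(const std::vector<double>&, double)> rate;  // dx/dt = f(x, t)
    std::function<std::vector<double>(const std::vector<double>&)> output;        // y = g(x)
};

// Atomic specification used as an object in the category of specifications.
struct AtomicSpec {
    std::string moduleName;   // e.g., one encapsulated legacy-code module
    DevsSpec discrete;
    DessSpec continuous;
};
```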
To address the second issue, we consider the instances of the atomic entities encapsulating functional specifications as objects in the category of specifications and then utilize compositional software (such as SPECWARE [93]) implementing categorical operators such as the "limit" and "colimit" constructs to generate provably correct deployments of modular assemblies of CMMs that effectively function as Directly Computable Multiphysics Models (DCMMs), as shown in Fig. 14. Intrinsic to CT-based composition is that it ensures provably correct deployments, effectively acting as a CMM compiler for generating DCMMs. This approach is anticipated to automatically produce the code needed to enable the underlying composability required. The proposed CT-based approach will enable automated generation of missing files or scripts that implement the composition of pre-existing modules, as well as of data transformation modules ensuring compatibility of the domains and co-domains associated with the outputs and inputs of communicating modules.

(a) Typical representation of the Colimit operation within CT, (b) Colimit application for the design of complex object T4 from simpler ones (T1, T2, T3) via commutative diagrams of the functors si, and (c) Colimit application for developing a composition of three models, one referring to the aerothermostructural model encapsulated in CMSoft AERO-S [33], and another for the fluid domain CMM encapsulated in CMSoft AERO-F [34] and the computable interface model needed for passing data from one code to the other
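To illustrate the colimit idea in code, the following is a toy sketch, not the SPECWARE mechanism, in which a specification is modeled simply as a set of named symbols and a morphism as a symbol renaming; the pushout (a finite colimit) glues two specifications along a shared interface specification so that symbols identified through the interface coincide. All symbol and function names are illustrative.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

using Spec = std::set<std::string>;                   // symbols of a specification
using Morphism = std::map<std::string, std::string>;  // interface symbol -> target symbol

// Pushout of A <- I -> B: the union of A and B in which every B-symbol that is
// the image of an interface symbol is renamed to the corresponding A-symbol.
Spec pushout(const Spec& interfaceSpec, const Spec& a, const Spec& b,
             const Morphism& toA, const Morphism& toB) {
    std::map<std::string, std::string> identify;       // B symbol -> A symbol
    for (const auto& s : interfaceSpec)
        identify[toB.at(s)] = toA.at(s);

    Spec result = a;
    for (const auto& s : b)
        result.insert(identify.count(s) ? identify[s] : s);
    return result;
}

int main() {
    // Toy fluid/structure composition glued along a wetted-interface spec.
    Spec interfaceSpec = {"interface_traction", "interface_displacement"};
    Spec fluid = {"pressure", "velocity", "wall_traction", "wall_motion"};
    Spec structure = {"stress", "displacement", "surface_load", "surface_motion"};
    Morphism toFluid = {{"interface_traction", "wall_traction"},
                        {"interface_displacement", "wall_motion"}};
    Morphism toStructure = {{"interface_traction", "surface_load"},
                            {"interface_displacement", "surface_motion"}};

    for (const auto& s : pushout(interfaceSpec, fluid, structure, toFluid, toStructure))
        std::cout << s << "\n";   // symbols of the composed (colimit) specification
}
```

In the actual framework, the objects carry full atomic specifications rather than bare symbol sets, and the composition is additionally expected to generate the files and glue code that realize the identifications, as described above.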
Architecture Outline.
The outline of the architecture of the proposed computational infrastructure from the user perspective is depicted in Fig. 15. The core functional process of this architecture is the "meta-computing synthesis and composition" node, which serves as a model compiler that takes the four specification descriptions of the physics problem, the numerical problem, the software resources, and the hardware resources, and produces the respective DCMM. It should be noted that the system is designed such that the user has access through the GUI to the meta-level computing control widgets expressing the physics encapsulation and the relevant workflows, as well as to the results produced when the DCMM is executed. The specifications and descriptions are captured in terms of a representation that has both a user-accessible form and an internal form to be consumed by the meta-computing synthesis and composition process.
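Viewed as a model compiler, the synthesis and composition node can be summarized by the following hypothetical interface sketch; the class and field names are illustrative only and do not correspond to the actual framework code.

```cpp
#include <string>
#include <vector>

struct PhysicsSpec  { std::string workflowGraph; };                 // serialized WFPFS-level graph
struct NumericsSpec { std::string discretization; std::string solverOptions; };
struct SoftwareSpec { std::vector<std::string> legacyModules; };    // e.g., available executables
struct HardwareSpec { int nodes = 1; int coresPerNode = 1; std::string interconnect; };

struct Dcmm {                                // directly computable multiphysics model
    std::vector<std::string> inputFiles;     // generated input decks
    std::string runtimeScript;               // generated runtime implementation script
};

class MetaComputingCompiler {
public:
    Dcmm compile(const PhysicsSpec& physics, const NumericsSpec& numerics,
                 const SoftwareSpec& software, const HardwareSpec& hardware) const {
        Dcmm model;
        // 1. Transform the physics workflow into a legacy-code workflow.
        // 2. Generate input files consistent with the numerics specification.
        // 3. Emit a runtime script binding software modules to hardware resources.
        (void)physics; (void)numerics; (void)software; (void)hardware;
        model.runtimeScript = "#!/bin/bash\n# generated by the metacomputing layer\n";
        return model;
    }
};
```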
The challenges associated with the bottom-up approach have been identified to be the following:
Non-existence of a unique manner for encoding computable specifications.
Non-existence of specifications of the computational resources (numerical, software, hardware).
Lack of access to legacy code architects/developers regarding the structure and intent of the internal modules of legacy codes.
To address these challenges, we developed a plan that focused on:
Exploring specification-capturing methodologies and selecting one to implement for encoding computable specifications.
Utilizing the selected methodology to capture the specifications of the computational resources (numerical, software, hardware).
Obtaining access to the AERO suite architects/developers to capture the specifications of the structure and intent of its internal modules.

Outline block diagram of the internal architecture of the metacomputing synthesis and composition node of Fig. 15, depicting the internal form of the meta-computing and computing layers of the proposed framework along with the associated transformation meta-processes
It is important to highlight here that the main computing infrastructure in the "application runtime layer" will be the modules of the AERO suite of codes developed by CMSoft Inc. A more detailed description of the AERO suite that justifies its selection is presented in Ref. [2].
Consequently, the internal architecture of the metacomputing synthesis and composition node of Fig. 15 is shown in Fig. 16. The left side of this diagram represents the internal metacomputing representation of the overall semantic specification computing, where two metacomputing levels are shown: the required specifications for the physics composition involved (level L1) and the corresponding computational resources specifications captured via the DEVS and DESS representations. The right side of the diagram in Fig. 16 shows the user-exposed syntactic form of the physics specification widgets, both in terms of the level L2 layer of the workflow entities as perceived by the user and in terms of the level L3 workflow as expected by the legacy code, which corresponds to that of level L2. At the bottom of the right side of Fig. 16, the computing layer is depicted, which contains the scripts generated by the metacomputing layers along with all input files required to run the composition of all executables associated with the legacy code that represent the embodiments of the DCMM. The bottom left of Fig. 16 presents the legends associated with the L0 and L1 metacomputing layers and the computing layer.
Important Implementation Details.
The view presented on the right side of Fig. 16 splits the representation plane into the computing and the specification/metacomputing layers. The top level of the specification and metacomputing layer enables the user to assemble the meta-specification of a physics problem of interest in terms of physics-relevant computational abstractions. This is achieved by using a meta-meta integrated development environment (m2IDE) capable of capturing the Work Flow for the Physics Formulation Specification (WFPFS) in the form of a 2D graph capturing the workflow between physics-specific entities that a multiphysics domain expert is already familiar with. We will therefore be calling this tool the WFPFS-m2IDE.
The output of this facility, when passed through "Graph Transformer 1", will be a lower-level meta-specification representation of the interconnectivity between the various files and modules of the legacy code (in our case, the AERO suite). This representation also has a graph view that may be inspected by the user in graphical form. We will be calling this facility the Legacy Code Workflow Specification meta-IDE (LCWS-mIDE). A minimal sketch of such a graph transformation follows.
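The following sketch illustrates, under simplifying assumptions, how a physics-level workflow graph could be expanded into a legacy-code-level workflow graph using a node-expansion table; the node types and module names are placeholders and are not the actual AERO suite modules or the actual Graph Transformer 1 implementation.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct WorkflowNode { std::string id; std::string type; };
struct WorkflowEdge { std::string from; std::string to; };   // data-flow arrow
struct WorkflowGraph {
    std::vector<WorkflowNode> nodes;
    std::vector<WorkflowEdge> edges;
};

// Expansion table: physics node type -> legacy module(s) realizing it (placeholders).
const std::map<std::string, std::vector<std::string>> kExpansion = {
    {"FluidDomain",         {"flow_solver"}},
    {"ThermalStructure",    {"thermal_solver", "structure_solver"}},
    {"FluidSolidInterface", {"interface_matcher"}},
};

WorkflowGraph toLegacyGraph(const WorkflowGraph& physics) {
    WorkflowGraph legacy;
    std::map<std::string, std::string> representative;   // physics node -> representative module node
    for (const auto& n : physics.nodes) {
        for (const auto& module : kExpansion.at(n.type)) {
            legacy.nodes.push_back({n.id + "/" + module, module});
            representative[n.id] = n.id + "/" + module;
        }
    }
    // Re-wire data flow between the representative modules of the expanded nodes.
    for (const auto& e : physics.edges)
        legacy.edges.push_back({representative[e.from], representative[e.to]});
    return legacy;
}

int main() {
    // Toy conjugate heat transfer workflow: fluid -> interface -> solid.
    WorkflowGraph cht{{{"fluid", "FluidDomain"}, {"solid", "ThermalStructure"},
                       {"iface", "FluidSolidInterface"}},
                      {{"fluid", "iface"}, {"iface", "solid"}}};
    for (const auto& n : toLegacyGraph(cht).nodes) std::cout << n.id << "\n";
}
```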
The output of these facilities will be the Bash (or other type of shell) script required to automate the binding of all data files and legacy code modules for producing the solution of the defined problem. We will be referring to this script as the runtime implementation script.
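As an illustration only, the following sketch shows how a metacomputing layer could emit such a runtime implementation script from a simple invocation plan; the executable names, input file names, and launch syntax are placeholders and do not reflect the actual AERO suite invocation conventions.

```cpp
#include <fstream>
#include <string>
#include <vector>

struct ModuleInvocation {
    std::string executable;                 // legacy-code binary (placeholder name)
    std::vector<std::string> inputFiles;    // generated input decks
    int mpiRanks = 1;
};

// Write a shell script that launches all coupled executables and waits for them.
void writeRuntimeScript(const std::string& path,
                        const std::vector<ModuleInvocation>& plan) {
    std::ofstream script(path);
    script << "#!/bin/bash\n# runtime implementation script (generated)\nset -e\n";
    for (const auto& m : plan) {
        script << "mpirun -np " << m.mpiRanks << " " << m.executable;
        for (const auto& f : m.inputFiles) script << " " << f;
        script << " &\n";                   // coupled executables run concurrently
    }
    script << "wait\n";
}

int main() {
    writeRuntimeScript("run_dcmm.sh",
                       {{"flow_solver", {"fluid.inp"}, 64},
                        {"thermal_solver", {"solid.inp"}, 16}});
}
```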
Finally, this script spawns the executables of the legacy code modules and performs the necessary file system input/output, while being endowed with facilities to handle the idiosyncrasies of the networking and runtime fabrics, thus implementing the respective DCMM.
Work Flow for Physics Formulation Specification m2IDE and Legacy Code Workflow Specification mIDE.
The need for the agile development of the work flow for physics formulation specification meta-meta-IDE and the LCWS-mIDE environments led to a comparative evaluation of available infrastructures for code development.
The implementation language for all metacomputing infrastructure resources was selected to be C++ due to its maturity and the optimization flexibility of the generated binaries across the widest range of computing architectures.
Based on the software development criteria described in Ref. [2], as well as our familiarity and prior experience with it, we also selected the Qt Creator integrated development environment.
A critical component of the required toolchain is the library enabling the development of 2D graphs, as they are necessary for the graphical specification of the multiphysics problem in the WFPFS-m2IDE and of the legacy code workflow in the LCWS-mIDE. Our final choice was the "nodeeditor" library, which already had built-in Qt support.
The core functionality of both the WFPFS-m2IDE and the LCWS-mIDE is common, and for the sake of space we describe it only from the perspective of the WFPFS-m2IDE, where the user defines the physics specification of a problem of interest.
The view of the graphical user interface of the WFPFS-m2IDE is presented in Fig. 17, where the toolbar, the graph, and the text pane areas are shown. The user uses the widgets of the toolbar to create, open, and save files containing the graphs created or modified in the graph pane. The toolbar also contains node editing widgets for copy and paste of nodes, as well as zoom-to-fit, style management, and undo/redo controls. The user uses the graph pane to create nodes with input and output ports, as well as to click and drag for creating port connections depicted as arrows representing the flow of data. Finally, the text pane was added to let the user observe the dynamic creation of the input files required for implementing the runtime configuration governing the execution of the directly computable CMM that corresponds to the physics problem created in the graph pane.

Graphical user interface of the WFPFS-m2IDE for enabling capture of the physics problem specification, depicting the toolbar, graph, and text panes
Figure 17 also shows the specific physics workflow specification of the Conjugate Heat Transfer (CHT) problem discussed in Ref. [2]. The corresponding module connectivity and data flow architecture of the legacy AERO suite modules, as created by the WFPFS-m2IDE for the LCWS-mIDE level, is presented in Fig. 18.

AERO suite modules data flow reflecting the LCWS-mIDE level, corresponding to a hypersonic CHT problem
Conclusions
From the top-down architecture approach perspective, the work presented here attempts to demonstrate that the media we select to develop and express CMMs can limit our ability to solve problems. These selections also limit our ability to see a problem from other semantic and syntactic perspectives that may enable much easier problem solutions and enable direct computability. Consequently, this effort focused primarily on describing the architecture, development, demonstration, and performance evaluation of a metacomputing framework from both a top-down and a bottom-up perspective.
The top-down framework generates the forms of directly computable CMMs that, in turn, can address problem solutions in continuum multiphysics at the computing level. This framework comprises three metacomputing modules. The prototyping of the metacomputing and computing layers associated with these modules was implemented in the Mathematica symbolic environment.
The first metacomputing module is the CMEB, which derives the constitutive and field equations to be solved for particular continuum multiphysics problems at the computing level. The quantities (state variables and relevant operators) are defined within the algebra of reals, ℝ.
The second metacomputing module is the R2hC projector, which expresses field equations derived initially in terms of state variables defined within the algebra of reals as field PDEs and/or algebraic equations with variables defined in the algebra of complex numbers, ℂ, for 2D problems and in the algebra of quaternions, ℍ, for 3D problems. Thus, this module effectively increases the semantic dimensionality of the applicable formulation algebra and reduces complex forms to simpler ones within the scope of hypercomplex algebras. This module also has a computing layer invokable by the user, if desired, for solving specific problems, and it demonstrates directly computable CMM capability. This module has been demonstrated for some 2D problems and needs to be extended further to address 3D problems.
The third metacomputing module is e2g, which converts equational theories expressed as conjunctions of equations written in the traditional 1D form into ASGs, which are DAGs with embedded computability. Solving a problem has been mapped to the operation of following a path between a source node representing the known quantities and a target node representing the unknown quantity to be computed. In this manner, e2g enables directly computable CMMs. Thus, this module effectively increases the syntactic dimensionality of model representations from 1D to 2D and 3D. This module needs to be extended further for 3D problems involving quaternionic quantities to achieve its originally intended functionality.
From the bottom-up architecture approach perspective, the work presented here attempts to demonstrate that when legacy codes are desired for CMM computing, categorical metacomputing on specifications, along with properly designed integrated development environments (IDEs), can enable metacomputing that takes the user out of the iterative role in the context space of CMM computing. In particular, a multilayer metacomputing architecture of a framework was proposed that enables the user to utilize the specifications of the available resources to generate directly computable CMMs with the help of the work flow for physics formulation specification and legacy code workflow specification meta-IDEs.
All the metacomputing facilities presented here are characterized by the unique feature of taking the user out of the loop for constructing, composing, and deploying CMMs, and they do so such that CMMs appear to be directly computable at a fraction of the time required if these technologies had not been utilized in the first place. This recent experience indicates that the opportunities for exploiting metacomputing have only just begun and that they can be generalized and refined further to enable a new dimension of utilization of computational resources and user experience.
Acknowledgment
The authors would like to acknowledge the support of this effort by the Defense Advanced Research Projects Agency under solicitation PA-19-02 via MIPR HR0011046726 and by the Office of Naval Research via the core funding of the US Naval Research Laboratory. JGM would like to express his deep gratitude to Dr. P. W. Mast (NRL, retired), whose unparalleled insight and vision in the 1990s both fed and inspired the thirst for discovering and acting on ideas related to the role that the media of mathematics can have in limiting or benefiting research and development activities associated with CMM representation. Finally, JGM would like to also express his deep appreciation to Dr. R. Badaliance (NRL, retired) for enabling and encouraging JGM's professional focus on the topics related to the present work.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.