
Abstract

Early-stage design of complex systems is considered by many to be one of the most critical design phases because it is where many of the major decisions are made. The design process typically starts with low-fidelity tools, such as simplified models and reference data, but these prove insufficient for novel designs, necessitating the introduction of high-fidelity tools. This challenge can be tackled through the incorporation of multifidelity (MF) models, whose application to design optimization problems represents a developing area of research. This study proposes incorporating compositional kernels into the autoregressive scheme (AR1) of multifidelity Gaussian processes, aiming to enhance predictive accuracy and reduce uncertainty in design space estimation. The effectiveness of this method is assessed by applying it to five benchmark problems and a simplified design scenario of a cantilever beam. The results demonstrate a significant improvement in prediction accuracy and a reduction in prediction uncertainty. Additionally, the article offers a critical reflection on scaling up the method and on its applicability to the early-stage design of complex engineering systems, providing insights into its practical implementation and potential benefits.

1 Introduction

Early-stage design of complex engineering systems is critical since most of the major design decisions are made at this stage, while engineers have minimal knowledge about the design. The importance of the early design stages has been recognized in various engineering fields, including ship [1] and aircraft design [2]. More specifically, design decisions that dictate the vessel’s overall configuration reduce design freedom and commit a significant portion of the overall cost. Therefore, engineers must conduct comprehensive design space exploration to identify design trends and trade-offs, leading to informed design decisions.

Throughout the design process, engineers employ a range of analysis methods with varying fidelities to evaluate different designs. At opposite ends of the spectrum, high-fidelity (HF) methods, often based on first principles, provide highly accurate results but entail significant computational expenses, while low-fidelity (LF) methods yield less accurate results while requiring lower computational costs. Traditionally, engineers have used LF methods and tools to explore the broad multidimensional design space of complex systems [2]. These methods include simplified physical models, empirical and semi-empirical methods [3], and trends based on historical data [4]. Grounded in accumulated engineering experience, these methods are suitable for conventional designs, for which substantial knowledge has already been gained.

However, nowadays, there is a drive toward innovative designs driven by the need to address challenges such as the integration of autonomous systems [5], enhancing safety [6], and promoting sustainability [7]. This necessitates the design of systems with capabilities that surpass our current state-of-the-art designs. Some examples of novel hull designs in naval architecture are the tumblehome wave-piercing hull [8] employed in the Zumwalt-class destroyer, the aluminum trimaran design [9] used for the USS Independence frigate, the AXE bow [10] used in various yacht designs, and the asymmetric hull configuration found in the Baltika icebreaker [11]. In aerospace, the Flying-V [12] represents an innovative aircraft design that integrates the fuselage into the wing structure, aiming to enhance its performance.

Incorporating new features into systems introduces added uncertainty, primarily because engineers lack prior experience with these unique systems [13]. This introduced uncertainty needs to be quantified and taken into account in order to support decision-making [14]. Thus, in order to mitigate this heightened uncertainty, it becomes important to introduce HF analysis at an earlier stage in the design process. This is essential because relying solely on LF models and tools to examine the design space may result in designs that are infeasible or suboptimal when the underlying physics is not appropriately captured [15]. Additionally, in the case of ship design, engineers rely on established rules and guidelines prescribed by classification societies to guide their design practices. However, when dealing with novel ships featuring unconventional shapes and sizes, blindly following the class society formulations proves insufficient [16], as these guidelines are primarily based on conventional ship knowledge. A similar challenge is present across different design domains. Currently, advanced HF models are able to capture the underlying physics that dictate a design’s performance, yet they require a significant amount of computational resources, human input, and extended lead times [17,18]. Consequently, it is unrealistic to evaluate a sufficient number of designs using expensive HF methods in order to construct an approximation of the design space. Therefore, a significant challenge in the optimization of vessel design is efficiently increasing the modeling accuracy required to capture the crucial physics that restricts or enables a specific concept [19].

To address these issues, a promising approach is to construct multifidelity (MF) models, which are defined as models combining various fidelities [20]. According to Ref. [19], models are considered multifidelity when there is synergistic use of different mathematical descriptions (i.e., different physics typically represented by different governing equations, boundary conditions, or parametric attributions) in the analysis or design procedure. The goal of MF models is to achieve the computational efficiency of LF models while maintaining the accuracy of the HF model [21]. MF models are state-of-the-art for solving complex problems; extensive reviews of the various MF techniques can be found in Refs. [20,21]. MF models have demonstrated notable achievements across various domains such as engineering design and analysis, and applied mathematics. They have been used to tackle diverse problems, encompassing solving partial differential equations [22], aiding in complex analyses like estimating the wave-induced vertical bending moment on ships [23,24], and supporting design applications [25].

Several state-of-the-art methods for design applications have been extended to an MF scheme. Examples of such methods include Monte Carlo (MC), Gaussian processes (GPs), and neural networks (NNs). For example, Ng and Willcox [26] proposed an MF estimator based on the control variate MC method. Diverse methods exist in the literature for constructing MF NN architectures. For instance, Meng and Karniadakis introduced a composite NN leveraging MF data: the first NN approximates the LF data, while the second and third NNs model the correlation between the low- and high-fidelity data [27]. Examples of different approaches for building MF-NN architectures can be found in Refs. [28,29]. According to the findings reported in Guo et al. [29], the co-Kriging model demonstrated superior performance compared to the MF-NN architectures in the case of the Forrester function. Conversely, in the case of more complex functions such as the Jump Forrester function, the opposite trend was observed [29]. To estimate the uncertainty associated with the prediction, Meng et al. [30] introduced an MF Bayesian NN framework. Within this framework, uncertainty was estimated by predicting the uncertainty of learning the correlation between the low- and high-fidelity data. GPs have found extensive application in design problems. Their primary advantage lies in their ability to provide uncertainty quantification for predictions, which can be effectively used in the context of design optimization. Another advantage is their effectiveness in addressing problems characterized by small data regimes.

Despite the ongoing developments in MF models, there are still significant areas in design applications that remain unexplored. For example, for multidisciplinary design problems, Mainini et al. [15] argue that there is no mathematical framework that is capable of determining (1) which design disciplines, (2) the degree of coupling for analysis tools, (3) the level of accuracy necessary to capture the crucial physics of a specific design, (4) where the data is best collected, and (5) how to make optimal design decisions with limited computational resources. Furthermore, Peherstorfer et al. [21] emphasize that for design frameworks, it is crucial to construct frameworks that do not solely focus on models but include additional information sources, so that decision-makers can effectively utilize a wide range of available information.

In design applications, a primary challenge lies in the necessity for a larger HF dataset to attain precise predictions, particularly for complex and higher-dimensional problems. Expanding the HF dataset poses difficulties since each data point is a product of computationally expensive analyses or, in some cases, physical experiments. Consequently, the acquisition of a substantial HF dataset is constrained by the limited computational budget available. To address this challenge, the present work proposes the integration of compositional kernels in a design framework based on the autoregressive scheme of MF GPs. The main idea is that a more accurate approximation of the design space can be attained by utilizing knowledge about the underlying structure of the design space revealed by the compositional kernels. The goal is to build a framework where fewer HF evaluations are necessary to create an accurate MF model, resulting in a reduction in computational cost. The proposed method has been applied to five benchmark problems, as well as to a cantilever beam design problem as a first step toward its application to early-stage design of complex engineering systems such as naval vessels.

Section 2 provides an overview of relevant work on the exploration of design space structure and uncertainty quantification for design applications. Section 3 provides the technical background of the proposed method. Section 4 examines the results from the conducted case studies. Finally, Sec. 5 outlines the conclusions derived from the research and offers recommendations for future research.

2 Relevant Works

The article addresses early-stage design exploration of complex systems by introducing an MF-GP-based method. In this section, the fundamental concepts of the topic are discussed individually, and relevant studies are presented. These concepts include exploration of design space’s structure, uncertainty quantification in design exploration, and GPs.

2.1 Design Space: Exploration of Its Structure.

Early-stage design of complex systems deals with multidimensional design spaces, and this forms a significant challenge. In practical applications, the design space often involves a large number of design variables. Therefore, the scalability of modeling methods becomes a crucial consideration. For instance, the design variables can range from a few dozen, as seen in the hydrostructural optimization of hydrofoils with a 17-dimensional design space [31], to thousands of variables in the case of an aerostructural optimization benchmark problem for commercial transport aircraft [32]. Another important challenge to address during this design stage is the quantification of uncertainty (UQ). As highlighted by Ref. [14], “UQ creates value to the extent that it holds the possibility of changing a decision that would otherwise be made differently.”

The literature has examined numerous approaches aimed at facilitating design exploration. For example, Di Fiore et al. [33] proposed incorporating both information extracted from data and domain knowledge to facilitate the conceptual design of re-entry vehicles. Furthermore, Singh and Willcox [34] developed a framework grounded in Bayesian statistics and decision theory. This framework integrates information from different stages of a product’s lifecycle to enhance decision-making in the design process. The method proposed in the present article is founded on the premise that the design space under investigation possesses a specific structure, which can be uncovered and leveraged to enhance the efficiency and effectiveness of design exploration.

The idea of uncovering the patterns within the design space has been studied by Melati et al. [35]. The researchers proposed a machine learning-based approach rooted in pattern recognition to effectively map and characterize the multidimensional design space of nanophotonic components. Through pattern recognition techniques, the authors successfully unveiled relationships among an initial sparse set of optimized designs, thereby reducing the number of characterized variables.

In the context of GPs, the covariance matrix via the kernel function can be used to define patterns in the design space. The efficacy of employing an appropriate kernel function for Bayesian optimization was demonstrated by Moss et al. [36]. In their work, the authors introduced a Bayesian optimization method for raw strings that seamlessly incorporates a string kernel, showcasing the power and effectiveness of this approach. In addition, the study conducted by Palar et al. [37] explores the potential of composite kernel learning and model selection in enhancing the accuracy of bi-fidelity GPs.

While the main focus of the current work aligns with the research of Palar et al. [37], the approach to defining the kernel functions differs. Palar et al. [37] build the compositional kernels as a weighted sum of basis kernels, with the weights treated as hyperparameters. In contrast, the present study proposes an optimization routine where the kernel functions of the different fidelities are built sequentially. Thus, the kernel function for the ith fidelity is built based on the kernel function of the lower fidelity model i−1. The proposed method can be extended to a problem setup with s fidelities.

2.2 Uncertainty Quantification in Design Exploration.

UQ is a widely discussed topic within the realm of design applications [38]. Defining uncertainty poses challenges since it relates to a lack of knowledge. Uncertainty is commonly categorized into two types: aleatory and epistemic [39]. Aleatory uncertainty pertains to natural randomness that is inherent in the system, such as the outcomes of tossing dice or drawing cards, and thus it cannot be reduced or eliminated. On the other hand, epistemic uncertainty arises from a lack of knowledge or information. This type of uncertainty can be reduced as one gains a better understanding of the problem through further investigation and learning. However, there are arguments asserting that uncertainty can only be epistemic. As described in Ref. [40], uncertainty is an inherent aspect of the human brain and is not an inherent property of the external world. Probability, serving as a measure of uncertainty, reflects an individual’s state of mind rather than an absolute state of affairs. In this article, we follow the common categorization into aleatory and epistemic uncertainty.

A typical example of aleatory uncertainty in the context of hydrodynamic analysis is the probabilistic formulation of ocean waves. Due to the inherent randomness and variability of wave conditions, a probabilistic approach is essential to adequately address the associated uncertainties in the hydrodynamic analysis. On the other hand, Mavris et al. [2] discuss several examples of epistemic uncertainties that are pertinent to early-stage design, including the treatment of assumptions, ambiguous requirements, the fidelity of different codes, economic uncertainties, and technological risks. Furthermore, according to Collette [41], the key epistemic uncertainties for ship structures involve operational profiles and behavior, model uncertainty, and the influence of the human engineer. Hence, within the domain of early-stage design for complex engineering systems, engineers confront both aleatory and epistemic uncertainties. In certain instances, as highlighted by Ref. [41] in the context of ship structures, numerous studies tend to simplify or overlook epistemic uncertainties.

UQ can be used to improve early-stage design exploration of novel and complex systems. First, UQ can be used to make more informed and optimal design decisions [38,42]. Second, when dealing with innovative concepts, there is an inherent introduction of additional uncertainty into the design exploration problem. The additional uncertainty arises due to the inherent lack of knowledge concerning the performance of such systems. A real-world example in industries like automotive and aerospace involves the use of prototyping to acquire additional insights into the performance of new engineering systems. It is noteworthy to emphasize that, particularly in the domain of ship design, the large physical dimensions, the complexity, and the fact that ships are not built in large series render the construction of full-scale prototypes unfeasible [1]. Therefore, it becomes important to consider and account for this uncertainty in order to effectively navigate the design space and make reliable design decisions.

A viable mathematical framework for addressing epistemic uncertainty lies within the family of Bayesian methods. Bayesian methods possess an inherent capacity to quantify uncertainty arising from our limited knowledge. This is because Bayesian statistics quantify the degree of belief in the truth of a particular proposition, rather than the frequency of event occurrence. For a more comprehensive discussion on the distinction between frequentist and Bayesian statistics, readers are referred to Ref. [39].

2.3 Gaussian Processes.

The present article focuses on GPs, a subset of Bayesian methods. In general, a GP is a collection of random variables such that any subset of these variables is jointly Gaussian [43]. The MF schemes of GPs incorporate data obtained from various fidelities. One of the schemes is the linear autoregressive scheme (AR1) proposed by Kennedy and O’Hagan [44]. In addition to this, two nonlinear schemes have been introduced: the nonlinear autoregressive Gaussian process (NARGP) proposed by Perdikaris et al. [45] and the deep Gaussian processes (deep GPs) proposed by Damianou and Lawrence [46]. GPs and MF-GPs have demonstrated their effectiveness in addressing design optimization problems across different engineering fields such as aircraft design [47], ship design [48], and materials design [49]. Their popularity largely comes from the fact that they are well suited for small data regimes [50], which aligns them with the problem of early-stage exploration of novel systems, where there is inherently limited data available. In contrast to AR1, which assumes a linear dependency between the fidelities, NARGP and deep GPs are capable of capturing more complex nonlinear dependencies among the fidelities. A general trend, based on the aerospace-related engineering problems examined in Ref. [51], is that linear schemes exhibit superior performance in scenarios with limited data compared to nonlinear schemes, which require a larger amount of data for effective training. The present research targets engineering problems in the small data regime; thus, the AR1 scheme was selected as the basis for this work.

While design optimization has seen successful applications, ongoing research efforts are necessary to fully harness the potential of GPs for early-stage exploration purposes. This study investigates the concept of exploring the structure of the design space with the aim of leveraging this information to enhance prediction accuracy and reduce uncertainty. Such an approach holds potential for design applications in domains such as ship and aircraft design, where HF analyses, including computational fluid dynamics computations and experiments, can be significantly (computationally) expensive. Consequently, reducing the number of HF simulations would yield benefits for tackling such problems. The article builds upon the work of Charisi et al. [52] by expanding the model beyond the bi-fidelity case. This is achieved by developing compositional kernels for each fidelity level sequentially, starting from lower to higher fidelity. Furthermore, extensive analysis of the model’s performance is conducted across multiple case studies involving analytical benchmark problems with varying attributes, enhancing our understanding of its applicability to diverse design problems.

3 Methods

This section contains the technical details of the proposed method in Sec. 3.1, which is composed of two primary components: (1) the MF-GPs and (2) the compositional kernels. The mathematical formulation of GPs and MF-GPs is provided in Sec. 3.2, while the optimization process for the compositional kernels is described in Sec. 3.3.

3.1 Proposed Method.

The authors propose integrating compositional kernels into the linear autoregressive scheme (AR1) to facilitate design exploration. The integration of the compositional kernels aims to capture the shape of the underlying HF design space, with the goal of making improved predictions with less HF analysis data.

The core concept revolves around seeking the optimal compositional kernel, comprising kernels that effectively capture distinct characteristics of the design space, such as linear or periodic patterns. Constructing these compositional kernels involves solving a discrete optimization problem. The optimization of the compositional kernel is guided by the analysis data utilized as the training set for the MF-GP.

Let us assume that the design problem involves models with fidelities ranging from 1 to s, where fidelity 1 represents the lowest fidelity model and fidelity s represents the highest fidelity model. The first step is to build a compositional kernel for the data of the lowest fidelity (fidelity 1) based on a single fidelity GP model (technical details in Sec. 3.2). Then, for each fidelity i ranging from 2 to s, a compositional kernel is built based on the bi-fidelity GP model (technical details in Sec. 3.2) using the fidelity i data and the fidelity i−1 data. The process is summarized in Algorithm 1 and Fig. 1.

Fig. 1: Flowchart of the proposed method

Algorithm 1: Compositional Kernel Optimization for MF-GPs

Input: [X_i^{(j)}, Y_i^{(j)}], V_f, k;   /* training data: i ∈ [1, N_j], j ∈ [1, s]; vector of fidelities; maximum number of basis kernels */
Output: V_k^{opt};   /* vector of optimal compositional kernels */

 1  S ← {k_1, k_2, …, k_n};   /* where k_i are the basis kernel functions */
 2  S_o ← {addition, multiplication};
 3  for l := 1 to k do
 4      V_1 ← AllCombinations(S, l);
 5      V_2 ← AllCombinations(S_o, l − 1);
 6      for i := 1 to length(V_1) do
 7          for j := 1 to length(V_2) do
 8              apply the operations described in V_2[j] to the functions in V_1[i] to build k_comp^{ij};
 9              V_k ← V_k ∪ {k_comp^{ij}};
10          end
11      end
12  end
13  for f in V_f do
14      if f = min(V_f) then
15          for k_comp^{ij} in V_k do
16              build a SF GP model using [X_{i=1,…,N_f}^{(f)}, Y_{i=1,…,N_f}^{(f)}];
17              calculate the BIC from Eq. (19);
18          end
19      else
20          for k_comp^{ij} in V_k do
21              build an MF-GP model using [X_{i=1,…,N_f}^{(f)}, Y_{i=1,…,N_f}^{(f)}] ∪ [X_{i=1,…,N_{f−1}}^{(f−1)}, Y_{i=1,…,N_{f−1}}^{(f−1)}];
22              calculate the BIC from Eq. (19);
23          end
24      end
25      find the optimal kernel k_comp^{opt} with the minimum BIC value;
26      V_k^{opt} ← V_k^{opt} ∪ {k_comp^{opt}};
27  end
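To make the enumeration step (lines 1–12 of Algorithm 1) concrete, the following is a minimal sketch in Python, assuming the GPy package used in the case studies (Sec. 4); the helper names basis_kernels and build_candidate_kernels are illustrative, not part of the original implementation.

```python
import itertools
import GPy

def basis_kernels(d):
    """Constructors for a subset of the basis kernels of Sec. 3.3 on a
    d-dimensional input (the Brownian kernel is 1D-only in GPy)."""
    return [
        lambda: GPy.kern.Exponential(d),
        lambda: GPy.kern.RBF(d),          # squared exponential
        lambda: GPy.kern.Linear(d),
        lambda: GPy.kern.White(d),        # white noise
        lambda: GPy.kern.Matern32(d),
        lambda: GPy.kern.Matern52(d),
        lambda: GPy.kern.Bias(d),         # constant
        lambda: GPy.kern.StdPeriodic(d),  # periodic
    ]

def build_candidate_kernels(d, k_max):
    """Enumerate all compositions of up to k_max basis kernels (lines 1-12)."""
    candidates = []
    for l in range(1, k_max + 1):
        for combo in itertools.combinations(basis_kernels(d), l):    # V_1
            for ops in itertools.product(['+', '*'], repeat=l - 1):  # V_2
                kern = combo[0]()
                for op, make in zip(ops, combo[1:]):
                    kern = kern + make() if op == '+' else kern * make()
                candidates.append(kern)
    return candidates
```

Since the search is exhaustive, the number of candidates grows rapidly with the number of basis kernels per composition, which is why this number is kept small.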

3.2 Gaussian Processes: From the Single Fidelity to the Multifidelity Scheme.

The mathematical formulation for the GPs is taken from Ref. [43]. A GP is defined as “a collection of random variables, any finite number of which have a joint Gaussian distribution, and it is fully characterized by its mean and covariance function” [43]. GPs are used to build approximations of real-world processes f(x), which can be fully defined by a mean \mu(x) and a covariance function k(x, x') according to Eqs. (1)–(3):

f(x) \sim \mathcal{GP}\big(\mu(x),\, k(x, x')\big)   (1)

\mu(x) = \mathbb{E}[f(x)]   (2)

k(x, x') = \mathbb{E}\big[(f(x) - \mu(x))(f(x') - \mu(x'))\big]   (3)
The available analysis or experimental data can be described according to Eq. (4):
y = f(x) + \varepsilon   (4)
where f represents the function to be approximated and \varepsilon represents the error term. GPs belong to the family of Bayesian methods. For Bayesian methods, a critical element of the analysis is the prior distribution. The prior distribution encodes our prior knowledge or assumptions regarding the unknown function f. The joint prior distribution of the observed data X and the test data X_* is given by Eq. (5):

\begin{bmatrix} \mathbf{f} \\ \mathbf{f}_* \end{bmatrix} \sim \mathcal{N}\left(\mathbf{0},\, \begin{bmatrix} K(X, X) & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix}\right)   (5)
where \mathbf{f}_* are the function values evaluated at the test locations X_*. A common practice is to assign the prior a zero mean [43], since data can be normalized to have a zero mean, and a kernel function K_{ij} = k(x_i, x_j; \theta). In Bayesian learning, the prior distribution is revised by incorporating the observed data, resulting in the formation of the posterior distribution. Mathematically, the prior distribution is conditioned on the observed data to form the posterior distribution according to Eqs. (6)–(8):

f_* \mid X, \mathbf{y}, x_* \sim \mathcal{N}\big(\bar{f}_*,\, \operatorname{var}(f_*)\big)   (6)

\bar{f}_* = \mathbf{k}_*^{T} K^{-1} \mathbf{y}   (7)

\operatorname{var}(f_*) = k_{**} - \mathbf{k}_*^{T} K^{-1} \mathbf{k}_*   (8)
where K = K(X, X), k_{**} = k(x_*, x_*), and \mathbf{k}_* = k(X, x_*) is the vector of covariances between the test point and the training points. There are various methods to optimize the kernel hyperparameters, such as cross-validation and maximum likelihood estimation [53]. In the present work, maximization of the marginal log-likelihood was applied. The marginal log-likelihood is defined according to Eq. (9):

\log p(\mathbf{y} \mid X, \theta) = -\frac{1}{2}\mathbf{y}^{T} K^{-1} \mathbf{y} - \frac{1}{2}\log|K| - \frac{n}{2}\log 2\pi   (9)
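For illustration, Eqs. (6)–(9) translate directly into a few lines of numpy; the following is a minimal sketch assuming a noise-free GP with a zero-mean prior, where kernel is any function implementing k(x, x') between two sets of points.

```python
import numpy as np

def gp_posterior(X, y, X_star, kernel, jitter=1e-8):
    """Posterior mean/covariance (Eqs. (6)-(8)) and log marginal likelihood (Eq. (9))."""
    K = kernel(X, X) + jitter * np.eye(len(X))  # K(X, X), regularized for stability
    k_s = kernel(X, X_star)                     # k_*: train-test covariances
    k_ss = kernel(X_star, X_star)               # k_**: test-test covariances
    K_inv_y = np.linalg.solve(K, y)
    mean = k_s.T @ K_inv_y                          # Eq. (7)
    cov = k_ss - k_s.T @ np.linalg.solve(K, k_s)    # Eq. (8)
    n = len(X)
    _, logdet = np.linalg.slogdet(K)
    log_ml = (-0.5 * y.T @ K_inv_y - 0.5 * logdet
              - 0.5 * n * np.log(2.0 * np.pi))      # Eq. (9)
    return mean, cov, log_ml
```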
The autoregressive scheme proposed by Kennedy and O’Hagan [44] assumes a linear dependency between the various fidelity models. It is assumed that there are s levels of code fidelity (f_t(x))_{t=1,\dots,s}, modeled by GPs (F_t(x))_{t=1,\dots,s}, where x \in U \subset \mathbb{R}^d. The code fidelity increases from 1 to s; thus, f_s is the most accurate model. The mathematical formulation follows the description in Ref. [54]. The model is based on the Markov property, described in Eq. (10), which states that given the nearest point F_{t-1}(x), we can learn nothing more about F_t(x) from any other F_{t-1}(x') with x' \neq x. This assumption leads to the autoregressive model.

\operatorname{cov}\big\{F_t(x),\, F_{t-1}(x') \mid F_{t-1}(x)\big\} = 0, \quad \forall\, x' \neq x   (10)
The submodels are connected according to Eq. (11). The higher fidelity function connects to the lower fidelity function via a scaling function \rho_{t-1} (Eq. (13)) and an additive function \delta_t (Eq. (14)). The scaling function \rho_{t-1} determines the scale factor and the correlation degree between two successive levels of code. The function \delta_t is a Gaussian process independent of F_{t-1}(x) (Eq. (12)). The lowest fidelity function F_1 is described by Eq. (15).

F_t(x) = \rho_{t-1}(x)\, F_{t-1}(x) + \delta_t(x), \quad t = 2, \dots, s   (11)

\delta_t(x) \perp F_{t-1}(x'), \quad \forall\, x, x'   (12)

\rho_{t-1}(x) = g_{t-1}(x)^{T} \beta_{\rho_{t-1}}   (13)

\delta_t(x) \sim \mathcal{GP}\big(\mu_t(x)^{T} \beta_t,\; \sigma_t^2\, r_t(x, x')\big)   (14)

F_1(x) \sim \mathcal{GP}\big(\mu_1(x)^{T} \beta_1,\; \sigma_1^2\, r_1(x, x')\big)   (15)
where g_{t-1}(x) is a vector of q_{t-1} regression functions, r_t(x, x') is a correlation function, \mu_t(x) is a vector of p_t regression functions, \beta_t is a p_t-dimensional vector, \beta_{\rho_{t-1}} is a q_{t-1}-dimensional vector, and \sigma_t^2 is a positive real number. The trend parameters are denoted as \beta = (\beta_1^T, \dots, \beta_s^T)^T, the adjustment parameters are represented as \beta_\rho = (\beta_{\rho_1}^T, \dots, \beta_{\rho_s}^T)^T, and the variance parameters are expressed as \sigma = (\sigma_1, \dots, \sigma_s). The predictive model of the highest fidelity response f_s is calculated according to Eqs. (16)–(18):

F_s(x) \mid \mathcal{F}^{(s)}, \beta, \sigma^2 \sim \mathcal{N}\big(m_{F_s}(x),\; v_{F_s}^2(x)\big)   (16)

m_{F_s}(x) = h^{(s)}(x)^{T} \beta + t_s(x)^{T} V_{(s)}^{-1}\big(\mathcal{F}^{(s)} - H^{(s)} \beta\big)   (17)

v_{F_s}^2(x) = \sigma_{F_s}^2(x) - t_s(x)^{T} V_{(s)}^{-1}\, t_s(x)   (18)
where V_{(s)} represents the covariance matrix of \mathcal{F}^{(s)}, t_s(x) denotes the vector of covariances between F_s(x) and \mathcal{F}^{(s)}, H^{(s)}\beta stands for the mean of \mathcal{F}^{(s)}, h^{(s)}(x)^{T}\beta is the mean of F_s(x), and v_{F_s}^2(x) expresses the variance of F_s(x). For further details regarding the mathematical formulations, the reader is referred to the original papers [44,54]. The hyperparameters are optimized in the same way as for single fidelity GPs, by maximizing the marginal log-likelihood (Eq. (9)).
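In practice, the AR1 scheme is available off the shelf. The following is a minimal bi-fidelity sketch using the Emukit and GPy packages employed in this work (Sec. 4); the synthetic 1D data and the RBF kernels are illustrative placeholders, where the proposed method would instead use the BIC-optimal compositional kernels of Algorithm 1.

```python
import numpy as np
import GPy
from emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays
from emukit.multi_fidelity.kernels import LinearMultiFidelityKernel
from emukit.multi_fidelity.models import GPyLinearMultiFidelityModel
from emukit.model_wrappers.gpy_model_wrappers import GPyMultiOutputWrapper

# Illustrative 1D training data: 25 LF and 5 HF points.
X_lf, X_hf = np.random.rand(25, 1), np.random.rand(5, 1)
Y_lf, Y_hf = np.sin(8.0 * X_lf), np.sin(8.0 * X_hf) + 0.3 * X_hf

# Stack both fidelities into Emukit's augmented format (fidelity index as last column).
X_train, Y_train = convert_xy_lists_to_arrays([X_lf, X_hf], [Y_lf, Y_hf])

# One kernel per fidelity level.
mf_kernel = LinearMultiFidelityKernel([GPy.kern.RBF(1), GPy.kern.RBF(1)])
model = GPyLinearMultiFidelityModel(X_train, Y_train, mf_kernel, n_fidelities=2)

wrapped = GPyMultiOutputWrapper(model, 2, n_optimization_restarts=1)
wrapped.optimize()  # maximizes the marginal log-likelihood (Eq. (9))

# Predict at the highest fidelity (fidelity index 1 in the last column).
X_plot = np.hstack([np.linspace(0, 1, 50)[:, None], np.ones((50, 1))])
mean, var = wrapped.predict(X_plot)
```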

3.3 Compositional Kernels.

The kernel, a measure of similarity between data points [43], incorporates the prior beliefs and knowledge about the function f. Kernel validity demands symmetry and positive semi-definiteness. Previous studies have produced basis functions for constructing valid covariance matrices, such as the periodic kernel for modeling repeating functions [55].

Duvenaud et al. [56] introduced compositional kernels, which are formed by combining a limited number of basis kernels through addition or multiplication. The idea of the method was to decompose the function to be learned into interpretable components.

For constructing the compositional kernels, a set of basis kernel functions was determined. The set included the exponential and squared exponential kernels, the linear kernel, the Brownian kernel, the white noise kernel, the Matérn 3/2 and Matérn 5/2 kernels, the constant kernel, and the periodic kernel. When selecting a limited number of basis functions to compose the compositional kernel, an exhaustive search method was employed. More specifically, vector V_1 contains all possible combinations of k basis kernels out of the nine possible functions. Vector V_2 contains all possible combinations of operations (addition, multiplication) for the k basis kernels. Each combination of operations described in V_2 is applied to every element of V_1. This process yields a final vector V_3. Each element of V_3 is assessed based on the Bayesian information criterion (BIC), as proposed in the original paper [56]. BIC is defined according to Eq. (19):

\mathrm{BIC} = k_{hyp} \ln(n) - 2 \ln(L)   (19)
where n is the number of training data, khyp is the number of hyperparameters, and L is the maximized likelihood value. BIC consists of two components, a penalty term based on the number of model parameters and a term based on the likelihood function. The benefit of using BIC over maximizing the marginal log-likelihood lies in its consideration of the kernel function’s complexity. By favoring functions with fewer hyperparameters, BIC helps prevent overfitting. The main idea of building the compositional kernel is shown in Algorithm 1.
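As an illustration, the BIC score of a candidate kernel can be computed with GPy as sketched below, assuming Eq. (19) in the form BIC = k_hyp ln(n) − 2 ln(L) and taking the number of hyperparameters as the size of the kernel's parameter vector (a simplification; the likelihood noise variance could also be counted).

```python
import numpy as np
import GPy

def bic_score(X, Y, kernel):
    """Fit a single-fidelity GP with the given kernel and return its BIC (Eq. (19))."""
    model = GPy.models.GPRegression(X, Y, kernel)
    model.optimize(messages=False)          # maximize the marginal log-likelihood
    n = X.shape[0]                          # number of training points
    k_hyp = kernel.param_array.size         # number of kernel hyperparameters
    log_L = model.log_likelihood()          # maximized log marginal likelihood
    return k_hyp * np.log(n) - 2.0 * log_L  # smaller BIC = better kernel
```

Scoring every candidate returned by the enumeration of Algorithm 1 and keeping the minimizer then yields the optimal compositional kernel for a fidelity level.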

4 Case Studies

This section presents the case studies and the results. The case studies considered encompass a range of analytical benchmark problems proposed by Ref. [15], as well as an engineering problem involving a cantilever beam. The case studies involving the analytical functions are the following:

  • the Forrester function (used for conceptualization)

  • the Jump Forrester function

  • the ND Rosenbrock function

  • the Heterogeneous function

  • the 2D shifted-rotated Rastrigin function

Following a notation similar to that of the previous sections, the models are numbered as follows: the HF model is labeled as s, while the LF models are sequentially numbered from 1 to s−1, where 1 represents the lowest fidelity among the LF models. The Python packages used included GPy [57] and Emukit [58]. The Latin hypercube sampling (LHS) technique was employed for the selection of analysis points, and a total of 20 different design of experiments (DoEs) were used to calculate statistics pertaining to prediction errors. Throughout the case studies, the reference model refers to the AR1 model with the squared exponential kernel, whereas the proposed model refers to the AR1 model with compositional kernels. The prediction error was evaluated using two measures, namely, R^2 and the normalized root-mean-square error (RMSE), which are expressed according to Eqs. (20) and (21):
R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}   (20)

\mathrm{RMSE} = \frac{1}{y_{\max} - y_{\min}} \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2}   (21)
where y_i refers to the observed value of each data point i, \hat{y}_i refers to the predicted value of each data point i, \bar{y} refers to the mean of the observed values for all the data points, and N refers to the total number of samples. Both errors are used to give a more thorough understanding of the quality of the predictions. More specifically, R^2 measures the proportion of variability in a dependent variable that can be captured by using the independent variables [59]. In the context of linear models, this measure provides good intuition, as its value ranges from 0 to 1 [60]. For nonlinear models, such as GPs, R^2 is still defined, but it is not confined to the range (0, 1) [61]. A negative value indicates that the model performs worse than simply predicting the mean of the observed values. The metric R^2 alone is inadequate for fully assessing the models’ performance; therefore, it was used in conjunction with the RMSE, which assesses the accuracy of the model in terms of the residual error.
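Both measures are straightforward to compute; the following is a minimal numpy sketch, assuming the RMSE is normalized by the range of the observed values (Eq. (21)).

```python
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot                      # Eq. (20); can be negative

def normalized_rmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (np.max(y_true) - np.min(y_true))   # Eq. (21), range-normalized
```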

4.1 Baseline Example: The Forrester Function.

The primary objective of this particular case study, as opposed to the others, is to facilitate comprehension and visualization of the concept of exploring the shape of the design space. For this investigation, we adopt the Forrester function as described in Eqs. (22) and (23) to represent the design space.
f_{HF}(x) = (6x - 2)^2 \sin(12x - 4)   (22)

f_{LF}(x) = 0.5\, f_{HF}(x) + 10(x - 0.5) - 5   (23)

This case discusses and demonstrates the benefit of compositional kernels in early design exploration. For this analysis, 5 HF and 25 LF analysis data points were used. Figure 2 shows the LF approximation represented by the blue line, the HF approximation depicted in orange, and the prediction illustrated by the black dashed line. As shown in Fig. 2(a), the prediction using the AR1 scheme with the squared exponential kernel and the given observational data does not effectively model the HF function. On the other hand, as shown in Fig. 2(b), the prediction using the AR1 scheme with the optimized kernel and the given observational data gives an accurate prediction of the HF function, which represents the design space. In this instance, the kernel for the LF data was represented as a product of a linear kernel and a white noise kernel, while the squared exponential kernel was employed for the HF data. This specific case visually demonstrates that the additional information provided by the compositional kernel about the structure of the HF function can improve the performance of the framework. Thus, it is possible to make more accurate predictions with less HF data, thereby reducing the required computational cost. Clearly, the proposed approach incurs additional computational cost for developing the compositional kernels; however, the computational time and cost of running the MF analysis are less than those of obtaining additional HF computational or physical experimental data.
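For reference, the two fidelity levels are only a few lines of code; a minimal sketch of the standard Forrester pair of Eqs. (22) and (23):

```python
import numpy as np

def forrester_hf(x):
    """High-fidelity Forrester function, Eq. (22), on x in [0, 1]."""
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def forrester_lf(x):
    """Low-fidelity approximation, Eq. (23)."""
    return 0.5 * forrester_hf(x) + 10.0 * (x - 0.5) - 5.0
```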

Fig. 2: Forrester function using 5 HF and 25 LF data: (a) reference model: RBF kernel and (b) proposed model: optimized kernel

4.2 Addressing Discontinuities: The Jump Forrester Function.

The Jump Forrester function is a variation of the Forrester function that introduces a discontinuity. It is described by Eqs. (24) and (25).
(24)
(25)

In this particular bi-fidelity case study, we used a total of 25 LF points while varying the number of HF points in the range of 5–15. The Latin hypercube sampling method was employed to generate 20 different datasets to determine the statistics of the error measures. The results are presented in Tables 1 and 2.

Table 1: Error measures calculated for the Jump Forrester function, varying the number of HF points

| DoE | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- |
| (5,25) | 0.1894 (0.3832) | 0.2409 (0.0569) | 0.6953 (0.4458) | 0.1297 (0.0787) | 0.7435 (0.2294) | 0.1280 (0.0547) |
| (8,25) | 0.5451 (0.2742) | 0.1772 (0.0544) | 0.8704 (0.0545) | 0.0971 (0.0191) | 0.8554 (0.0968) | 0.0996 (0.0316) |
| (10,25) | 0.5052 (0.3329) | 0.1805 (0.0695) | 0.8085 (0.2703) | 0.1058 (0.0574) | 0.8732 (0.1296) | 0.0892 (0.0403) |
| (15,25) | 0.7322 (0.2826) | 0.1235 (0.0706) | 0.9095 (0.0490) | 0.0798 (0.0217) | 0.9384 (0.0682) | 0.0626 (0.0271) |
Table 2: Assessment of the various models for the Jump Forrester function

| DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%) |
| --- | --- | --- | --- |
| (5,25) | 46 | 47 | 1 |
| (8,25) | 45 | 44 | −3 |
| (10,25) | 41 | 51 | 15 |
| (15,25) | 35 | 49 | 22 |

The results demonstrate that both the proposed and reference models exhibit similar performance in the cases of five and eight HF points; however, the proposed model outperforms the reference model when using 10 and 15 HF points. The improvement ranges from 1% to 22% depending on the number of HF points, and it is negative (−3%) in the case of eight HF points. This suggests that, while the proposed model shows the potential for significant advancements in scenarios with limited data, where both the SF model and the reference model struggle to accurately represent the underlying design space, the amount of HF data needs to be adequate to properly capture the function’s structure. The most substantial improvement is observed when using 15 HF points. A representative case is illustrated in Fig. 3. The design space includes 10 HF and 25 LF data points. The SF model in Fig. 3(a) fails to capture the function. The reference model (Fig. 3(b)) is better but not entirely accurate. In contrast, the proposed model (Fig. 3(c)) demonstrates superior accuracy in predicting the function. In this case, the kernel function for the LF data was a multiplication of a linear and a Brownian kernel, while for the HF data, the Matérn 5/2 kernel was chosen. Based on these findings, it can be concluded that neither model adequately captures the discontinuity; however, the proposed model exhibits enhanced capability in capturing the function within the remaining domain.

Fig. 3: Jump Forrester using 10 HF and 25 LF points: (a) GP HF model, (b) reference model, and (c) proposed model

4.3 Scalability: The ND Rosenbrock Function.

As previously mentioned, in practical applications, the design space often involves a large number of design variables. Therefore, the scalability of modeling methods becomes a crucial consideration. To illustrate the performance of the proposed model in such design spaces, the ND Rosenbrock function was employed as a representative test case. The Rosenbrock function was evaluated in various dimensions, ranging from 4D to 20D, and is described by Eqs. (26) and (27):

f_{HF}(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right]   (26)

f_{LF}(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 50\,(x_{i+1} - x_i^2)^2 + (-2 - x_i)^2 \right] - \sum_{i=1}^{D} 0.5\, x_i   (27)

where x_i \in [-2, 2].
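As an illustration of the sampling setup, HF training points for the D-dimensional Rosenbrock function of Eq. (26) can be drawn with Latin hypercube sampling as sketched below, assuming scipy's qmc module; the study generated 20 such independent DoEs in the same manner.

```python
import numpy as np
from scipy.stats import qmc

def rosenbrock_hf(X):
    """HF Rosenbrock, Eq. (26), for rows of X in [-2, 2]^D."""
    return np.sum(100.0 * (X[:, 1:] - X[:, :-1] ** 2) ** 2
                  + (1.0 - X[:, :-1]) ** 2, axis=1)

D, n_hf = 10, 75                              # e.g., the 10D case of Table 3
sampler = qmc.LatinHypercube(d=D, seed=0)
X_hf = qmc.scale(sampler.random(n_hf),        # map [0, 1]^D to [-2, 2]^D
                 [-2.0] * D, [2.0] * D)
Y_hf = rosenbrock_hf(X_hf)[:, None]
```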

In this specific bi-fidelity case study, we systematically increased the volume of HF and LF data in alignment with the number of dimensions. The quantity of HF data ranged from 45 to 125 points, while the LF data varied from 140 to 300 points. Importantly, this approach has no impact on the relative performance of the models, as all three models were trained with identical data volumes. To assess the statistics of the error measures, LHS was employed to generate 20 distinct datasets. The resulting outcomes are presented in Tables 3 and 4. For all three models, the results reveal a decline in performance with increasing dimensionality. This outcome aligns with our expectations, as higher dimensionality brings about greater complexity, necessitating a larger volume of training data to achieve the same level of accuracy. Although the training data were augmented as the problem scaled, the extent of this increase did not compensate for the heightened complexity. The proposed model outperforms the reference model in all the examined cases and shows a significant improvement. It is noteworthy that in the context of 20 dimensions, the reference model proves ineffective in predicting the function (R² = −0.1913), while the proposed model maintains a satisfactory level of accuracy (R² = 0.6381).

Table 3: Error measures calculated for the ND Rosenbrock function, varying the number of HF points

| Dimensions (DoE) | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- |
| 4 (45,140) | 0.2143 (0.2886) | 0.1127 (0.0243) | 0.9685 (0.0108) | 0.0228 (0.0037) | 0.9955 (0.0020) | 0.0085 (0.0018) |
| 6 (55,160) | −0.0092 (0.0368) | 0.1449 (0.0027) | 0.8188 (0.1065) | 0.0594 (0.0155) | 0.9824 (0.0078) | 0.0187 (0.0040) |
| 8 (65,180) | 0.0065 (0.0404) | 0.1247 (0.0026) | 0.6489 (0.1421) | 0.0725 (0.0157) | 0.9399 (0.0315) | 0.0301 (0.0059) |
| 10 (75,180) | −0.01198 (0.0153) | 0.1203 (0.0009) | 0.5530 (0.1844) | 0.0783 (0.0163) | 0.8574 (0.0495) | 0.0447 (0.0067) |
| 15 (100,250) | −0.00224 (0.0186) | 0.1232 (0.0012) | 0.1173 (0.2470) | 0.1143 (0.0176) | 0.7037 (0.1341) | 0.0660 (0.0115) |
| 20 (125,300) | 0.0032 (0.0216) | 0.1402 (0.0015) | −0.1913 (0.1690) | 0.1523 (0.0122) | 0.6381 (0.0751) | 0.0838 (0.0083) |
Table 4: Assessment of the various models for the ND Rosenbrock function

| Dimensions | DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%) |
| --- | --- | --- | --- | --- |
| 4 | (45,140) | 80 | 92 | 63 |
| 6 | (55,160) | 59 | 87 | 68 |
| 8 | (65,180) | 42 | 76 | 58 |
| 10 | (75,200) | 35 | 63 | 43 |
| 15 | (100,250) | 7 | 46 | 42 |
| 20 | (125,300) | 8 | 40 | 45 |

A well-known challenge when using Gaussian processes is that the computational complexity of training a GP model is O(N³), where N represents the number of data points [62]. This cubic complexity poses challenges when dealing with large datasets or high-dimensional design spaces, and it significantly impacts the procedure of constructing compositional kernels, making it computationally expensive for high-dimensional input spaces. Figure 4 displays the escalating computational costs plotted against the dimensions of the function. To give an indication of this increase, the average time for building the compositional kernels for the 4D Rosenbrock function was 4728 s, roughly 13 times lower than the 64,557 s required for the 20D Rosenbrock function.
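The cubic scaling can be observed empirically; the following is a minimal sketch, assuming GPy, that times single-fidelity GP fits on synthetic data of growing size (the data and sizes are illustrative, not those of the study).

```python
import time
import numpy as np
import GPy

for n in [100, 200, 400, 800]:
    X = np.random.rand(n, 4)
    Y = np.sum(X ** 2, axis=1, keepdims=True)
    t0 = time.time()
    model = GPy.models.GPRegression(X, Y, GPy.kern.RBF(4))
    model.optimize(max_iters=50)  # each iteration involves an O(n^3) Cholesky factorization
    print(n, round(time.time() - t0, 2), "s")
```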

Fig. 4: Computational cost comparison across various dimensions of the ND Rosenbrock function

4.4 Discovering Complex Patterns: The Heterogeneous Function.

Complex design spaces are often characterized by intricate structures. To evaluate the performance of models on such design spaces, a commonly employed analytical function is the heterogeneous function, known for its localized and multi-modal behavior [15]. The 1D heterogeneous function is described by Eqs. (28) and (29):

f_{HF}(x) = \sin\big(30\,(x - 0.9)^4\big)\cos\big(2\,(x - 0.9)\big) + \frac{x - 0.9}{2}   (28)

f_{LF}(x) = \frac{f_{HF}(x) - 1 + x}{1 + 0.25\, x}   (29)

where 0 \leq x \leq 1. In the analysis, similar to the Jump Forrester function, the number of HF points varied from 5 to 15, while the number of LF points remained constant at 25. The results are summarized in Tables 5 and 6.
Table 5: Error measures calculated for the heterogeneous function, varying the number of HF points

| DoE | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- |
| (5,25) | −0.3939 (1.6100) | 0.4476 (0.2197) | 0.1387 (1.5754) | 0.3081 (0.2423) | 0.6144 (0.7309) | 0.2115 (0.1550) |
| (8,25) | 0.5476 (0.3719) | 0.2599 (0.1145) | 0.9114 (0.0785) | 0.1189 (0.0409) | 0.9569 (0.0022) | 0.0876 (0.0022) |
| (10,25) | 0.6941 (0.2431) | 0.2160 (0.0888) | 0.9127 (0.0639) | 0.1193 (0.0367) | 0.9576 (0.0017) | 0.0869 (0.0018) |
| (15,25) | 0.8813 (0.1401) | 0.1323 (0.0606) | 0.9066 (0.0291) | 0.1271 (0.0220) | 0.9576 (0.0021) | 0.0869 (0.0022) |
Table 6: Assessment of the various models for the heterogeneous function

| DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%) |
| --- | --- | --- | --- |
| (5,25) | 31 | 53 | 31 |
| (8,25) | 54 | 66 | 26 |
| (10,25) | 45 | 60 | 27 |
| (15,25) | 4 | 34 | 32 |

Notably, the MF approach demonstrates a significant advantage over the SF approach across the range of tested HF points. Moreover, the proposed model exhibits improved prediction accuracy across all the tested DoEs. The improvement in the predictions of the proposed model compared to the reference model ranges from 26% to 32%. Insights into the models’ performance can be gained from Fig. 5, which focuses on the case of a dataset with 5 HF points and 25 LF points. In Figs. 5(a) and 5(b), it is evident that both the SF model and the reference model struggle to accurately predict the shape of the function. The proposed model employed a kernel function comprising the multiplication of the linear and Brownian kernels for the LF data, while a squared exponential kernel was used for the HF data. Notably, the proposed model achieves a more precise representation of the function throughout the entire domain. However, one drawback of the method is that the uncertainty bounds are reduced even in the area close to x = 0, where the model fails to capture the structure of the function.

Fig. 5: Heterogeneous function using 5 HF and 25 LF points: (a) GP HF model, (b) reference model, and (c) proposed model

4.5 Noisy Observations: The 2D Shifted-Rotated Rastrigin Function.

In this case study, the 2D shifted-rotated Rastrigin function was employed. This function is characterized by multimodal behavior. In practical applications, the analysis and experimental data used for design optimization often contain noise. Therefore, it is important to investigate the performance of the model when dealing with noisy training data. To this end, a noise term e_{data} was added to the 2D shifted-rotated Rastrigin function, taken from Ref. [15]. Thus, for this analysis, Eqs. (30) and (31) are used. The function is visualized in Fig. 6.

f_{LF}(\mathbf{x}) = f_{HF}(\mathbf{x}) + e_r(\mathbf{x}, \phi)   (30)

where the resolution error e_r is defined according to Eq. (34).

f_{HF}(\mathbf{x}) = \sum_{i=1}^{2} \left( z_i^2 + 1 - \cos(10 \pi z_i) \right)   (31)

where

\mathbf{z} = R(\theta)\,(\mathbf{x} - \mathbf{x}^*)   (32)

R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}   (33)

where x_i \in [-0.1, 0.2] for i = 1, \dots, D, R is the rotation matrix, \theta = 0.2, and \mathbf{x}^* = [0.1, 0.1]^T is the location of the global optimum.

e_r(\mathbf{x}, \phi) = \sum_{i=1}^{2} a(\phi) \cos^2\big(w(\phi)\, z_i + b(\phi) + \pi\big)   (34)

with a(\phi) = \Theta(\phi), w(\phi) = 10\pi\Theta(\phi), b(\phi) = 0.5\pi\Theta(\phi), and \Theta(\phi) = 1 - 0.0001\phi. For the present case study, we chose \phi = 2500.
Fig. 6: Visualization of the Rastrigin function

The outcomes are displayed in Tables 7 and 8. It is evident that the GP HF model falls short in capturing the underlying function. In contrast, both the proposed and the reference models demonstrate substantial enhancements. The reference model shows improvements ranging from 21% to 34%, while the proposed model delivers enhancements between 61% and 75%. In terms of statistical error analysis, we generated RMSE plots for the three models over 20 iterations to assess error convergence. As illustrated in Figs. 7(a)–7(c), the error values reach a plateau around iteration 16. The case study was extended to a three-fidelity scenario, and the outcomes are presented in Tables 9 and 10. These results exhibit trends analogous to the bifidelity case, with the distinction that the reference and proposed models show greater improvements in predictions, ranging from 32% to 63% and from 72% to 79%, respectively. It is worth noting, however, that incorporating additional models escalates the computational cost. Figure 8 illustrates the computational cost for both the bifidelity and trifidelity scenarios.

Fig. 7: Convergence of the RMSE based on the Rastrigin bifidelity case study: (a) GP HF model, (b) reference model, and (c) proposed model

Fig. 8: Computational cost comparison based on the Rastrigin function case
Table 7: Error measures calculated for the 2D Rastrigin function, varying the number of HF points (bifidelity case)

| DoE | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- |
| (5,100) | 0.5539 (0.7471) | 0.3023 (0.0628) | 0.1910 (0.4308) | 0.2114 (0.0704) | 0.7925 (0.0604) | 0.1118 (0.0153) |
| (10,100) | 0.2398 (0.3482) | 0.2735 (0.0355) | 0.2130 (0.5096) | 0.2039 (0.0820) | 0.8311 (0.0371) | 0.1013 (0.0102) |
| (15,100) | 0.0627 (0.1489) | 0.2547 (0.0186) | 0.2727 (0.3860) | 0.2002 (0.0671) | 0.8543 (0.0248) | 0.0942 (0.0082) |
| (20,100) | 0.1169 (0.3703) | 0.2260 (0.0558) | 0.4944 (0.4170) | 0.1586 (0.0761) | 0.8692 (0.0599) | 0.0878 (0.0179) |
| (25,100) | 0.0332 (0.2743) | 0.2384 (0.0497) | 0.3945 (0.4485) | 0.1531 (0.0847) | 0.9028 (0.0325) | 0.0762 (0.0127) |
| (30,100) | 0.1018 (0.3339) | 0.2267 (0.0610) | 0.5224 (0.4562) | 0.1489 (0.0845) | 0.9458 (0.0323) | 0.0559 (0.0142) |
Table 8: Assessment of the various models for the 2D Rastrigin function (bifidelity case)

| DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%) |
| --- | --- | --- | --- |
| (5,100) | 30 | 63 | 47 |
| (10,100) | 25 | 62 | 50 |
| (15,100) | 21 | 63 | 52 |
| (20,100) | 30 | 61 | 45 |
| (25,100) | 27 | 68 | 55 |
| (30,100) | 34 | 75 | 62 |
Table 9: Error measures calculated for the 2D Rastrigin function, varying the number of HF points (three-fidelity case)

| DoE | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- |
| (5,50,100) | −0.9417 (2.7959) | 0.3131 (0.1454) | 0.4089 (0.4451) | 0.1730 (0.0797) | 0.8663 (0.0622) | 0.0884 (0.0200) |
| (10,50,100) | −0.2253 (0.4342) | 0.2710 (0.0414) | 0.3428 (0.4362) | 0.1842 (0.0800) | 0.9145 (0.0267) | 0.0717 (0.0101) |
| (15,50,100) | 0.0385 (0.3127) | 0.2385 (0.0461) | 0.6996 (0.3448) | 0.1206 (0.0623) | 0.9234 (0.0319) | 0.0673 (0.0130) |
| (20,50,100) | −0.0154 (0.2139) | 0.2472 (0.0348) | 0.5554 (0.4671) | 0.1412 (0.0856) | 0.9440 (0.0189) | 0.0577 (0.0102) |
| (25,50,100) | 0.0394 (0.2688) | 0.2381 (0.0474) | 0.7627 (0.3339) | 0.1025 (0.0636) | 0.9527 (0.0217) | 0.0525 (0.0121) |
| (30,50,100) | 0.1826 (0.3981) | 0.2115 (0.0736) | 0.8700 (0.2054) | 0.0773 (0.0446) | 0.9655 (0.0137) | 0.0452 (0.0089) |
Table 10: Assessment of the various models for the 2D Rastrigin function (three-fidelity case)

| DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%) |
| --- | --- | --- | --- |
| (5,50,100) | 44 | 72 | 49 |
| (10,50,100) | 32 | 74 | 61 |
| (15,50,100) | 49 | 72 | 44 |
| (20,50,100) | 43 | 77 | 49 |
| (25,50,100) | 57 | 78 | 49 |
| (30,50,100) | 63 | 79 | 42 |

4.6 Simplified Design Problem: The Cantilever Beam.

The proposed framework was tested on a structural design problem involving a cantilever beam. This particular problem was chosen because it serves as a simplified representation of real-world, complex engineering problems, such as estimating lifetime loads on intricate structures like aircraft or ships. The formulation of the problem was taken from Ref. [51] and modified.

The cantilever beam is shown in Fig. 9(a). The square-section beam is fixed to the wall on one end, while a concentrated load is applied to the opposite end. In addition, there is a hole on the side that is anchored to the wall. The aim is the calculation of the developed von Mises (VM) stress. The problem is set up as a bi-fidelity problem, where the LF method is the analytical estimation of the maximal VM stress and the HF method is the numerical estimation of the maximal VM stress. Furthermore, the problem was modeled as a two-dimensional (2D) problem, with the independent variables being the beam’s length (L) and cross-sectional side length (d). The problem domain was defined within the ranges L ∈ [2.0, 3.0] m and d ∈ [0.25, 0.4] m. The applied force was set to a constant value of 950 kN.

Fig. 9: Cantilever beam case study: (a) schematic representation and (b) equivalent von Mises stresses calculated by ANSYS
The equations can be found in Ref. [63]. To accurately calculate the von Mises (VM) stress through analytical means, both the shear force and the bending moment exerted along the beam’s length, when it is subjected to a transverse force, need to be accounted for. The maximal bending stress \sigma_b is calculated according to Eqs. (35) and (36):

\sigma_b = \frac{M_b\,(d/2)}{I}   (35)

M_b = F L   (36)

where M_b is the bending moment, I is the moment of inertia, F is the applied force, and L is the beam’s length. For square cross sections, the moment of inertia I is calculated based on Eq. (37):

I = \frac{d^4}{12}   (37)

The shear stress \tau is determined according to Eq. (38):

\tau = \frac{F Q}{I d}   (38)

where Q is the first moment of area with respect to the neutral axis that lies above the point of interest:

Q = \frac{d^3}{8}   (39)

The maximal VM stress is calculated according to Eq. (40):

\sigma_{VM} = \sqrt{\sigma_b^2 + 3 \tau^2}   (40)
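Given Eqs. (35)–(40), the LF analytical model reduces to a short closed-form routine; the following is a minimal sketch for the square cross section, with F in N and L, d in m (the function name vm_stress_lf is illustrative).

```python
import numpy as np

def vm_stress_lf(L, d, F=950e3):
    """Analytical (LF) maximal von Mises stress for a square-section cantilever."""
    I = d ** 4 / 12.0                 # moment of inertia, Eq. (37)
    M_b = F * L                       # bending moment at the wall, Eq. (36)
    sigma_b = M_b * (d / 2.0) / I     # maximal bending stress, Eq. (35)
    Q = d ** 3 / 8.0                  # first moment of area above the neutral axis, Eq. (39)
    tau = F * Q / (I * d)             # shear stress, Eq. (38)
    return np.sqrt(sigma_b ** 2 + 3.0 * tau ** 2)  # Eq. (40)
```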
In the numerical model, a hole was incorporated into the cantilever beam design. The dimensions of the hole were determined based on the beam’s main dimensions according to Eqs. (41)–(43). The material properties of the beam were specified as follows: steel with a Young’s modulus (E) of 2 × 10^11 Pa and a Poisson’s ratio (ν) of 0.30. The model was developed using the ANSYS software.
(41)
(42)
(43)

The outcomes are presented in Tables 11 and 12. These results indicate that the reference model yields results that are on par with the SF model. This can primarily be attributed to the considerable disparity between the LF and HF models caused by the presence of the hole. In contrast, the predictions of the proposed model are closer to the HF design space. Based on Table 12, the improvement compared to the SF model ranged from 15% to 24%. An example of the problem can be visualized in Fig. 10.

Fig. 10: 2D structural problem of a cantilever beam using 20 HF and 50 LF data: (a) design spaces for the cantilever beam, (b) reference model, and (c) proposed model
Table 11 Error measures calculated for the cantilever beam, varying the number of HF points

DoE | GP HF R² (std) | GP HF RMSE (std) | Ref. model R² (std) | Ref. model RMSE (std) | Prop. model R² (std) | Prop. model RMSE (std)
(10,50) | −0.1286 (0.8933) | 0.1892 (0.0748) | −0.2348 (1.0045) | 0.1979 (0.0784) | 0.0710 (1.0450) | 0.1578 (0.0989)
(15,50) | 0.3159 (0.4988) | 0.1484 (0.0535) | 0.3626 (0.4623) | 0.1437 (0.0518) | 0.5561 (0.5115) | 0.1127 (0.0627)
(20,50) | 0.4264 (0.3890) | 0.1398 (0.0514) | 0.4255 (0.3847) | 0.1400 (0.0504) | 0.5339 (0.4326) | 0.1190 (0.0648)
(25,50) | 0.6122 (0.1661) | 0.1149 (0.0290) | 0.5875 (0.1872) | 0.1184 (0.0308) | 0.6883 (0.2393) | 0.0975 (0.0418)
(30,50) | 0.6200 (0.1840) | 0.1129 (0.0256) | 0.5941 (0.1851) | 0.1170 (0.0250) | 0.6989 (0.1951) | 0.0991 (0.0322)
(35,50) | 0.6562 (0.1528) | 0.1073 (0.0162) | 0.6484 (0.1575) | 0.1085 (0.0158) | 0.7391 (0.1798) | 0.0910 (0.0308)
Table 12 Assessment of the various models for the cantilever beam

DoE | Improvement ref. model compared to the GP HF (%) | Improvement prop. model compared to the GP HF (%) | Improvement prop. model compared to the ref. model (%)
(10,50) | 5 | 17 | 20
(15,50) | 3 | 24 | 22
(20,50) | 0 | 15 | 15
(25,50) | 3 | 15 | 17
(30,50) | 4 | 12 | 15
(35,50) | 1 | 15 | 16
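The tabulated improvement percentages appear consistent with a relative reduction in RMSE between two models. Below is a minimal sketch of that computation, under the assumption that improvement is defined as the percentage RMSE reduction of one model relative to another (the paper's exact definition appears earlier in the document).

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def improvement_pct(rmse_base, rmse_new):
    """Relative RMSE reduction of the new model over the baseline, in percent."""
    return 100.0 * (rmse_base - rmse_new) / rmse_base

# Check against the (15,50) row of Table 11:
# GP HF RMSE = 0.1484, proposed model RMSE = 0.1127
print(round(improvement_pct(0.1484, 0.1127)))  # -> 24, matching Table 12
```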

5 Discussion

5.1 Discussion on the Presented Case Studies.

In summary, the findings demonstrated that the incorporation of compositional kernels significantly enhanced the predictive capabilities of the AR1 scheme. Various analytical benchmark problems were simulated to test the proposed model thoroughly.
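As a reminder of what kernel composition means operationally, GPy base kernels can be combined through addition and multiplication; the particular combinations below are illustrative only and are not the specific kernels selected in the case studies.

```python
import GPy

# Base kernels over a 2D input space.
rbf = GPy.kern.RBF(input_dim=2)
lin = GPy.kern.Linear(input_dim=2)
per = GPy.kern.StdPeriodic(input_dim=2)

# Compositional kernels: sums capture additive structure,
# products capture interactions between components.
k_sum = rbf + lin            # smooth trend plus linear trend
k_prod = rbf * per           # locally periodic structure
k_mixed = (rbf + lin) * per  # nested composition
```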

The 1D Jump Forrester function, representing a discontinuous space, was treated as a bi-fidelity problem. The number of HF points ranged from 5 to 15, while the LF points remained constant at 25. The proposed model yielded an improvement in predictions of up to 22% in the case of 15 HF points. The 1D heterogeneous function case study followed a modeling approach akin to the Jump Forrester scenario; there, the proposed model's improvement reached up to 32% with 15 HF points. The 2D shifted-rotated Rastrigin function, employed to assess multimodal behavior, was modeled both as a bi-fidelity and as a tri-fidelity problem. In the bi-fidelity scenario, the HF points ranged from 5 to 30, while the LF points remained constant at 100; an improvement of 62% was observed with 30 HF points. In the tri-fidelity case, the HF points again ranged from 5 to 30, with the medium-fidelity points held constant at 50 and the LF points at 100; here, the improvement was measured at 49% for the majority of the cases. For the cantilever beam problem, the reference model produced results comparable to the SF GP, whereas the proposed model demonstrated improvements over the reference model ranging from 15% to 22%. Overall, these results hold promise for the application of the model to complex design problems in multidimensional spaces.

5.2 Critical Reflection on Scaling Up the Method to Address Early-Stage Design of Complex Vessels.

The objective of the proposed method is to facilitate early-stage design exploration of complex vessels. While the presented case studies involved a lower level of complexity, it is crucial to critically reflect on the scalability of the method. The method has undergone extensive testing on analytical case studies that simulate challenges shared with real design problems, such as the presence of discontinuities and complex patterns. Furthermore, the chosen case studies align well with benchmark problems that are widely accepted within the design research community. An important consideration when applying the suggested method to high-dimensional, realistic design problems is the increased computational cost associated with constructing the compositional kernels. This cost depends on the dimensionality of the problem, the size of the training set, and the number of analysis methods used in the MF model. Of these, the size of the training set is of limited concern, since these design problems inherently operate in small-data regimes. The impact of problem dimensionality was explored in the ND Rosenbrock case study, and the effect of integrating additional fidelity models was examined for the Rastrigin function. Overall, the integration of compositional kernels introduces a trade-off between the computational benefits arising from reduced training dataset sizes and the additional expense of kernel optimization. Which side of this trade-off dominates depends on the nature of the design problem. Particularly for design exploration tasks featuring key performance indicators (KPIs) that are expensive to evaluate, we assert that the integration of compositional kernels presents a promising avenue.

The expansion of the method to tackle high-dimensional problems will inevitably increase computational expenses, presenting a notable challenge. However, HF analysis techniques in ship design problems, such as computational fluid dynamics (CFD) analysis and model tests, can themselves be significantly expensive. Therefore, the authors believe that applying the proposed method, and thereby reducing the required number of HF simulations, will yield computational benefits for design space exploration problems.

5.3 Recommendations for Future Research.

Several suggestions for future research are worth considering. First, the method should be tested on real engineering problems to ensure its capability to capture the complex patterns inherent in real multidimensional design spaces. In the context of real-world applications, another significant consideration is evaluating the enhancements in accuracy and the reduction of uncertainty in predicting the design space. In addition, it is important to assess whether the computational benefits stemming from the reduced need for HF simulations are offset by the computational costs associated with optimizing the kernel function.

Acknowledgment

This publication is part of the project “Multi-fidelity Probabilistic Design Framework for Complex Marine Structures” (project number TWM.BL.019.007) of the research program “Topsector Water & Maritime: The Blue Route,” which is (partly) financed by the Dutch Research Council (NWO). The authors thank DAMEN, the Netherlands Defence Materiel Organisation (DMO), and MARIN for their contribution to the research.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

References

1. Andrews, D., 2018, "The Sophistication of Early Stage Design for Complex Vessels," Inter. J. Maritime Eng., 160(18), p. 12.
2. Mavris, D., DeLaurentis, D., Bandte, O., and Hale, M., 1998, "A Stochastic Approach to Multi-disciplinary Aircraft Analysis and Design," 36th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, Jan. 12–15.
3. Papanikolaou, A., 2014, Ship Design: Methodologies of Preliminary Design, Springer, The Netherlands.
4. Horvath, B. L., and Wells, D. P., 2018, "Comparison of Aircraft Conceptual Design Weight Estimation Methods to the Flight Optimization System," 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, Jan. 8–12.
5. Chaal, M., Ren, X., BahooToroody, A., Basnet, S., Bolbot, V., Banda, O. A., and Gelder, P. V., 2023, "Research on Risk, Safety, and Reliability of Autonomous Ships: A Bibliometric Review," Safety Sci., 167, p. 106256.
6. van Essen, S., and Seyffert, H., 2022, "Finding Dangerous Waves – Towards an Efficient Method to Obtain Wave Impact Design Loads for Marine Structures," Proceedings of the ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering, Vol. 1: Offshore Technology, Hamburg, Germany, June 5–10.
7. Geertsma, R. D., Negenborn, R. R., Visser, K., and Hopman, J. J., 2017, "Design and Control of Hybrid Power and Propulsion Systems for Smart Ships: A Review of Developments," Appl. Energy, 194, pp. 30–54.
8. Kim, B. S., Park, D. M., and Kim, Y., 2022, "Study on Nonlinear Heave and Pitch Motions of Conventional and Tumblehome Hulls in Head Seas," Ocean Eng., 247, p. 110671.
9. Fuentes, D., Salas, M., Tampier, B. G., and Troncoso, C., 2015, "Structural Design and Optimisation of an Aluminium Trimaran," Analysis and Design of Marine Structures V, G. Soares and Shenoi, eds., CRC Press, Boca Raton, FL, pp. 549–558.
10. Rijkens, A. A. K., and Mikelic, A., 2022, "The Hydrodynamic Comparison Between a Conventional and an Axe Bow Frigate Hull," 16th International Naval Engineering Conference and Exhibition incorporating the International Ship Control Systems Symposium, Delft, Netherlands, Nov. 8–10.
11. Hovilainen, M., Romu, T., Heinonen, T., and Uuskallio, A., 2016, "Initial Operational Experience From the Oblique Icebreaker," OTC, St. John's, Newfoundland and Labrador, Canada, Oct. 24–26.
12. Oosterom, W., and Vos, R., 2022, "Conceptual Design of a Flying-V Aircraft Family," AIAA AVIATION 2022 Forum, Chicago, IL & Virtual, June 27–July 1.
13. Charisi, N. D., Hopman, H., and Kana, A., 2022, "Early-Stage Design of Novel Vessels: How Can We Take a Step Forward?," 14th International Marine Design Conference, Vancouver, Canada, June 26–30.
14. Bickel, J. E., and Bratvold, R. B., 2008, "From Uncertainty Quantification to Decision Making in the Oil and Gas Industry," Energy Exploration Exploit., 26(5), pp. 311–325.
15. Mainini, L., Serani, A., Rumpfkeil, M. P., Minisci, E., Quagliarella, D., Pehlivan, H., Yildiz, S., Ficini, S., Pellegrini, R., Di Fiore, F., Bryson, D., Nikbay, M., Diez, M., and Beran, P., 2022, "Analytical Benchmark Problems for Multifidelity Optimization Methods," AVT-354 Multi-Fidelity Methods for Military Vehicle Design, Varna, Bulgaria, Sept. 26–28.
16. Parunov, J., Guedes Soares, C., Hirdaris, S., Iijima, K., Wang, X., Brizzolara, S., Qiu, W., Mikulić, A., Wang, S., and Abdelwahab, H. S., 2022, "Benchmark Study of Global Linear Wave Loads on a Container Ship With Forward Speed," Marine Struct., 84, p. 103162.
17. Decker, K., and Mavris, D., 2022, "Modeling Hypersonic Vehicle Performance and Operations Using a Multi-Fidelity Reduced Order Modeling Approach," AVT-354 Multi-Fidelity Methods for Military Vehicle Design, Varna, Bulgaria, Sept. 26–28.
18. Papageorgiou, A., Tarkian, M., Amadori, K., and Ölvander, J., 2018, "Multidisciplinary Design Optimization of Aerial Vehicles: A Review of Recent Advancements," Inter. J. Aeros. Eng., 2018, p. 4258020.
19. Beran, P. S., Bryson, D. E., Thelen, A. S., Diez, M., and Serani, A., 2020, "Comparison of Multi-fidelity Approaches for Military Vehicle Design," AIAA Aviation 2020 Forum, Virtual Event, June 15–19.
20. Fernández-Godino, M. G., Park, C., Kim, N.-H., and Haftka, R. T., 2016, "Review of Multi-fidelity Models," AIAA J., 57(5), pp. 2039–2054.
21. Peherstorfer, B., Willcox, K., and Gunzburger, M., 2018, "Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization," SIAM Review, 60(3), pp. 550–591.
22. Raissi, M., Perdikaris, P., and Karniadakis, G. E., 2016, "Inferring Solutions of Differential Equations Using Noisy Multi-fidelity Data," J. Comput. Phys., 335, pp. 736–746.
23. Drummen, I., Hageman, R. B., and Stambaugh, K., 2022, "Multifidelity Approach for Predicting Extreme Global Bending Load Effects," 9th International Conference on Hydroelasticity in Marine Technology, Rome, Italy, June 10–13.
24. Guth, S., Champenois, B., and Sapsis, T. P., 2022, "Application of Gaussian Process Multi-fidelity Optimal Sampling to Ship Structural Modeling," 34th Symposium on Naval Hydrodynamics, Washington, DC, June 26–July 1.
25. Ng, L. W. T., and Willcox, K. E., 2015, "Monte Carlo Information-Reuse Approach to Aircraft Conceptual Design Optimization Under Uncertainty," J. Aircraft, 53(2), pp. 1–12.
26. Ng, L. W., and Willcox, K. E., 2014, "Multifidelity Approaches for Optimization Under Uncertainty," Inter. J. Num. Methods Eng., 100(10), pp. 746–772.
27. Meng, X., and Karniadakis, G. E., 2020, "A Composite Neural Network That Learns From Multi-fidelity Data: Application to Function Approximation and Inverse PDE Problems," J. Comput. Phys., 401, p. 109020.
28. Pawar, S., San, O., Vedula, P., Rasheed, A., and Kvamsdal, T., 2022, "Multi-fidelity Information Fusion With Concatenated Neural Networks," Sci. Rep., 12(1), p. 5900.
29. Guo, M., Manzoni, A., Amendt, M., Conti, P., and Hesthaven, J. S., 2022, "Multi-fidelity Regression Using Artificial Neural Networks: Efficient Approximation of Parameter-Dependent Output Quantities," Comput. Methods Appl. Mech. Eng., 389, p. 114378.
30. Meng, X., Babaee, H., and Karniadakis, G. E., 2021, "Multi-fidelity Bayesian Neural Networks: Algorithms and Applications," J. Comput. Phys., 438, p. 110361.
31. Bonfiglio, L., Perdikaris, P., and Karniadakis, G., 2018, "A Probabilistic Framework for Multidisciplinary Design: Application to the Hydrostructural Optimization of Supercavitating Hydrofoils," Inter. J. Num. Methods Eng., 116(4), pp. 246–269.
32. Brooks, T. R., Kenway, G. K., and Martins, J. R., 2017, "Undeflected Common Research Model (uCRM): An Aerostructural Model for the Study of High Aspect Ratio Transport Aircraft Wings," 35th AIAA Applied Aerodynamics Conference, Denver, CO, June 5–9.
33. Di Fiore, F., Maggiore, P., and Mainini, L., 2021, "Multifidelity Domain-Aware Learning for the Design of Re-entry Vehicles," Structural and Multidisciplinary Optimization, 64(23), pp. 1–19.
34. Singh, V., and Willcox, K. E., 2021, "Decision-Making Under Uncertainty for a Digital Thread-Enabled Design Process," ASME J. Mech. Des., 143(9), p. 091707.
35. Melati, D., Grinberg, Y., Kamandar Dezfouli, M., Janz, S., Cheben, P., Schmid, J. H., Sánchez-Postigo, A., and Xu, D. X., 2019, "Mapping the Global Design Space of Nanophotonic Components Using Machine Learning Pattern Recognition," Nat. Communicat., 10(1), pp. 1–9.
36. Moss, H. B., Beck, D., Gonzalez, J., Leslie, D. S., and Rayson, P., 2020, "BOSS: Bayesian Optimization Over String Spaces," Adv. Neural Inform. Process. Syst., 2020(12).
37. Palar, P. S., Zuhal, L. R., and Shimoyama, K., 2020, "Gaussian Process Surrogate Model With Composite Kernel Learning for Engineering Design," AIAA J., 58(4), pp. 1864–1880.
38. Hulse, D., Hoyle, C., Tumer, I. Y., and Goebel, K., 2021, "How Uncertain Is Too Uncertain? Validity Tests for Early Resilient and Risk-Based Design Processes," ASME J. Mech. Des., 143(1), p. 011702.
39. O'Hagan, T., 2004, "Dicing With the Unknown," Significance, 1(3), pp. 132–133.
40. North, D. W., 2017, Decision Analytic and Bayesian Uncertainty Quantification for Decision Support, Springer International Publishing, Cham, pp. 1361–1399.
41. Collette, M., 2017, Uncertainty Approaches in Ship Structural Performance, Springer International Publishing, Cham, pp. 1567–1588.
42. Aughenbaugh, J. M., and Paredis, C. J., 2006, "The Value of Using Imprecise Probabilities in Engineering Design," ASME J. Mech. Des., 128(4), pp. 969–979.
43. Rasmussen, C. E., and Williams, C. K. I., 2006, Gaussian Processes for Machine Learning, Vol. 3176, The MIT Press, p. 63.
44. Kennedy, M. C., and O'Hagan, A., 2000, "Predicting the Output From a Complex Computer Code When Fast Approximations Are Available," Biometrika, 87(1), pp. 1–13.
45. Perdikaris, P., Raissi, M., Damianou, A., Lawrence, N. D., and Karniadakis, G. E., 2017, "Nonlinear Information Fusion Algorithms for Data-Efficient Multi-fidelity Modelling," Proc. R. Soc. A: Math., Phys. Eng. Sci., 473(2198), p. 20160751.
46. Damianou, A. C., and Lawrence, N. D., 2012, "Deep Gaussian Processes," J. Mach. Learn. Res., 31, pp. 207–215.
47. Feldstein, A., Lazzara, D., Princen, N., and Willcox, K., 2020, "Multifidelity Data Fusion: Application to Blended-Wing-Body Multidisciplinary Analysis Under Uncertainty," AIAA J., 58(2), pp. 889–906.
48. Scholcz, T., and Klinkenberg, J., 2022, "Hull-Shape Optimisation Using Adaptive Multi-Fidelity Kriging," AVT-354 Multi-Fidelity Methods for Military Vehicle Design, Varna, Bulgaria, Sept. 26–28.
49. Bessa, M. A., Glowacki, P., and Houlder, M., 2019, "Bayesian Machine Learning in Metamaterial Design: Fragile Becomes Supercompressible," Adv. Mater., 31(48), p. 1904845.
50. Nitzler, J., Biehler, J., Fehn, N., Koutsourelakis, P. S., and Wall, W. A., 2022, "A Generalized Probabilistic Learning Approach for Multi-fidelity Uncertainty Quantification in Complex Physical Simulations," Comput. Methods Appl. Mech. Eng., 400, p. 115600.
51. Brevault, L., Balesdent, M., and Hebbal, A., 2020, "Overview of Gaussian Process Based Multi-fidelity Techniques With Variable Relationship Between Fidelities, Application to Aerospace Systems," Aeros. Sci. Tech., 107, p. 106339.
52. Charisi, N. D., Kana, A., and Hopman, H., 2022, "Compositional Kernels to Facilitate Multi-fidelity Design Analysis: Applications for Early-Stage Design," AVT-354 Multi-Fidelity Methods for Military Vehicle Design, Varna, Bulgaria, Sept. 26–28.
53. Bachoc, F., 2013, "Cross Validation and Maximum Likelihood Estimations of Hyper-parameters of Gaussian Processes With Model Misspecification," Comput. Stat. Data Anal., 66, pp. 55–69.
54. Le Gratiet, L., and Garnier, J., 2014, "Recursive Co-Kriging Model for Design of Computer Experiments With Multiple Levels of Fidelity," Inter. J. Uncertainty Quantif., 4(5), pp. 365–386.
55. Duvenaud, D. K., 2014, "Automatic Model Construction With Gaussian Processes," Ph.D. thesis, University of Cambridge.
56. Duvenaud, D., Lloyd, J. R., Grosse, R., Tenenbaum, J. B., and Ghahramani, Z., 2013, "Structure Discovery in Nonparametric Regression Through Compositional Kernel Search," Proceedings of the 30th International Conference on Machine Learning, PMLR, 28(3), pp. 1166–1174.
57. GPy, 2012, "GPy – A Gaussian Process (GP) Framework in Python," https://github.com/SheffieldML/GPy
58. EmuKit, "A Python-Based Toolbox of Various Methods in Decision Making, Uncertainty Quantification and Statistical Emulation: Multi-fidelity, Experimental Design, Bayesian Optimisation, Bayesian Quadrature, etc.," https://github.com/EmuKit/emukit
59. James, G., Witten, D., Hastie, T., and Tibshirani, R., 2014, An Introduction to Statistical Learning: With Applications in R, Springer Publishing Company, Inc.
60. Spiess, A. N., and Neumeyer, N., 2010, "An Evaluation of R2 as an Inadequate Measure for Nonlinear Models in Pharmacological and Biochemical Research: A Monte Carlo Approach," BMC Pharmacol., 10(1), pp. 1–11.
61. Cameron, A. C., and Windmeijer, F. A., 1997, "An R-squared Measure of Goodness of Fit for Some Common Nonlinear Regression Models," J. Econ., 77(2), pp. 329–342.
62. Liu, H., Ong, Y.-S., Shen, X., and Cai, J., 2020, "When Gaussian Process Meets Big Data: A Review of Scalable GPs," IEEE Trans. Neural Netw. Learn. Syst., 31(11), pp. 4405–4423.
63. Öchsner, A., 2021, Classical Beam Theories of Structural Mechanics, Springer.