Research Papers

Discrepancy Prediction in Dynamical System Models Under Untested Input Histories

Author and Article Information
Kyle Neal

Department of Civil and
Environmental Engineering,
Vanderbilt University,
2201 West End Avenue,
Box 1831, Station B,
Nashville, TN 37235
e-mail: kyle.d.neal@vanderbilt.edu

Zhen Hu

Department of Industrial and Manufacturing
Systems Engineering,
University of Michigan-Dearborn,
2340 Heinz Prechter Engineering
Complex (HPEC),
Dearborn, MI 48128
e-mail: zhennhu@umich.edu

Sankaran Mahadevan

Department of Civil and
Environmental Engineering,
Vanderbilt University,
2201 West End Avenue,
Box 1831, Station B,
Nashville, TN 37235
e-mail: sankaran.mahadevan@vanderbilt.edu

Jon Zumberge

Energy/Power/Thermal Division,
Air Force Research Laboratory,
Wright-Patterson AFB,
Dayton, OH 45433
e-mail: jon.zumberge@us.af.mil

1 Corresponding author.

Contributed by the Design Engineering Division of ASME for publication in the JOURNAL OF COMPUTATIONAL AND NONLINEAR DYNAMICS. Manuscript received May 31, 2018; final manuscript received August 14, 2018; published online January 7, 2019. Assoc. Editor: Kyung Choi. This work is in part a work of the U.S. Government. ASME disclaims all interest in the U.S. Government's contributions.

J. Comput. Nonlinear Dynam., 14(2), 021009 (Jan 07, 2019) (13 pages); Paper No. CND-18-1242; doi: 10.1115/1.4041238.

This paper presents a probabilistic framework for discrepancy prediction in dynamical system models under untested input time histories, based on information gained from validation experiments. Two surrogate modeling-based methods, namely observation surrogate and bias surrogate, are developed to predict the bias of a dynamical system simulation model under untested input time history. In the first method, a surrogate model is built for the observed experimental output, and the model bias for the untested input is obtained by comparing the output of the observation surrogate with the output of the physics-based model. The second method constructs a surrogate model for the bias in terms of the inputs in the conducted experiments. The bias surrogate model is then used to correct the simulation model prediction at each time-step under a predictor–corrector scheme to predict the model bias under untested conditions. A neural network-based surrogate modeling technique is employed to implement the proposed methodology. The bias prediction result is reported in a probabilistic manner, in order to account for the uncertainty of the surrogate model prediction. An air cycle machine case study is used to demonstrate the effectiveness of the proposed bias prediction framework.
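As a minimal numerical sketch of the bias-surrogate predictor–corrector idea summarized above (not the paper's actual implementation): a toy first-order model stands in for the physics-based simulation, a synthetic "true" system generates the experimental observations, and a linear least-squares fit replaces the paper's neural-network surrogate. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Toy first-order "physics" model step (hypothetical stand-in for the
# paper's dynamical system simulation model).
def model_step(y, u, dt=0.1, tau=1.0):
    return y + dt * (u - y) / tau

# Hypothetical "true" system with an unmodeled input gain and offset,
# used only to generate synthetic experimental observations.
def true_step(y, u, dt=0.1, tau=1.0):
    return y + dt * (1.1 * u - y) / tau + 0.02

def simulate(step, u_hist, y0=0.0):
    ys, y = [y0], y0
    for u in u_hist:
        y = step(y, u)
        ys.append(y)
    return np.array(ys)  # states y_0 .. y_T

# "Tested" input history: training data for the bias surrogate is the
# one-step discrepancy between observation and model prediction.
u_train = np.sin(np.linspace(0.0, 5.0, 50))
y_obs = simulate(true_step, u_train)
bias = np.array([y_obs[k + 1] - model_step(y_obs[k], u_train[k])
                 for k in range(len(u_train))])

# Bias surrogate: linear least-squares fit of bias vs. (input, state);
# the paper uses a neural-network surrogate instead.
X = np.column_stack([u_train, y_obs[:-1], np.ones_like(u_train)])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)

# Predictor-corrector under an "untested" input history: the model
# predicts each step, then the bias surrogate corrects it.
u_test = 0.5 * np.cos(np.linspace(0.0, 5.0, 50))
y, corrected = 0.0, []
for u in u_test:
    y_pred = model_step(y, u)                  # predictor: simulation model
    y = y_pred + coef @ np.array([u, y, 1.0])  # corrector: surrogate bias
    corrected.append(y)

# Compare against the synthetic "true" response under the untested input
err = np.max(np.abs(np.array(corrected) - simulate(true_step, u_test)[1:]))
print(f"max corrected-prediction error: {err:.2e}")
```

In this contrived case the per-step bias is linear in the input, so the linear surrogate recovers it exactly and the corrected trajectory matches the synthetic truth to machine precision; in the paper, the surrogate is a neural network and its prediction uncertainty is propagated so the bias estimate is probabilistic rather than a point value.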

Copyright © 2019 by ASME

References

[1] ASME, 2006, Guide for Verification and Validation in Computational Solid Mechanics, ASME, New York.
[2] Oberkampf, W. L., and Trucano, T. G., 2002, "Verification and Validation in Computational Fluid Dynamics," Prog. Aerosp. Sci., 38(3), pp. 209–272.
[3] Ling, Y., and Mahadevan, S., 2013, "Quantitative Model Validation Techniques: New Insights," Reliab. Eng. Syst. Saf., 111, pp. 217–231.
[4] Rebba, R., Mahadevan, S., and Huang, S., 2006, "Validation and Error Estimation of Computational Models," Reliab. Eng. Syst. Saf., 91(10–11), pp. 1390–1397.
[5] Ferson, S., Oberkampf, W. L., and Ginzburg, L., 2008, "Model Validation and Predictive Capability for the Thermal Challenge Problem," Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2408–2430.
[6] Rebba, R., and Mahadevan, S., 2008, "Computational Methods for Model Reliability Assessment," Reliab. Eng. Syst. Saf., 93(8), pp. 1197–1207.
[7] Ao, D., Hu, Z., and Mahadevan, S., 2017, "Dynamics Model Validation Using Time-Domain Metrics," J. Verif. Valid. Uncertain. Quantif., 2(1), p. 011004.
[8] Jiang, X., and Mahadevan, S., 2008, "Bayesian Wavelet Method for Multivariate Model Assessment of Dynamic Systems," J. Sound Vib., 312(4–5), pp. 694–712.
[9] McFarland, J., and Mahadevan, S., 2008, "Error and Variability Characterization in Structural Dynamics Modeling," Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2621–2631.
[10] Li, C., and Mahadevan, S., 2014, "Uncertainty Quantification and Output Prediction in Multi-Level Problems," AIAA Paper No. 2014-0124.
[11] Red-Horse, J. R., and Paez, T. L., 2008, "Sandia National Laboratories Validation Workshop: Structural Dynamics Application," Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2578–2584.
[12] Van Buren, K., Reilly, J., Neal, K., Edwards, H., and Hemez, F., 2017, "Guaranteeing Robustness of Structural Condition Monitoring to Environmental Variability," J. Sound Vib., 386, pp. 134–148.
[13] Hemez, F., and Doebling, S., 2000, "Validation of Structural Dynamics Models at Los Alamos National Laboratory," AIAA Paper No. 2000-1437.
[14] Doebling, S., Schultze, J., and Hemez, F., 2002, "Overview of Structural Dynamics Model Validation Activities at Los Alamos National Laboratory," AIAA Paper No. 2002-1643.
[15] Sankararaman, S., and Mahadevan, S., 2013, "Bayesian Methodology for Diagnosis Uncertainty Quantification and Health Monitoring," Struct. Control Health Monit., 20(1), pp. 88–106.
[16] Jiang, X., and Mahadevan, S., 2011, "Wavelet Spectrum Analysis Approach to Model Validation of Dynamic Systems," Mech. Syst. Signal Process., 25(2), pp. 575–590.
[17] Sargent, R. G., 2013, "Verification and Validation of Simulation Models," J. Simul., 7(1), pp. 12–24.
[18] Liu, Y., Chen, W., Arendt, P., and Huang, H.-Z., 2011, "Toward a Better Understanding of Model Validation Metrics," ASME J. Mech. Des., 133(7), p. 071005.
[19] Oberkampf, W. L., and Barone, M. F., 2006, "Measures of Agreement Between Computation and Experiment: Validation Metrics," J. Comput. Phys., 217(1), pp. 5–36.
[20] Doucet, A., de Freitas, N., and Gordon, N., 2001, "An Introduction to Sequential Monte Carlo Methods," Sequential Monte Carlo Methods in Practice, Springer, New York, pp. 3–14.
[21] Hombal, V., and Mahadevan, S., 2011, "Bias Minimization in Gaussian Process Surrogate Modeling for Uncertainty Quantification," Int. J. Uncertain. Quantif., 1(4), pp. 321–349.
[22] Winokur, J., Conrad, P., Sraj, I., Knio, O., Srinivasan, A., Thacker, W. C., Marzouk, Y., and Iskandarani, M., 2013, "A Priori Testing of Sparse Adaptive Polynomial Chaos Expansions Using an Ocean General Circulation Model Database," Comput. Geosci., 17(6), pp. 899–911.
[23] LeCun, Y., Bengio, Y., and Hinton, G., 2015, "Deep Learning," Nature, 521(7553), pp. 436–444.
[24] Abraham, A., 2005, "Artificial Neural Networks," Handbook of Measuring System Design, Wiley, Chichester, UK.
[25] Yang, H., Griffiths, P. R., and Tate, J. D., 2003, "Comparison of Partial Least Squares Regression and Multi-Layer Neural Networks for Quantification of Nonlinear Systems and Application to Gas Phase Fourier Transform Infrared Spectra," Anal. Chim. Acta, 489(2), pp. 125–136.
[26] Diaconescu, E., 2008, "The Use of NARX Neural Networks to Predict Chaotic Time Series," WSEAS Trans. Comput. Res., 3(3), pp. 182–191. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.472.8071&rep=rep1&type=pdf
[27] Orchard, M., Kacprzynski, G., Goebel, K., Saha, B., and Vachtsevanos, G., 2008, "Advances in Uncertainty Representation and Management for Particle Filtering Applied to Prognostics," International Conference on Prognostics and Health Management (PHM 2008), pp. 1–6.
[28] Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T., 2002, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Trans. Signal Process., 50(2), pp. 174–188.
[29] Bodie, M., Pamphile, T., Zumberge, J., Baudendistel, T., and Boyd, M., 2016, "Air Cycle Machine for Transient Model Validation," SAE Technical Paper No. 2016-01-2000.
[30] Moir, I., and Seabridge, A. G., 2008, Aircraft Systems: Mechanical, Electrical, and Avionics Subsystems Integration, Wiley, Chichester, UK.
[31] Neese, B., 1999, Aircraft Environmental Systems, Endeavor Books, Casper, WY.
[32] Rodgers, J. L., and Nicewander, W. A., 1988, "Thirteen Ways to Look at the Correlation Coefficient," Am. Stat., 42(1), pp. 59–66.
[33] Arlot, S., and Celisse, A., 2010, "A Survey of Cross-Validation Procedures for Model Selection," Stat. Surv., 4, pp. 40–79.

Figures

Fig. 1: A static ANN generated in MATLAB with two hidden layers of 40 and 10 neurons, respectively

Fig. 2: Two scenarios in computing the model reliability with multiple outputs: (a) simultaneous realization and (b) sequential realization

Fig. 3: Implementation of the observation surrogate method: (a) training and (b) prediction

Fig. 4: Implementation of the bias surrogate method: training

Fig. 5: Implementation of the bias surrogate method

Fig. 6: ACM test bench with locations of observed variables identified [29]

Fig. 7: Pearson correlation coefficients of measured variables for the observation dataset

Fig. 8: Mean prediction of the observed T3 for Test 7 of the validation dataset: (a) observation surrogate and (b) bias surrogate

Fig. 9: Uncertainty in predicting the observed T3 with (a) observation surrogate and (b) bias surrogate

Fig. 10: Predicted bias in output variable T3: (a) observation surrogate and (b) bias surrogate

Fig. 11: Input and output variables across the three components

Fig. 12: Bias in temperature and pressure outputs across time from the observation surrogate and bias surrogate methods at component 3

Fig. 13: Bias in temperature and pressure outputs across time from the observation surrogate and bias surrogate methods at component 4

Fig. 14: Bias in temperature output across time from the observation surrogate and bias surrogate methods at component 5

Fig. 15: Accumulated model reliability metric for all five output variables (dashed lines represent true values, solid lines are predictions): (a) observation surrogate and (b) bias surrogate

Fig. 16: Notional schematic describing the system reliability calculation

Fig. 17: Accumulated system-level model reliability metric
