(1): I do not understand how the word “simulation” in the title can be misinterpreted as “code” or “software.”
(2) and (3): I believe the confusion arises from his use of the definition of uncertainty as "a potential deficiency in any phase of the modeling process that is due to lack of knowledge," which does not quantify a range within which the truth lies with a specified degree of confidence, as opposed to the concepts and definitions of current experimental uncertainty analysis [1], which do quantify such a range.
The ranges D ± U_D and S ± U_S both contain (with 95% confidence) the truth T, which is independent of experiment or simulation. The assumption (also made in Oberkampf and Trucano, 2000) that D is "an individual experimental measurement" is inaccurate. The experimental result is D, and U_D is the uncertainty that accounts for any averaging, any correlated systematic uncertainties, and any correlated random uncertainties [1].
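The interval interpretation of uncertainty can be made concrete with a minimal Monte Carlo sketch. All values here are hypothetical, and I assume U_D denotes a 95%-confidence expanded uncertainty of the experimental result D (taken as 1.96 times a normal standard uncertainty); the point is only that such an interval contains the truth T about 95% of the time, which the deficiency-based definition of uncertainty does not provide.

```python
import random

random.seed(42)

T = 10.0            # hypothetical truth (unknown in a real experiment)
SIGMA = 0.5         # hypothetical standard uncertainty of the measurement
U_D = 1.96 * SIGMA  # 95%-confidence expanded uncertainty (normal assumption)

N = 20_000
covered = 0
for _ in range(N):
    D = random.gauss(T, SIGMA)    # one experimental result
    if D - U_D <= T <= D + U_D:   # does the range D +/- U_D contain T?
        covered += 1

coverage = covered / N  # fraction of intervals containing the truth
print(coverage)         # close to 0.95 by construction
```

The same check applies symmetrically to a simulation range S ± U_S; in neither case does the truth T depend on which range is being examined.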
M is the simulation result obtained when the continuous equations are solved exactly with no uncertainty in the inputs; it still includes the errors due to modeling assumptions. Thus, the assumption that M is "the true value from the model" is inaccurate. Again, the true value T is independent of experiment or simulation.
After enlightening discussions over the last two years (particularly with Patrick Roache), my view has evolved: I now consider "a validated simulation" to mean that a simulation has undergone the validation process and that a level of validation (the larger of |E| and U_V) has been established. I agree with Oberkampf that "the magnitude of the measure" (in my words, the level of validation) "is not an issue as far as validation is concerned." However, it follows logically that the qualification "from the perspective of the intended uses of the model" should not be part of a definition of validation, since the level of validation of a simulation variable is independent of the intended use of the model.
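The level of validation can be sketched numerically. All values below are hypothetical, and the composition of the validation uncertainty U_V (here assumed to combine the experimental uncertainty with numerical and input-parameter simulation uncertainties in quadrature) is my reading of the framework, not a quotation from this discussion:

```python
import math

# Hypothetical values for one validation comparison (illustrative only).
D, U_D = 1.07, 0.04        # experimental result and its 95% uncertainty
S = 1.12                   # simulation result
U_SN, U_SPD = 0.02, 0.03   # assumed numerical and input-parameter uncertainties

E = D - S  # comparison error between experiment and simulation

# Assumed quadrature composition of the validation uncertainty.
U_V = math.sqrt(U_D**2 + U_SN**2 + U_SPD**2)

# Level of validation: the larger of |E| and U_V.
level_of_validation = max(abs(E), U_V)
```

Note that nothing in this computation involves the intended use of the model; whether the resulting level of validation is small enough for a given application is a separate engineering judgment.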