Numerical simulations have become widely used in scientific work. Like experiments, simulations generate large quantities of numbers (output data) that require analysis and raise constant concerns about uncertainty and error. How do simulationists convince themselves, and others, of the credibility of their output? The present analysis reconstructs the perspectives involved in performing numerical simulations in general, and the situations in which simulationists deal with uncertain output in particular. Starting from a distinction between idealized and realistic simulations, the paper presents the principal methods of evaluation associated with these practices and shows how different audiences expect different methods. One major challenge in interpreting output data is distinguishing between real and numerical effects. Within the practice of idealized simulations, simulationists hold the underlying model accountable for results that manifest real effects; yet because numerical and real effects cannot be distinguished on the basis of what they derive from, attempted causal explanations are better understood as justifications for the simulationists' conclusions. At the same time, these explanations are part and parcel of simulationists' contradictory perspectives: they believe in simulations largely because of the underlying model, while painfully recognizing everything they must add to that model to make computations feasible.