Understanding and controlling the sources of discrepancy between tests and simulations
- By Renaud Gras, Co-founder – CTO of EikoSim
Adjustment and validation of simulations
Simulations play an increasing role in the design, qualification and certification of industrial products by forming the basis of strategic decisions. The role of experimental testing has therefore shifted as well: tests used to be the core of the demonstration of compliance, but they now serve as references for the validation of numerical simulations [1, 2].
It is therefore becoming crucial to adjust and validate numerical simulations so that they are as predictive as possible and closely reflect reality. This adjustment requires updating the mechanical behaviour of the part or structure using data collected during one or more tests designed to enrich the simulation.
Limitations of the traditional approach
Traditionally, in simple material testing, the applied forces are well controlled and conventional instrumentation, such as strain gauges, equips the test piece. These standardised tests ensure a homogeneous deformation in the sample in order to identify a model parameter [3]. In this context, determining complex models requires many tests. Moreover, this strategy cannot be applied to tests on structures, for which the geometry and the constitutive model are more complex.
During this adjustment and/or validation step, a loss of data continuity is observed. The numerical simulation provides results (including displacement fields) over the whole part; this information is extremely rich and helps identify critical areas. During the test, however, the sensors used, whether strain gauges, displacement sensors or lasers, are most often localized. They do not cover the entire part, and their measurements are not expressed in the reference frame of the simulation, which makes the comparison between test and calculation more difficult. Furthermore, when experimental and simulated data disagree, it is often extremely hard to identify the cause of the difference precisely because the measurements are localized: the actual critical area may not even have been instrumented. Does the gap come from the mechanical behaviour, the boundary conditions, or the simplifying assumptions made in the model? Answering these questions requires further simulations or even additional tests on prototypes, at extra cost and delay.

The 3D model to ensure the data continuity
To ensure this data continuity, we propose using the 3D model all the way from CAD to the validation of the simulation. The model is used:
- to define test specifications,
- to design and carry out virtual tests implementing all sensors attached to the prototype in its testing environment,
- to perform measurements during the actual test,
- and finally, to adjust and validate the numerical simulation.
To perform the last two steps, the test is instrumented with an imaging device, and measurements during the actual test are obtained by Digital Image Correlation [4] using the 3D model as a reference.

Thus, the measurement is performed directly in the reference frame of the numerical simulation, and the results are expressed on the finite element mesh. The comparison between test data and simulation data is therefore immediate, and in case of disagreement, the areas where the difference is significant can be identified. Moreover, the acquisition of a full displacement field yields a large amount of measurement data. These measurements, expressed on the finite element mesh, also enable automated model adjustment using the so-called Finite Element Model Updating (FEMU) method. The identification can be weighted by the measurement uncertainty, yielding both the new model parameters and their uncertainties with respect to the reference data. The advent of image-based measurement techniques thus enables a significant enrichment of the measured data and the simultaneous identification of several model parameters from a single test, thanks to the measured heterogeneous field [5].
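Once both displacement fields live on the same finite element mesh, locating the areas of significant disagreement reduces to a nodal difference field. The sketch below uses purely synthetic data (hypothetical node count, noise level and error region) in place of actual DIC and FE results:

```python
import numpy as np

# Synthetic stand-ins for the two displacement fields, both expressed
# on the same finite element mesh (100 nodes, 3 components each).
rng = np.random.default_rng(1)
n_nodes = 100
u_sim = rng.normal(size=(n_nodes, 3))                        # simulated nodal displacements
u_dic = u_sim + rng.normal(scale=0.02, size=(n_nodes, 3))    # "measured" field (noise only)
u_dic[40:45] += 0.5                                          # inject a local model error

gap = np.linalg.norm(u_dic - u_sim, axis=1)                  # per-node test/simulation gap
sigma_dic = 0.05                                             # assumed DIC uncertainty
suspect = np.flatnonzero(gap > 3 * sigma_dic)                # nodes beyond the noise floor
```

Nodes whose gap stays within a few standard deviations of the measurement uncertainty are indistinguishable from noise; the remaining nodes point at genuine model or boundary-condition errors.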
The 3D model becomes the basis for a digital twin around which data is gathered.

Towards an accurate estimate of the various sources of deviation
In finite element simulations, three common sources of error are encountered: discretization errors, model errors and numerical errors. Checking for discretization and numerical errors is standard practice in numerical studies, and the verification process is mature enough to keep them under control [6].
Thus, only model errors will be considered here. These are mainly due:
- to the improper specification of the boundary conditions, which differ from what has been done in the actual test conditions,
- to the fact that the constitutive parameters of the model do not reproduce the actual material behaviour accurately.
Integrating test data on the finite element mesh used for the simulation enables the adjustment of this simulation using an identification method. The article recently published by S. Roux and F. Hild provides an optimal identification procedure based on the choice of an appropriate norm taking measurement uncertainties into account [7].
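A minimal sketch of such an uncertainty-weighted identification (FEMU in Gauss-Newton form) is given below. Everything here is hypothetical: the linear toy model stands in for the FE solver, and the sensitivity matrix would in practice come from finite differences or an adjoint computation.

```python
import numpy as np

# Toy linear "simulation": nodal displacements u(p) = S @ p, so the sketch
# runs standalone. S plays the role of the sensitivity field dU/dp.
rng = np.random.default_rng(0)
n_nodes, n_params = 50, 2
S = rng.normal(size=(n_nodes, n_params))
p_true = np.array([2.0, -1.0])
sigma = 0.01                                   # acquisition-noise level (assumed known)
u_meas = S @ p_true + rng.normal(scale=sigma, size=n_nodes)

def simulate(p):
    return S @ p                               # stand-in for an FE computation

# Gauss-Newton FEMU: minimize the measurement-uncertainty-weighted residual.
p = np.zeros(n_params)
W = np.eye(n_nodes) / sigma**2                 # weighting = inverse noise covariance
for _ in range(5):
    r = u_meas - simulate(p)                   # residual field on the mesh
    H = S.T @ W @ S
    p += np.linalg.solve(H, S.T @ W @ r)

cov_p = np.linalg.inv(H)                       # parameter covariance from noise propagation
```

With this weighting, the diagonal of `cov_p` directly gives the variance induced on each identified parameter by the image acquisition noise.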
The sources of measurement uncertainty in Digital Image Correlation (DIC) are the image noise and the calibration of the camera system. It is also relevant to consider the uncertainties on the boundary conditions due to clearances in assemblies, the misalignment of the actuators, or the distribution of the applied loads.
When adjusting the simulation, it is essential to separate constitutive model errors from errors due to differences between the boundary conditions chosen in the simulation and those actually applied during the test. To become fully independent from errors in the applied loads, FE-mesh-based digital image correlation is the suitable tool, since the boundary conditions can then be measured and introduced directly into the numerical simulation, as was done on a train chassis in collaboration with ALSTOM and CETIM.

If we assume that the camera system has been calibrated correctly, the associated uncertainties can be neglected, and only the measurement uncertainties due to sensor acquisition noise remain. This noise propagates through the digital image correlation method to the measured displacement fields and, from there, through the identification method to the identified parameters. Quantifying this propagation makes it possible to derive an uncertainty on the identified parameters from the known image acquisition noise.
By enforcing the measured boundary conditions in the simulation and taking measurement uncertainties into account in the identification method, it can then be considered that the remaining discrepancies between measurement and simulation results are only due to the constitutive parameters of the model.
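This propagation can also be cross-checked numerically. The sketch below uses an entirely hypothetical one-parameter model (u = k·f): it re-identifies the parameter over many noise realizations and compares the empirical scatter with the closed-form prediction.

```python
import numpy as np

# One hypothetical parameter k, linear response u = k * f at 30 "nodes".
rng = np.random.default_rng(2)
f = np.linspace(1.0, 10.0, 30)
k_true, sigma = 3.0, 0.05                       # assumed acquisition-noise level on u

k_samples = []
for _ in range(2000):
    u = k_true * f + rng.normal(scale=sigma, size=f.size)   # noisy "measurement"
    k_samples.append((f @ u) / (f @ f))                     # least-squares estimate of k

k_std_mc = np.std(k_samples)                    # Monte Carlo scatter of the parameter
k_std_analytic = sigma / np.sqrt(f @ f)         # closed-form noise propagation
```

The Monte Carlo scatter and the analytic formula agree, which is the kind of sanity check that makes the quoted parameter uncertainties trustworthy.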
Conclusion
The 3D model is used as a digital twin throughout the design cycle and can gather full-field experimental data thanks to image processing techniques. Therefore, it provides the following benefits:
- Ensures the continuity of the design digital chain,
- Aggregates simulation data and test data,
- Validates the numerical simulation and in case of disagreement between experimental and simulated data, adjusts the implemented mechanical model.
Taking into account in the numerical simulation the boundary conditions measured by digital image correlation also makes it possible to dissociate the errors coming from the application of the boundary conditions in the test from the constitutive model errors. Finally, estimating the propagation of the image acquisition noise yields a quantitative estimate of the uncertainty on the identified parameters.
References
[1] https://www.nafems.org/professional_development/nafems_training/training/verification-validation-des-modeles-et-analyses/
[2] https://www.asme.org/products/codes-standards/v-v-101-2012-illustration-concepts-verification
[3] https://en.wikipedia.org/wiki/Tensile_testing
[4] Dubreuil, L., Dufour, JE., Quinsat, Y. et al. Exp Mech (2016) 56: 1231. https://doi.org/10.1007/s11340-016-0158-x
[5] Jan Neggers, Olivier Allix, François Hild, Stéphane Roux. Big Data in Experimental Mechanics and Model Order Reduction: Today’s Challenges and Tomorrow’s Opportunities. Archives of Computational Methods in Engineering, Springer Verlag, 2018, 25 (1), pp.143-164. https://dx.doi.org/10.1007/s11831-017-9234-3
[6] P. Ladevèze and D. Leguillon. Error estimate procedure in the finite element method and applications. SIAM J. Num. Analysis, 20(3) : 485 – 509, 1983.
[7] Stéphane Roux, François Hild. Optimal procedure for the identification of constitutive parameters from experimentally measured displacement fields. International Journal of Solids and Structures, Elsevier, in press. https://dx.doi.org/10.1016/j.ijsolstr.2018.11.008