To estimate a structure’s mechanical behavior, predictive numerical studies have become common practice in mechanical engineering. Finite element modeling is now involved at every stage of a mechanical design project, from material behavior estimation (especially for composite materials), through intermediate assemblies, all the way up to full structural computations.
At each of these stages, simulations can predict the behavior of the virtual structure, based on sets of parameters established in technical specifications. Occasional tests on prototypes naturally remain necessary to verify the consistency of the simulation results with real-world experience, but simulations are taking a larger share of the design process than ever before.
The overarching influence of simulation on mechanical design has two main consequences:
- Budgets allocated to mechanical component development are regularly decreasing, because many prototype tests tend to be replaced by simulations.
- Simulation reliability must keep improving, because actual prototype testing is constantly postponed to the later stages of a project and so involves higher risks.
Following these trends, technical decision makers allocate shorter and shorter development periods to design offices, even for complex projects in which entire subsystems must be redesigned from the ground up or new materials are introduced. Nonetheless, one aspect of simulation is often neglected and can compromise the reliability of a model: the difference between verifying and validating.
Verifying a simulation model means making sure that the assumed constitutive behavior (often expressed by an analytical equation linking strain and stress) is accurately reproduced by the simulation algorithm. It can be evaluated on simple test cases, commonly used to assess the reliability of a finite element code. Nowadays, all commercial software is carefully verified to ascertain that the equations available to the end-user are correctly ‘transcribed’ into the algorithm.
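As a minimal illustration of what such a verification test case can look like (this sketch is not taken from any particular commercial code, and the solver, parameter values and tolerance are all illustrative assumptions): a tiny 1D finite element bar solver can be checked against the known analytical linear-elastic solution u(L) = FL/(EA).

```python
# Minimal verification sketch (illustrative, not from the article):
# check that a tiny 1D finite element solver reproduces the analytical
# solution for a linear-elastic bar under an end load, u(L) = F*L/(E*A).
import numpy as np

def bar_tip_displacement(E, A, L, F, n_elem):
    """Assemble and solve a clamped 1D bar with n_elem linear elements."""
    n_nodes = n_elem + 1
    le = L / n_elem                      # element length
    k = E * A / le                       # element axial stiffness
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = F                            # end load
    u = np.zeros(n_nodes)                # node 0 is clamped (u = 0)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u[-1]

E, A, L, F = 210e9, 1e-4, 2.0, 1000.0    # illustrative steel bar values
u_fem = bar_tip_displacement(E, A, L, F, n_elem=10)
u_exact = F * L / (E * A)
assert abs(u_fem - u_exact) / u_exact < 1e-9  # verification passes
```

For this constant-strain problem, linear elements reproduce the analytical solution to machine precision; a verification suite for a real code runs many such cases, one per constitutive equation offered to the end-user.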
For a structural calculation engineer, validating a model means asking and answering several questions: What is the applied load? What is the exact shape of the component? What assumptions should be made to model the interaction between two components? Etc. The engineer’s technical know-how consists in giving the best answers based on personal experience and knowledge of the actual mechanical assembly. The more groundbreaking the product, the further these assumptions can be from reality when the actual prototype is put to the test…
Some conjecture is involved in the previous process. This conjecture is the reason why, even with the most qualified engineers designing a testing setup, unforeseen experimental setbacks may still occur when the actual test is carried out, to the detriment of the whole project. Should this happen, last-minute contingency efforts become unavoidable to recalibrate the simulation so that it corresponds more closely to the experimental measurements. In a worst-case scenario, additional test campaigns must also be carried out to determine the cause of discrepancies between simulations and experiments. Since structural tests usually carry a high financial cost (between 5k€ and 200k€ depending on the assembly complexity), such setbacks are doubly detrimental: production will be delayed, and additional costs and effort will also be required to complete the project.
To assist companies with this critical issue, the concept of using machine learning to compare experimental and model data directly within a “digital twin” has been gaining momentum. A digital twin is a numerical representation of the observed assembly that incorporates all the data necessary to validate the assembly’s operating behavior by comparing model responses to real-world information. If needed, it also allows live correction of the asset’s behavior by outputting commands directly to the asset. In the maintenance field, for example, digital twins can be used to monitor and record the crucial data of an asset (temperature, humidity, rotational speed, etc.) so that the operator can instantly detect discrepancies with their own predictions. This principle can be generalized to entire factories…
Even though most current digital twins operate online, which makes it easier for production teams to gather live information about their operating systems, digital twins can of course also be used offline, after operation, so that the real-world behavior they recorded can inform the study of an actual operating case. The application of this concept to 3D structural computation in mechanical engineering is exactly what structural engineers need: when a mechanical characterization test has been carried out (or is being carried out), it is necessary to make sure that the simulation predicted the experimental behavior satisfactorily.
So, what does a digital twin look like in a practical use case? It is a piece of software into which you can import both measurement data from a variety of sources and the ‘theoretical’ model representing your component. In this framework, cameras (whose images EikoTwin will process) can be seen as just one source of experimental data among many. As sensors, however, cameras bring some very specific advantages to digital twins:
- They make it possible to carry out model-experiment comparisons over the entire component surface (not just at discrete points), and to compute a generalized model error. You can think of this function as the equivalent of MS Word’s compare feature: you can instantly tell where the similarities and discrepancies in your data are.
- At any stage, they enable modifications to the model input parameters to take additional information on experimental boundary conditions into account. Furthermore, a sensitivity analysis can be carried out to determine which parameters should be modified first to reduce the gap between simulation and experiment (material parameters, interfaces, etc.).
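The two ideas above, a generalized full-field model error and a sensitivity ranking of input parameters, can be sketched as follows. Everything here is an illustrative assumption: the function names, the toy surrogate standing in for an FE solve, and the parameters `E` and `kb` are invented for the example and are not EikoTwin’s actual API.

```python
# Hedged sketch (names and model are illustrative, not EikoTwin's API):
# a full-field model error plus a finite-difference sensitivity ranking.
import numpy as np

def model_error(u_sim, u_meas):
    """RMS gap between simulated and measured displacement fields,
    evaluated over the whole observed surface, not at isolated points."""
    return np.sqrt(np.mean((u_sim - u_meas) ** 2))

def simulate(params, x):
    """Toy surrogate for an FE solve: a displacement field depending on
    a material stiffness E and a boundary stiffness kb (both invented)."""
    return x ** 2 / params["E"] + x / params["kb"]

x = np.linspace(0.0, 1.0, 200)                # surface points seen by cameras
u_meas = simulate({"E": 1.0, "kb": 5.0}, x)   # stand-in for measured data
params = {"E": 1.2, "kb": 5.0}                # current (imperfect) model

# Rank parameters by how much a 1 % relative change moves the error.
base = model_error(simulate(params, x), u_meas)
sensitivity = {}
for name in params:
    perturbed = dict(params)
    perturbed[name] *= 1.01
    sensitivity[name] = abs(model_error(simulate(perturbed, x), u_meas) - base)

ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
# In this toy setup the error is dominated by E, so E should be
# recalibrated first.
```

In a real digital twin the surrogate would be replaced by an actual finite element solve, and each sensitivity evaluation would cost one simulation run, which is why identifying the dominant parameters before recalibrating matters.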