SAM
https://sam.ensam.eu:443
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Feed generated: Fri, 12 Jul 2024 15:20:40 GMT
http://hdl.handle.net/10985/20416
Learning data-driven reduced elastic and inelastic models of spot-welded patches
REILLE, Agathe; CHAMPANEY, Victor; DAIM, Fatima; TOURBIER, Yves; HASCOET, Nicolas; GONZALEZ, David; CUETO, Elias; DUVAL, Jean Louis; CHINESTA SORIA, Francisco
Solving mechanical problems in large structures with rich localized behaviors remains a challenging issue despite the enormous advances in numerical procedures and computational performance. In particular, these localized behaviors require extremely fine descriptions, which impacts, on the one hand, the number of degrees of freedom and, on the other hand, the time step employed in usual explicit time integrations, whose stability scales with the size of the smallest element in the mesh. In the present work we propose a data-driven technique for learning the rich behavior of a local patch and integrating it into a standard coarser description at the structure level. Thus, localized behaviors impact the global structural response without needing an explicit description of those fine-scale behaviors.
Published: 2021-01-01
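The patch-learning idea above can be sketched in a toy setting: sample an expensive fine-scale patch solve offline, then replace it online by a cheap learned surrogate. This is an illustration only, not the authors' actual reduced-model construction; all names, and the piecewise-linear table standing in for the learned model, are hypothetical.

```python
# Illustrative sketch (not the paper's method): an expensive nonlinear
# fine-scale patch response is sampled offline and replaced online by a
# cheap piecewise-linear lookup surrogate. Names are hypothetical.
import bisect

def fine_patch_response(u):
    """Stand-in for a costly fine-scale solve of the local patch."""
    return 3.0 * u + 0.5 * u ** 3

# Offline: sample the fine model over the expected loading range
us = [i / 10.0 for i in range(-10, 11)]
table = [(u, fine_patch_response(u)) for u in us]

def surrogate(u):
    """Online: piecewise-linear interpolation in the precomputed table."""
    xs = [x for x, _ in table]
    i = bisect.bisect_left(xs, u)
    i = min(max(i, 1), len(xs) - 1)        # clamp to a valid segment
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    t = (u - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1
```

In a coupled setting, a coarse structural solver would call `surrogate` wherever the fine patch response is needed, so the fine scale never has to be resolved explicitly online.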
http://hdl.handle.net/10985/22204
Surrogate parametric metamodel based on Optimal Transport
TORREGROSA, Sergio; CHAMPANEY, Victor; HERBERT, Vincent; AMMAR, Amine; CHINESTA SORIA, Francisco
The description of a physical problem through a model necessarily involves the introduction of parameters. Hence, one wishes to have a solution of the problem that is a function of all these parameters: a parametric solution. However, the construction of such parametric solutions exhibiting localization in space is only ensured by costly and time-consuming tests, which can be either numerical or experimental. The numerical methodologies classically used imply enormous computational efforts for exploring the design space. Therefore, parametric solutions obtained using advanced nonlinear regressions are an essential tool to address this challenge. However, classical regression techniques, even the most advanced ones, can lead to non-physical interpolation in some fields such as fluid dynamics, where the solution localizes in different regions depending on the choice of problem parameters. In this context, Optimal Transport (OT) offers a mathematical approach to measure distances and interpolate between general objects in a, sometimes, more physical way than the classical interpolation approach. Thus, OT has become fundamental in fields such as statistics or computer vision, and it is being increasingly used in computational mechanics. However, the OT problem is usually computationally costly to solve and not adapted to online access. Therefore, the aim of this paper is to combine advanced nonlinear regressions with Optimal Transport in order to implement a parametric real-time model based on OT. To this purpose, a parametric model is built offline relying on Model Order Reduction and OT, leading to a real-time interpolation tool following Optimal Transport theory. Such a tool is of major interest in design processes, but also within the digital twin rationale.
Published: 2021-11-30
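The contrast between OT interpolation and classical (Euclidean) interpolation can be illustrated in one dimension, where the optimal transport map between equal-weight samples simply pairs them in sorted order and McCann's displacement interpolation moves each sample along a straight line. This is a minimal sketch of the underlying idea, not the paper's full offline/online methodology.

```python
# Hedged sketch: 1D displacement interpolation between two equal-size,
# equal-weight point clouds. The OT map pairs sorted samples; mass is
# translated rather than averaged, avoiding non-physical "mixtures".
def displacement_interpolation(x, y, t):
    """Interpolate between two 1D sample sets at time t in [0, 1]."""
    xs, ys = sorted(x), sorted(y)
    return [(1 - t) * a + t * b for a, b in zip(xs, ys)]

source = [0.0, 1.0, 2.0]     # samples of the source distribution
target = [4.0, 5.0, 6.0]     # samples of the target distribution
midpoint = displacement_interpolation(source, target, 0.5)
```

Note the design choice this illustrates: a Euclidean average of the two densities would be bimodal (half the mass near 1, half near 5), whereas the OT interpolant is a single localized cluster translated halfway, which is often the physically meaningful behavior for localized solutions.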
http://hdl.handle.net/10985/20468
Learning the Parametric Transfer Function of Unitary Operations for Real-Time Evaluation of Manufacturing Processes Involving Operations Sequencing
LOREAU, Tanguy; CHAMPANEY, Victor; HASCOËT, Nicolas; MOURGUE, Philippe; DUVAL, Jean-Louis; CHINESTA SORIA, Francisco
To better design manufacturing processes, surrogate models have been widely considered in the past, where the effect of different material and process parameters is captured through a parametric solution. The latter contains the solution of the model describing the system under study for any choice of the selected parameters. These surrogate models, also known as meta-models, virtual charts or computational vademecums in the context of model order reduction, have been successfully employed in a variety of industrial applications. However, they remain confronted with a major difficulty when the number of parameters grows. Thus, processes involving trajectories or sequencing entail a combinatorial explosion (curse of dimensionality), not only due to the number of possible combinations, but also due to the number of parameters needed to describe the process. The present paper proposes a promising route for circumventing, or at least alleviating, that difficulty. The proposed technique consists of a parametric transfer function that, once learned, allows inferring, from a given state, the new state after the application of a unitary operation, defined as a step in the sequenced process. Thus, any sequencing can be evaluated almost in real time by chaining that unitary transfer function, whose output becomes the input of the next operation. The benefits and potential of such a technique are illustrated on a problem of industrial relevance: the deformation induced in a structural part when printing a series of stiffeners on it.
Published: 2021-01-01
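The chaining idea can be sketched with a toy stand-in for the learned one-step map: once a transfer function `f(state, op)` is available, any operation sequence is evaluated by folding it over the sequence, the output of one step feeding the next. The linear update below is purely hypothetical; the paper's transfer function is learned from simulation data.

```python
# Illustrative sketch: evaluating a sequenced process by chaining a single
# learned unitary transfer function. The linear update is a toy stand-in.
def transfer(state, op):
    """One-step map: new state after applying one unitary operation."""
    gain, offset = op            # hypothetical operation parameters
    return gain * state + offset

def evaluate_sequence(initial_state, operations):
    state = initial_state
    for op in operations:        # chaining: each output becomes the next input
        state = transfer(state, op)
    return state

ops = [(1.0, 2.0), (0.5, 0.0), (1.0, 3.0)]   # one candidate sequencing
final = evaluate_sequence(0.0, ops)
```

Because only the one-step map is learned, alternative sequencings are explored at the cost of a few function evaluations each, rather than one full simulation per sequence; reordering the operations generally changes the outcome, which is exactly what such a surrogate lets one scan quickly.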
http://hdl.handle.net/10985/23262
Hybrid twins based on optimal transport
TORREGROSA, Sergio; CHAMPANEY, Victor; AMMAR, Amine; HERBERT, Vincent; CHINESTA SORIA, Francisco
Nowadays, data is acquiring an indisputable importance in every field, including engineering. In the past, experimental data was used to calibrate state-of-the-art models; once the model was optimally calibrated, numerical simulations were run. However, data can offer much more, playing a more important role in the modeling/simulation process than calibration or statistical analysis. Indeed, today data is gathered and used to train models able to replace complex engineering systems: the more and better the training data, the more accurate the model. However, in engineering, experimental data is usually the best data but also the most expensive in time and effort. Therefore, numerical simulations, cheaper and faster, are used instead but, even if they are close to reality, they always present an error related to the engineer's ignorance of the complex real system. It seems thus coherent to take advantage of each approach. This leads to the “hybrid twin” rationale. On the one hand, numerical simulations are computed as the primary data source, assuming their inherent error. On the other hand, some experimental data is gathered to train a machine learning correction model which fills the prediction-measurement gap. However, learning this ignorance gap becomes difficult in some fields such as fluid dynamics, where a regression over localized solutions can lead to non-physical interpolated solutions. Therefore, the “hybrid twin” methodology proposed in this article relies on Optimal Transport theory, which provides a mathematical framework to measure distances between general objects and a completely different approach to interpolation between functions.
Published: 2022-10-01
http://hdl.handle.net/10985/20595
Data-Driven Modeling for Multiphysics Parametrized Problems-Application to Induction Hardening Process
DEROUICHE, Khouloud; GAROIS, Sevan; CHAMPANEY, Victor; DAOUD, Monzer; TRAIDI, Khalil; CHINESTA SORIA, Francisco
Data-driven modeling provides an efficient approach to compute approximate solutions for complex multiphysics parametrized problems such as the induction hardening (IH) process. Some physical quantities of interest (QoI) related to the IH process are evaluated under real-time constraints, without any explicit knowledge of the physical behavior of the system. Hence, computationally expensive finite element models are replaced by a parametric solution, called a metamodel. Two data-driven models for the temporal evolution of temperature and austenite phase transformation during induction heating were first developed, using a proper orthogonal decomposition (POD)-based reduced-order model followed by a nonlinear regression method for the temperature field, and a classification combined with regression for the austenite evolution. Then, data-driven and hybrid models were created to predict hardness after quenching. It is shown that the results of the artificial intelligence models are promising and provide good approximations in the low-data limit.
Published: 2021-01-01
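The POD-plus-regression pipeline can be illustrated on synthetic rank-1 snapshot data: extract the dominant POD mode (here by power iteration on the spatial correlation matrix, a stand-in for the SVD), then the modal coefficient becomes the quantity to regress against the parameter. Everything below is a toy, not the paper's IH model.

```python
# Hedged sketch: POD mode extraction by power iteration on C = S^T S,
# followed by projection to get the modal coefficient a(p) per parameter.
import math

params = [1.0, 2.0, 3.0, 4.0]
mode_true = [0.6, 0.8, 0.0]                       # unit-norm spatial mode
snapshots = [[2.0 * p * m for m in mode_true] for p in params]  # rank-1 data

dim = len(mode_true)
# Spatial correlation matrix C[i][j] = sum over snapshots of s_i * s_j
C = [[sum(s[i] * s[j] for s in snapshots) for j in range(dim)] for i in range(dim)]

# Power iteration for the dominant eigenvector (the leading POD mode)
v = [1.0] * dim
for _ in range(50):
    w = [sum(C[i][j] * v[j] for j in range(dim)) for i in range(dim)]
    nrm = math.sqrt(sum(x * x for x in w))
    v = [x / nrm for x in w]

# Modal coefficients a(p) = <snapshot, mode>; here a(p) = 2p, so a simple
# regression of a on p would complete the metamodel.
coeffs = [sum(s[i] * v[i] for i in range(dim)) for s in snapshots]
```

On real temperature fields one would keep several modes and feed the coefficient-versus-parameter pairs to a nonlinear regressor, but the offline/online split is the same.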
http://hdl.handle.net/10985/22377
Parametric Curves Metamodelling Based on Data Clustering, Data Alignment, POD-Based Modes Extraction and PGD-Based Nonlinear Regressions
CHAMPANEY, Victor; PASQUALE, Angelo; AMMAR, Amine; CHINESTA SORIA, Francisco
In the context of parametric surrogates, several nontrivial issues arise when a whole curve must be predicted from given input features. For instance, different sampling or ending points lead to non-aligned curves. This also happens when the curves exhibit a common pattern characterized by critical points at shifted locations (e.g., in mechanics, the elastic-plastic transition or the rupture point of a material). In such cases, classical interpolation methods fail to give physics-consistent results and appropriate pre-processing steps are required. Moreover, when bifurcations occur in the parametric space, a coupling with clustering and classification algorithms is needed to enhance the accuracy of the surrogate. In this work we present several methodologies to overcome these issues. We also exploit such surrogates to quantify and propagate uncertainty, furnishing parametric statistical bounds for the predicted curves. The procedures are exemplified on two problems in Computational Mechanics.
Published: 2022-06-01
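The alignment pre-processing can be sketched in a toy way: shift each curve so that its critical point (here, its peak) lands at a common location before averaging or interpolating, so the interpolant keeps a single, physically meaningful peak instead of smearing two shifted ones. This is an illustration of the idea only, far simpler than the paper's clustering/POD/PGD pipeline.

```python
# Hedged sketch: align discrete curves on a shared critical point (their
# peak) before combining them, avoiding a non-physical bimodal average.
def peak_index(curve):
    return max(range(len(curve)), key=lambda i: curve[i])

def align_on_peak(curve, target_peak):
    """Shift the curve (with edge padding) so its peak lands at target_peak."""
    shift = target_peak - peak_index(curve)
    n = len(curve)
    return [curve[min(max(i - shift, 0), n - 1)] for i in range(n)]

a = [0, 1, 5, 1, 0, 0, 0]     # peak at index 2
b = [0, 0, 0, 1, 7, 1, 0]     # same pattern, peak shifted to index 4
target = 3
aa, bb = align_on_peak(a, target), align_on_peak(b, target)
mean_aligned = [(x + y) / 2 for x, y in zip(aa, bb)]
```

The naive pointwise mean of `a` and `b` has two bumps; the aligned mean has one peak at the target location, after which an inverse shift (itself interpolated over the parameters) restores the physical abscissa.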
http://hdl.handle.net/10985/22196
Parametric analysis and machine learning-based parametric modeling of wire laser metal deposition induced porosity
LOREAU, Tanguy; CHAMPANEY, Victor; HASCOET, Nicolas; LAMBARRI, Jon; MADARIETA, Mikel; GARMENDIA, Iker; CHINESTA SORIA, Francisco
Additive manufacturing is an appealing solution to produce geometrically complex parts that are difficult to manufacture using traditional technologies. The extreme process conditions, in particular the high temperature, complex interactions and couplings, rich metallurgical transformations and combinatorial deposition trajectories, induce numerous process defects, in particular porosity. Numerically simulating the appearance of porosity remains extremely complex because of the multiple physics involved in the laser-material interaction and the multiple space and time scales, with a strong impact on simulation efficiency and performance. Moreover, when analyzing parts built up using the wire laser metal deposition (wLMD) technology, a significant variability in porosity size and distribution can be noticed even when the process parameters remain unchanged. For these reasons, the present paper proposes an alternative modeling approach based on neural networks to express the porosity as a function of different process parameters extracted from the process analysis.
Published: 2022-04-01
http://hdl.handle.net/10985/24661
Identification of material parameters in low-data limit: application to gradient-enhanced continua
NGUYEN, Duc-Vinh; JEBAHI, Mohamed; CHAMPANEY, Victor; CHINESTA SORIA, Francisco
Due to the growing trend towards miniaturization, small-scale manufacturing processes have become widely used in various engineering fields to manufacture miniaturized products. These processes generally exhibit complex size effects, making the behavior of materials highly dependent on their geometric dimensions. As a result, accurate understanding and modeling of such effects are crucial for optimizing manufacturing outcomes and achieving high-performance final products. To this end, advanced gradient-enhanced plasticity theories have emerged as powerful tools for capturing these complex phenomena, offering a level of accuracy significantly greater than that provided by classical plasticity approaches. However, these advanced theories often require the identification of a large number of material parameters, which poses a significant challenge due to the limited experimental data at small scales and the high computation costs. The present paper aims at evaluating and comparing the effectiveness of various optimization techniques, including evolutionary algorithms, response surface methodology and Bayesian optimization, in identifying the material parameters of a recent flexible gradient-enhanced plasticity model developed by the authors. The findings represent an attempt to bridge the gap between advanced material behavior theories and their practical industrial applications, by offering insights into efficient and reliable material parameter identification procedures.
Published: 2024-01-01
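The identification loop all the compared methods share (propose parameters, evaluate the model, score the misfit against experiments, keep the best) can be sketched with the simplest possible optimizer. The paper compares evolutionary algorithms, response surface methodology and Bayesian optimization; the seeded random search and one-parameter toy model below are stand-ins for illustration only.

```python
# Hedged sketch: misfit-driven identification of one material parameter
# from synthetic "test" data, using a plain seeded random search as a
# stand-in for the evolutionary/RSM/Bayesian optimizers of the paper.
import random

def model(p, strain):
    """Toy constitutive response with one unknown material parameter p."""
    return p * strain ** 2

true_p = 2.5
observations = [(e, model(true_p, e)) for e in [0.1, 0.2, 0.3]]  # synthetic tests

def misfit(p):
    """Sum of squared prediction-measurement gaps."""
    return sum((model(p, e) - obs) ** 2 for e, obs in observations)

rng = random.Random(0)                     # seeded for reproducibility
best_p, best_cost = None, float("inf")
for _ in range(2000):
    cand = rng.uniform(0.0, 10.0)
    cost = misfit(cand)
    if cost < best_cost:
        best_p, best_cost = cand, cost
```

The low-data difficulty the paper addresses shows up here too: with only three observations, many misfit functions would be flat or multimodal, which is precisely why sample-efficient optimizers such as Bayesian optimization become attractive when each `misfit` evaluation is an expensive simulation.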
http://hdl.handle.net/10985/24797
Describing and Modeling Rough Composites Surfaces by Using Topological Data Analysis and Fractional Brownian Motion
RUNACHER, Antoine; KAZEMZADEH-PARSI, Mohammad-Javad; DI LORENZO, Daniele; CHAMPANEY, Victor; HASCOET, Nicolas; CHINESTA SORIA, Francisco; AMMAR, Amine
Many composite manufacturing processes employ the consolidation of pre-impregnated preforms. However, in order to obtain adequate performance of the formed part, intimate contact and molecular diffusion across the different preform layers must be ensured. The latter takes place as soon as intimate contact occurs and the temperature remains high enough during the molecular reptation characteristic time. The former, in turn, depends on the applied compression force, the temperature and the composite rheology, which, during processing, induce the flow of asperities that promotes intimate contact. Thus, the initial roughness and its evolution during the process become critical factors in composite consolidation. Process optimization and control require an adequate model able to infer the consolidation degree from the material and process features. The parameters associated with the process are easily identifiable and measurable (e.g., temperature, compression force, process time, …). The ones concerning the materials are also accessible; however, describing the surface roughness remains an issue. Usual statistical descriptors are too poor and, moreover, too far from the involved physics. The present paper focuses on the use of advanced descriptors outperforming usual statistical descriptors, in particular those based on persistent homology (at the heart of so-called topological data analysis, TDA), and their connection with fractional Brownian surfaces.
The latter constitute a performant surface generator able to represent the surface evolution all along the consolidation process, as the present paper emphasizes.
Published: 2023-01-01
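The fractional-Brownian side of the story can be sketched in one dimension: a random midpoint displacement generator whose increment variance shrinks by a Hurst-controlled factor at each refinement level, producing rougher profiles for smaller Hurst exponents. This toy 1D generator stands in for the 2D fractional Brownian surfaces of the paper; parameter names are illustrative.

```python
# Hedged sketch: 1D fractional-Brownian-motion profile by random midpoint
# displacement. Smaller Hurst exponents give rougher profiles; a 2D analogue
# of this generator yields synthetic rough surfaces.
import random

def fbm_profile(levels, hurst, seed=0):
    rng = random.Random(seed)
    pts = [0.0, 0.0]                      # endpoints of the profile
    scale = 1.0
    for _ in range(levels):
        scale *= 0.5 ** hurst             # displacement scale shrinks per level
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt.append(a)
            nxt.append((a + b) / 2 + rng.gauss(0.0, scale))
        nxt.append(pts[-1])
        pts = nxt
    return pts

profile = fbm_profile(levels=6, hurst=0.7)   # 2**6 + 1 = 65 sample points
```

Such a generator is what lets one *synthesize* surfaces matching measured roughness descriptors, and hence represent the surface as it evolves during consolidation rather than only characterize a single measured snapshot.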
http://hdl.handle.net/10985/24737
Parametric Damage Mechanics Empowering Structural Health Monitoring of 3D Woven Composites
JACOT, Maurine; CHAMPANEY, Victor; CHINESTA SORIA, Francisco; CORTIAL, Julien
This paper presents a data-driven structural health monitoring (SHM) method based on so-called reduced-order models, relying on offline training and online use, for unidirectional fiber and matrix failure detection in a 3D woven composite plate. During the offline (learning) phase, a dataset of possible damage localizations and fiber and matrix failure ratios is generated through high-fidelity simulations (ABAQUS software). Then, a reduced model in a lower-dimensional approximation subspace is constructed, based on the so-called sparse proper generalized decomposition (sPGD). The parametrized approach of the sPGD method reduces the computational burden associated with a high-fidelity solver and allows a faster evaluation of all possible failure configurations. However, during the testing phase, it turns out that the classical sPGD fails to capture the influence of the damage localization on the solution. To alleviate these difficulties, the present work proposes an adaptive sPGD. First, a change of variable is carried out to place all the damage areas on the same reference region, where an adapted interpolation can be done. During the online use, an optimization algorithm is employed with numerical experiments to evaluate the damage localization and damage ratio, which define the health state of the structure.
Published: 2023-01-01
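The online phase described above (invert the fast surrogate against measured sensor data to recover the damage state) can be sketched with a toy analytic surrogate in place of the sPGD model, and an exhaustive grid search in place of the paper's optimization algorithm. All functions and values are hypothetical.

```python
# Hedged sketch of the online SHM inverse problem: find the damage state
# whose surrogate-predicted sensor readings best match the measurements.
import math

def surrogate_response(damage_loc, damage_ratio, sensor_x):
    """Cheap parametric stand-in for the sPGD model: one sensor reading."""
    return damage_ratio * math.exp(-abs(sensor_x - damage_loc))

sensors = [0.0, 0.25, 0.5, 0.75, 1.0]
true_loc, true_ratio = 0.6, 0.4
measured = [surrogate_response(true_loc, true_ratio, x) for x in sensors]

# Grid search over (localization, ratio); a real system would use a
# gradient-based or global optimizer, but the surrogate makes even
# brute force affordable online.
best = min(
    ((loc / 100.0, ratio / 100.0)
     for loc in range(101) for ratio in range(101)),
    key=lambda c: sum(
        (surrogate_response(c[0], c[1], x) - m) ** 2
        for x, m in zip(sensors, measured)
    ),
)
```

Because each surrogate evaluation is nearly free, the full 101 x 101 candidate grid is scanned in milliseconds, which is the point of replacing the high-fidelity solver during the online use.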