SAM
https://sam.ensam.eu:443
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Feed generated: Wed, 24 Jul 2024 17:39:48 GMT
http://hdl.handle.net/10985/10241
PGD-Based Computational Vademecum for Efficient Design, Optimization and Control
LEYGUE, Adrien; BORDEU, Felipe; AGUADO, Jose Vicente; CUETO, Elias; GONZALEZ, David; HUERTA, Antonio; ALFARO, Icíar; AMMAR, Amine; CHINESTA SORIA, Francisco
In this paper we address a new paradigm in the field of simulation-based engineering sciences (SBES) to face the challenges posed by current ICT technologies. Despite the impressive progress attained by simulation capabilities and techniques, some challenging problems remain intractable today. These problems, which are common to many branches of science and engineering, are of different natures. Among them, we can cite those related to high-dimensional problems, which do not admit mesh-based approaches due to the exponential increase in degrees of freedom. In recent years we developed a novel technique, called Proper Generalized Decomposition (PGD). It is based on the assumption of a separated form of the unknown field, and it has demonstrated its capabilities in dealing with high-dimensional problems, overcoming the strong limitations of classical approaches. But the main opportunity offered by this technique is that it allows a completely new approach to classic problems, not necessarily high-dimensional ones. Many challenging problems can be efficiently cast into a multidimensional framework, and this opens new possibilities for solving old and new problems with strategies not envisioned until now. For instance, parameters in a model can be set as additional extra-coordinates of the model. In a PGD framework, the resulting model is solved once and for all, in order to obtain a general solution that includes the solutions for every possible value of the parameters: a sort of computational vademecum. Under this rationale, optimization of complex problems, uncertainty quantification, simulation-based control and real-time simulation are now at hand, even in highly complex scenarios, by combining an off-line stage in which the general PGD solution, the vademecum, is computed, and an on-line phase in which, even on deployed handheld platforms such as smartphones or tablets, a real-time response is obtained from our queries.
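As a minimal illustration of the vademecum idea (not the authors' implementation), assume a PGD solution has already been computed offline in separated form u(x, μ) ≈ Σᵢ Fᵢ(x)Gᵢ(μ); the modes below are made up for the sketch. The online query then reduces to a few 1D interpolations and a weighted sum, cheap enough for real-time use:

```python
import numpy as np

# Hypothetical offline stage: N separated modes F_i(x), G_i(mu) stored on 1D
# grids (in practice these would come from a PGD solver).
x = np.linspace(0.0, 1.0, 101)          # space grid
mu = np.linspace(1.0, 2.0, 51)          # parameter grid
N = 3                                    # number of PGD modes
F = np.array([np.sin((i + 1) * np.pi * x) for i in range(N)])   # space modes
G = np.array([mu ** (i + 1) for i in range(N)])                 # parameter modes

def evaluate_vademecum(mu_query: float) -> np.ndarray:
    """Online stage: u(x, mu_query) = sum_i F_i(x) * G_i(mu_query).

    Only N scalar interpolations and a weighted sum are needed per query,
    with no PDE solve."""
    weights = np.array([np.interp(mu_query, mu, G[i]) for i in range(N)])
    return weights @ F

u = evaluate_vademecum(1.5)
print(u.shape)   # one full field per query
```

The same evaluation works for any number of extra-coordinates: each additional parameter only adds one more 1D interpolation per mode.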
Published: 2013-01-01
http://hdl.handle.net/10985/10244
Parametric solutions involving geometry: A step towards efficient shape optimization
HUERTA, Antonio; CUETO, Elias; LEYGUE, Adrien; AMMAR, Amine; CHINESTA SORIA, Francisco
Optimization of manufacturing processes or structures involves the optimal choice of many parameters (process, material or geometrical parameters). Usual strategies proceed by defining a trial choice of those parameters and then solving the resulting model. Then, an appropriate cost function is evaluated and its optimality checked. Until the optimum is reached, the process parameters are updated by an appropriate optimization procedure, and the model must be solved again for the updated parameters. Thus, a direct numerical solution is needed for each choice of the process parameters, with the subsequent impact on computing time. In this work we focus on shape optimization, which involves the appropriate choice of some parameters defining the problem geometry. The main objective of this work is to describe an original approach for computing an off-line parametric solution: that is, a solution able to include information for different parameter values and also allowing the sensitivities to be computed readily. The curse of dimensionality is circumvented by invoking the Proper Generalized Decomposition (PGD) introduced in former works, which is applied here to compute geometrically parametrized solutions.
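A sketch of why sensitivities come almost for free in this setting (illustrative only; the modes below are assumptions, not the paper's): with the geometric parameter μ treated as an extra coordinate, u(x, μ) = Σᵢ Fᵢ(x)Gᵢ(μ), so ∂u/∂μ only requires differentiating the 1D parameter modes:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
F = np.array([np.sin(np.pi * x), np.sin(2 * np.pi * x)])   # assumed space modes

def u(mu):       # assumed parameter modes: G_1(mu) = mu, G_2(mu) = mu**2
    return mu * F[0] + mu**2 * F[1]

def du_dmu(mu):  # analytic sensitivity: G_1' = 1, G_2' = 2*mu
    return 1.0 * F[0] + 2.0 * mu * F[1]

# finite-difference check of the analytic sensitivity
eps = 1e-6
fd = (u(1.0 + eps) - u(1.0 - eps)) / (2 * eps)
print(np.max(np.abs(fd - du_dmu(1.0))))
```

An optimizer can therefore query both the field and its gradient at any trial geometry without re-solving the model.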
Published: 2014-01-01
http://hdl.handle.net/10985/16835
A simple microstructural viscoelastic model for flowing foams
IBÁÑEZ, Rubén; SCHEUER, Adrien; HUERTA, Antonio; KEUNINGS, Roland; ABISSET-CHAVANNE, Emmanuelle; CHINESTA SORIA, Francisco
The numerical modelling of forming processes involving the flow of foams requires taking the different problem scales into account. Thus, in industrial applications a macroscopic approach is suitable, even though the macroscopic flow parameters depend on the cellular structure: cell size, shape, orientation, etc. Moreover, the shape and orientation of the cells are induced by the flow. A fully microscopic description remains useful for understanding the foam behaviour and the topological changes induced by cell elongation or distortion; however, from an industrial point of view, microscopic simulations remain challenging for practical applications involving flows in complex 3D geometries. In this paper, we propose a viscoelastic flow model in which the foam microstructure is represented by suitable microstructure descriptors whose evolution is governed by the macroscopic flow kinematics.
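A toy version of a flow-driven microstructure descriptor (a simplification, not the paper's model): take a symmetric tensor c describing cell shape and evolve it affinely with the macroscopic velocity gradient, dc/dt = Lc + cLᵀ, in a simple shear flow:

```python
import numpy as np

L = np.array([[0.0, 1.0],      # assumed velocity gradient: unit shear rate
              [0.0, 0.0]])
c = np.eye(2)                  # initially circular cells
dt, steps = 0.01, 100          # explicit Euler up to t = 1
for _ in range(steps):
    c = c + dt * (L @ c + c @ L.T)

evals = np.linalg.eigvalsh(c)
print(evals)   # eigenvalues split: cells elongate and orient with the flow
```

The anisotropy of c (ratio of its eigenvalues) is exactly the kind of macroscopic descriptor a viscoelastic constitutive law can then be built upon.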
Published: 2018-01-01
http://hdl.handle.net/10985/16676
A Multidimensional Data-Driven Sparse Identification Technique: The Sparse Proper Generalized Decomposition
IBAÑEZ, Ruben; GONZALEZ, David; CUETO, Elias; HUERTA, Antonio; DUVAL, Jean-Louis; ABISSET-CHAVANNE, Emmanuelle; AMMAR, Amine; CHINESTA SORIA, Francisco
Sparse model identification from data is especially cumbersome if the sought dynamics live in a high-dimensional space. This usually involves the need for large amounts of data, unfeasible in such high-dimensional settings. This well-known phenomenon, coined the curse of dimensionality, is here overcome by means of separated representations. We present a technique based on the same principles as the Proper Generalized Decomposition that enables the identification of complex laws in the low-data limit. We provide examples of the performance of the technique in up to ten dimensions.
Published: 2018-01-01
http://hdl.handle.net/10985/18456
Multiscale proper generalized decomposition based on the partition of unity
IBÁÑEZ PINILLO, Rubén; CUETO, Elias; HUERTA, Antonio; DUVAL, Jean-Louis; AMMAR, Amine; CHINESTA SORIA, Francisco
Solutions of partial differential equations can exhibit multiscale behavior. Standard discretization techniques are constrained to mesh down to the finest scale to predict the response of the system accurately. The proposed methodology is based on the standard proper generalized decomposition rationale; thus, the PDE is transformed into a nonlinear system that iterates between microscale and macroscale states, where the time coordinate can be viewed as a 2D time representing the microtime and macrotime scales. The macroscale effects are taken into account by means of an FEM-based macrodiscretization, whereas the microscale effects are handled with unidimensional parent spaces that are replicated throughout the domain. The proposed methodology can be seen as an alternative route to circumvent the prohibitive meshes arising from the necessity of capturing fine-scale behaviors.
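The "2D time" idea can be seen on a toy signal (an illustration of the separation principle only, not the paper's solver): sampling a two-scale signal over K macro periods of M micro steps and reshaping it into a K × M array separates the slow and fast scales into a near rank-1 product.

```python
import numpy as np

K, M = 50, 64                        # macro steps x micro steps
T = np.arange(K * M) / (K * M)       # global time in [0, 1)
f = np.exp(-3 * T) * np.sin(2 * np.pi * K * T)   # slow decay x fast oscillation

A = f.reshape(K, M)                  # row k = micro evolution in macro step k
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])

err = np.linalg.norm(rank1 - A) / np.linalg.norm(A)
print(err)   # a single macro x micro mode reproduces the whole signal
```

Storing one macro mode and one micro mode replaces K·M time samples by K + M values, which is the saving the micro/macro separation is after.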
Published: 2019-01-01
http://hdl.handle.net/10985/15522
Structural health monitoring by combining machine learning and dimensionality reduction techniques
QUARANTA, Giacomo; LOPEZ, Elena; DUVAL, Jean Louis; HUERTA, Antonio; ABISSET-CHAVANNE, Emmanuelle; CHINESTA SORIA, Francisco
Structural Health Monitoring is of major interest in many areas of structural mechanics. This paper presents a new approach, based on the combination of dimensionality reduction and data-mining techniques, able to differentiate damaged and undamaged regions in a given structure. Indeed, the existence, severity (size) and location of damage can be efficiently estimated from data collected at some locations, from which the fields of interest are completed before the analysis based on machine learning and dimensionality reduction techniques proceeds.
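A minimal sketch of the combination (assumed synthetic sensor data and a simple damage pattern; the paper's pipeline is richer): measured responses are projected onto a low-dimensional basis (PCA via SVD) and a nearest-centroid rule separates damaged from undamaged states.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 100
healthy = np.sin(np.linspace(0, 4 * np.pi, n_sensors))
damaged = healthy.copy()
damaged[40:60] *= 0.5    # assumed local stiffness loss near mid-span

X = np.array([healthy + 0.01 * rng.standard_normal(n_sensors) for _ in range(20)]
             + [damaged + 0.01 * rng.standard_normal(n_sensors) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T        # 2D reduced coordinates of the training responses

c0, c1 = Z[labels == 0].mean(axis=0), Z[labels == 1].mean(axis=0)
new = (damaged + 0.01 * rng.standard_normal(n_sensors)) - X.mean(axis=0)
z = new @ Vt[:2].T
pred = int(np.linalg.norm(z - c1) < np.linalg.norm(z - c0))
print(pred)   # 1: the new measurement is classified as damaged
```

Once the basis and centroids are learned, classifying a fresh measurement costs two dot products and two distances, which is what makes online monitoring feasible.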
Published: 2019-01-01
http://hdl.handle.net/10985/17949
A local multiple proper generalized decomposition based on the partition of unity
IBAÑEZ, Ruben; HUERTA, Antonio; CUETO, Elías G.; ABISSET-CHAVANNE, Emmanuelle; CHINESTA SORIA, Francisco
It is well known that model order reduction techniques that project the solution of the problem at hand onto a low-dimensional subspace present difficulties when this solution lies on a nonlinear manifold. To overcome these difficulties (notably, an undesirable increase in the number of required modes in the solution), several solutions have been suggested. Among them, we can cite the use of nonlinear dimensionality reduction techniques or, alternatively, the use of linear local reduced-order approaches. These last approaches usually present the difficulty of ensuring continuity between the local models. Here, a new method is presented which ensures this continuity by resorting to the partition-of-unity paradigm while employing proper generalized decompositions at each local patch.
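The continuity mechanism can be sketched in 1D (the two local models below are invented placeholders, not PGD solutions): two overlapping patches carry their own approximations, and partition-of-unity weights that sum to one blend them into a globally continuous field.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
# hat weights: patch 1 on [0, 0.6], patch 2 on [0.4, 1], linear ramp on overlap
w_left = np.clip((0.6 - x) / 0.2, 0.0, 1.0)
w_right = 1.0 - w_left                     # partition of unity: sums to 1

u_left = 1.0 + 0.5 * x                     # assumed local model on patch 1
u_right = np.exp(x - 0.4)                  # assumed local model on patch 2
u = w_left * u_left + w_right * u_right    # continuous global blend

print(np.max(np.abs(np.diff(u))))          # no jumps across the overlap
```

Because each weight vanishes at the boundary of its patch, the blend inherits continuity from the weights alone; the local models never need to agree pointwise.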
Published: 2019-01-01
http://hdl.handle.net/10985/13823
Proper generalized decomposition solutions within a domain decomposition strategy
HUERTA, Antonio; NADAL, Enrique; CHINESTA SORIA, Francisco
Domain decomposition strategies and proper generalized decomposition are efficiently combined to obtain a fast evaluation of the solution approximation in parameterized elliptic problems with complex geometries. The classical difficulties associated to the combination of layered domains with arbitrarily oriented midsurfaces, which may require in-plane–out-of-plane techniques, are now dismissed. More generally, solutions on large domains can now be confronted within a domain decomposition approach. This is done with a reduced cost in the offline phase because the proper generalized decomposition gives an explicit description of the solution in each subdomain in terms of the solution at the interface. Thus, the evaluation of the approximation in each subdomain is a simple function evaluation given the interface values (and the other problem parameters). The interface solution can be characterized by any a priori user-defined approximation. Here, for illustration purposes, hierarchical polynomials are used. The repetitiveness of the subdomains is exploited to reduce drastically the offline computational effort. The online phase requires solving a nonlinear problem to determine all the interface solutions. However, this problem only has degrees of freedom on the interfaces and the Jacobian matrix is explicitly determined. Obviously, other parameters characterizing the solution (material constants, external loads, and geometry) can also be incorporated in the explicit description of the solution.
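The offline/online split can be illustrated on the simplest possible case (a 1D Laplace problem with two subdomains and made-up data; the paper handles parameterized elliptic problems with many interfaces): offline, each subdomain solution is stored as an explicit function of the interface value g; online, only the scalar interface equation (flux balance) is solved.

```python
L1, L2 = 0.4, 0.6          # subdomain lengths, interface at x = 0.4
uL, uR = 0.0, 1.0          # Dirichlet data at the outer ends

# offline: for u'' = 0, each subdomain solution is affine in g, so its
# flux at the interface is an explicit function of g
flux_left = lambda g: (g - uL) / L1    # slope of the left subdomain solution
flux_right = lambda g: (uR - g) / L2   # slope of the right subdomain solution

# online: one scalar interface equation, flux_left(g) = flux_right(g);
# it is affine in g, so it solves in closed form
g = (uR / L2 + uL / L1) / (1.0 / L1 + 1.0 / L2)
print(g)   # 0.4, matching the monodomain solution u(x) = x at the interface
```

In the general case the interface unknowns are coefficients of a user-defined approximation (e.g. hierarchical polynomials) and the online problem is nonlinear, but it still lives only on the interfaces.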
Published: 2018-01-01
http://hdl.handle.net/10985/17719
Tensor Representation of Non-linear Models Using Cross Approximations
AGUADO, Jose Vicente; BORZACCHIELLO, Domenico; KOLLEPARA, Kiran S.; HUERTA, Antonio; CHINESTA SORIA, Francisco
Tensor representations allow compact storage and efficient manipulation of multi-dimensional data. Based on these, tensor methods build low-rank subspaces for the solution of multi-dimensional and multi-parametric models. However, tensor methods cannot always be implemented efficiently, especially when dealing with non-linear models. In this paper, we discuss the importance of achieving a tensor representation of the model itself for the efficiency of tensor-based algorithms. We investigate the adequacy of interpolation, rather than projection-based approaches, as a means to enforce such a tensor representation, and propose the use of cross approximations for models in moderate dimension. Finally, the linearization of tensor problems is analyzed and several strategies for the tensor subspace construction are proposed.
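The core of a cross approximation in two dimensions, sketched with full pivoting on a small synthetic matrix (an illustration of the interpolation idea, not the paper's algorithm): the matrix is rebuilt from a few of its own rows and columns, with no projection step.

```python
import numpy as np

def cross_approximation(A, rank):
    """Greedy cross approximation: at each step, pick the largest residual
    entry as pivot and subtract the rank-1 cross through its row and column."""
    R = A.astype(float).copy()
    approx = np.zeros_like(R)
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if R[i, j] == 0:
            break
        cross = np.outer(R[:, j], R[i, :]) / R[i, j]
        approx += cross
        R -= cross        # pivot row and column are now exactly interpolated
    return approx

x = np.linspace(0, 1, 60)
A = np.outer(np.sin(x), np.cos(x)) + 0.5 * np.outer(x, x**2)   # rank-2 matrix
A2 = cross_approximation(A, 2)
err = np.linalg.norm(A2 - A) / np.linalg.norm(A)
print(err)   # near machine precision for this exactly rank-2 matrix
```

Each step only touches one row and one column of the residual, which is what makes cross techniques attractive when evaluating the full non-linear model everywhere is too expensive.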
Published: 2019-01-01
http://hdl.handle.net/10985/18620
Radars in Transport Applications
IBÁÑEZ PINILLO, Rubén; ABENIUS, Erik; HUERTA, Antonio; ABISSET-CHAVANNE, Emmanuelle; CHINESTA SORIA, Francisco
In recent years, the automotive industry has been evolving towards a new generation of autonomous vehicles, where decision making is not fully performed by the driver but partially relies on the technology of the car itself. Indeed, a CPU inside the car will process all the information coming from the sensors, distinguishing the different scenarios appearing in real life and ultimately enabling decision making. Since the CPU will be confronted with plenty of information, tools like machine learning or big-data analysis will be a useful ally to separate data from information. Existing machine learning techniques, such as kernel Principal Component Analysis (k-PCA) or Locally Linear Embedding (LLE), among many others, are useful to unveil the latent parameters defining a given scenario. Indeed, these algorithms have already been used to perform real-time classification of signals appearing along the road. Selecting the model of the electromagnetic response of the radar plays an important role in achieving real-time constraints. Even though the Helmholtz equation represents the physics accurately, the computational cost of such a simulation is not affordable for real-time applications due to the high radar operating frequencies, which result in a very fine finite element mesh. On the other hand, far-field approaches are not so accurate when the objects are very close, due to the plane-wave assumption. In the first part of this work, the Geometrical Optics method is investigated as a possible route to fulfil both real-time and accuracy constraints. The main hypothesis underlying such a model is that waves are treated as straight lines constrained by the laws of optical reflection. Therefore, there is no need to mesh the interior of the domain. However, the accuracy of such an approach is compromised when the size of the objects inside the domain is comparable to the wavelength, or in the vicinity of angular points.
The second part is mainly focused on the application of manifold learning and big-data analysis to a data set of precomputed scenarios. Indeed, the identification of an unknown scenario from electromagnetic signals is pursued. Current research lines are devoted to answering questions such as how many receivers are needed to identify the scenario unequivocally, where to locate the receivers, or which parts of the scenario have a negligible impact on the electromagnetic response.
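The Geometrical Optics hypothesis boils down to tracing straight rays and applying the specular reflection law r = d − 2(d·n)n at each surface hit, so no volume mesh is required. A minimal sketch (the wall and ray are arbitrary examples):

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of direction d at a surface with normal n."""
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)          # unit normal
    return d - 2.0 * np.dot(d, n) * n

# a ray hitting a vertical wall (normal along -x) has its x-component flipped
r = reflect([1.0, -0.5], [-1.0, 0.0])
print(r)   # [-1.  -0.5]
```

This cheapness is what makes real-time ray-based radar models plausible; the accuracy limits mentioned above (object sizes near the wavelength, angular points) come precisely from everything this formula ignores, namely diffraction.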
Published: 2020-01-01