SAM
https://sam.ensam.eu:443
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Feed updated: Mon, 14 Oct 2024 09:32:42 GMT
http://hdl.handle.net/10985/18456
Multiscale proper generalized decomposition based on the partition of unity
IBÁÑEZ PINILLO, Rubén; CUETO, Elias; HUERTA, Antonio; DUVAL, Jean-Louis; AMMAR, Amine; CHINESTA SORIA, Francisco
Solutions of partial differential equations can exhibit multiscale behavior. Standard discretization techniques are constrained to mesh down to the finest scale in order to predict the response of the system accurately. The proposed methodology is based on the standard proper generalized decomposition (PGD) rationale: the PDE is transformed into a nonlinear system that iterates between microscale and macroscale states, where the time coordinate can be viewed as a 2D time representing the microtime and macrotime scales. Macroscale effects are captured by an FEM-based macrodiscretization, whereas microscale effects are handled with unidimensional parent spaces that are replicated throughout the domain. The proposed methodology can be seen as an alternative route to circumvent the prohibitive meshes that arise from the need to capture fine-scale behaviors.
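The PGD rationale in the abstract can be illustrated with a minimal sketch: a field sampled on a (macro x micro) grid is approximated as a sum of separated modes F_i(X) G_i(x), each found by an alternating fixed point and then deflated. This is only the greedy rank-one core of the idea; the paper's coupling of FEM macro-modes with replicated 1D parent micro-spaces is not reproduced here, and the synthetic field below is an assumption for illustration.

```python
import numpy as np

def pgd_separated_approx(U, n_modes=3, n_iter=50):
    """Greedy separated approximation U ~ sum_i outer(F_i, G_i).

    Each mode is obtained by an alternating fixed point (PGD-style
    enrichment), then subtracted from the residual before computing
    the next one.
    """
    R = U.copy()
    F_modes, G_modes = [], []
    for _ in range(n_modes):
        F = np.ones(U.shape[0])          # initial macro guess
        G = R.T @ F / (F @ F)
        for _ in range(n_iter):          # alternate macro/micro updates
            F = R @ G / (G @ G)
            G = R.T @ F / (F @ F)
        F_modes.append(F)
        G_modes.append(G)
        R = R - np.outer(F, G)           # deflate the converged mode
    return F_modes, G_modes, R

# Synthetic multiscale field: slow macro trend times fast micro oscillation.
X = np.linspace(0.0, 1.0, 60)[:, None]   # macro coordinate samples
x = np.linspace(0.0, 1.0, 200)[None, :]  # micro coordinate samples
U = np.sin(np.pi * X) * np.cos(40 * np.pi * x) + 0.2 * X * x

F, G, R = pgd_separated_approx(U, n_modes=2)
print(np.linalg.norm(R) / np.linalg.norm(U))  # relative residual
```

Because the synthetic field is exactly a sum of two separated terms, two modes drive the relative residual to (numerical) zero; a genuinely multiscale solution would need more modes, but far fewer unknowns than a mesh resolving the finest scale.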
Published: Tue, 01 Jan 2019 00:00:00 GMT
http://hdl.handle.net/10985/18620
Radars in Transport Applications
IBÁÑEZ PINILLO, Rubén; ABENIUS, Erik; HUERTA, Antonio; ABISSET-CHAVANNE, Emmanuelle; CHINESTA SORIA, Francisco
In recent years, the automotive industry has been evolving towards a new generation of autonomous vehicles, where decision making is not fully performed by the driver but partially relies on the technology of the car itself. A CPU inside the car processes all the information coming from the sensors, distinguishing the different scenarios that appear in real life and ultimately enabling decision making. Since the CPU is confronted with a large amount of information, tools such as machine learning and big-data analysis are useful allies in separating information from raw data. Existing machine learning techniques, such as kernel Principal Component Analysis (k-PCA) and Locally Linear Embedding (LLE), among many others, are useful for unveiling the latent parameters defining a given scenario. Indeed, these algorithms have already been used to perform real-time classification of signals appearing along the road. Selecting the model of the electromagnetic response of the radar plays an important role in meeting real-time constraints. Even though the Helmholtz equation represents the physics accurately, the computational cost of such a simulation is not affordable for real-time applications: the high radar operating frequencies require a very fine finite element mesh. On the other hand, far-field approaches are not accurate when the objects are very close, owing to the plane-wave assumption. In the first part of this work, the Geometrical Optics method is investigated as a possible route to fulfill both real-time and accuracy constraints. The main hypothesis of such a model is that waves are treated as straight lines obeying the laws of optical reflection; therefore, there is no need to mesh the interior of the domain. However, the accuracy of this approach is compromised when the size of the objects inside the domain is comparable to the wavelengths, or in the vicinity of angular points.
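The elementary building block of the Geometrical Optics treatment described above is the specular-reflection law: an incident ray direction d hitting a surface with unit normal n leaves as r = d - 2 (d . n) n, with no volume mesh involved. The sketch below shows only this rule; the full ray-tracing solver of the work is not reproduced here.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of ray direction d about unit surface normal n.

    Implements r = d - 2 (d . n) n, the Geometrical Optics reflection
    law: rays travel as straight lines and bounce off surfaces without
    any interior mesh.
    """
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)            # guard against non-unit normals
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling straight down hits a horizontal surface and bounces up.
print(reflect([0.0, -1.0], [0.0, 1.0]))  # -> [0. 1.]

# A 45-degree incident ray is reflected at 45 degrees on the other side.
print(reflect([1.0, -1.0], [0.0, 1.0]))  # -> [1. 1.]
```

Chaining this rule along successive surface intersections yields the multi-bounce ray paths whose accuracy degrades, as the abstract notes, near angular points and for objects comparable in size to the wavelength.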
The second part focuses on the application of manifold learning and big-data analysis to a data set of precomputed scenarios; the identification of an unknown scenario from electromagnetic signals is pursued. Current research lines are devoted to answering questions such as how many receptors are needed to identify the scenario unequivocally, where to locate the receptors, and which parts of the scenario have a negligible impact on the electromagnetic response.
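The scenario-identification step described above can be sketched in its simplest form: a dictionary of responses is computed offline, one per scenario, and an unknown measurement is matched to the closest precomputed signal. The synthetic signals and the plain L2 matching below are assumptions for illustration; the actual work uses manifold-learning embeddings (k-PCA, LLE) rather than raw distances.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline database: one precomputed "electromagnetic response" per scenario
# (synthetic placeholders here, one row per scenario).
n_scenarios, n_samples = 5, 100
dictionary = rng.normal(size=(n_scenarios, n_samples))

def identify(signal, dictionary):
    """Return the index of the precomputed scenario closest in L2 norm."""
    return int(np.argmin(np.linalg.norm(dictionary - signal, axis=1)))

# A noisy measurement of scenario 3 is matched back to scenario 3.
measured = dictionary[3] + 0.1 * rng.normal(size=n_samples)
print(identify(measured, dictionary))  # -> 3
```

The open questions listed in the abstract (how many receptors, where to place them) amount to asking how much of each signal can be discarded while keeping this matching unambiguous.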
Published: Wed, 01 Jan 2020 00:00:00 GMT
http://hdl.handle.net/10985/18405
Code2vect: An efficient heterogenous data classifier and nonlinear regression technique
ARGERICH MARTÍN, Clara; IBÁÑEZ PINILLO, Rubén; BARASINSKI, Anaïs; CHINESTA SORIA, Francisco
The aim of this paper is to present a new classification and regression algorithm based on artificial intelligence. The main feature of this algorithm, called Code2Vect, is the nature of the data it treats: qualitative or quantitative, and continuous or discrete. Contrary to other artificial intelligence techniques based on "Big Data," this new approach enables working with a reduced amount of data, within the so-called "Smart Data" paradigm. Moreover, the main purpose of this algorithm is to enable the representation of high-dimensional data and, more specifically, the grouping and visualization of these data according to a given target. For that purpose, the data are projected into a vector space equipped with an appropriate metric, able to group data according to their affinity with respect to a given output of interest. Another application of this algorithm lies in its prediction capability: as with common data-mining techniques such as regression trees, given an input, the output is inferred, in this case taking into account the nature of the data described above. To illustrate its potential, two applications are addressed, one concerning the representation of high-dimensional and categorical data and another featuring the prediction capabilities of the algorithm.
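The abstract's central idea, projecting heterogeneous data into a vector space whose metric groups points by their output, can be sketched as follows: learn a linear map W so that the mapped distance ||W(x_i - x_j)|| tracks the output gap |y_i - y_j|, then infer a new output from the nearest neighbour in the mapped space. The toy data, loss, and gradient steps are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous data set: one categorical feature (one-hot encoded,
# 3 levels) plus one continuous feature, "Smart Data"-sized (40 points).
X = np.hstack([np.eye(3)[rng.integers(0, 3, 40)],
               rng.uniform(0.0, 1.0, (40, 1))])
y = X[:, 0] + X[:, 3]                        # output mixes both natures

def pairwise_loss(W):
    """Mean squared mismatch between mapped distances and output gaps."""
    total, n = 0.0, len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = W @ (X[i] - X[j])
            total += (d @ d - (y[i] - y[j]) ** 2) ** 2
    return total / (n * (n - 1) / 2)

W = 0.1 * rng.normal(size=(2, X.shape[1]))   # map into a 2-D space
loss_before = pairwise_loss(W)
for _ in range(2000):                        # stochastic gradient steps
    i, j = rng.integers(0, len(X), 2)
    u = X[i] - X[j]
    d = W @ u
    err = d @ d - (y[i] - y[j]) ** 2
    W -= 0.02 * err * np.outer(d, u)         # descend on (err)^2

# Inference: the output of a new point is read off its nearest
# neighbour in the mapped space.
x_new = np.array([1.0, 0.0, 0.0, 0.5])
k = int(np.argmin(np.linalg.norm(W @ X.T - (W @ x_new)[:, None], axis=0)))
print(y[k])
```

After training, points with similar outputs sit close together in the mapped space, which is what makes both the grouping/visualization use and the nearest-neighbour prediction described in the abstract possible.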
Published: Tue, 01 Jan 2019 00:00:00 GMT