SAM
https://sam.ensam.eu:443
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Last build date: Wed, 22 Mar 2023 20:18:10 GMT
http://hdl.handle.net/10985/14857
On the solution of the heat equation in very thin tapes
PRULIERE, Etienne; CHINESTA, Francisco; AMMAR, Amine; LEYGUE, Adrien; POITOU, Arnaud
This paper addresses two issues usually encountered when simulating thermal processes in forming operations involving tape-type geometries, as in tape or tow placement and surface treatments. The first issue concerns the necessity of solving the transient model a huge number of times, because the thermal loads move very fast on the surface of the part and the thermal model is usually non-linear. The second issue concerns the degenerate geometry considered here, in which the thickness is much smaller than the in-plane characteristic length. The solution of such 3D models involving fine meshes in all directions rapidly becomes intractable despite recent progress in computer science. In this paper we propose a fully space-time separated representation of the unknown field. This choice circumvents both issues, allowing extremely fine models to be solved very fast, sometimes in real time.
Published: Sun, 01 Jan 2012 00:00:00 GMT
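The separated space-time representation the abstract describes can be illustrated in a few lines. The sketch below (not the authors' implementation; all discretization choices are hypothetical) solves the 1D heat equation by finite differences and then greedily extracts rank-one space-time modes, u(x,t) ≈ Σᵢ Xᵢ(x)Tᵢ(t), with an alternating fixed point:

```python
import numpy as np

# Reference solution of u_t = k u_xx on [0,1] with u=0 at both ends,
# computed by explicit finite differences (all values illustrative).
nx, nt, k = 64, 400, 0.1
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], 1e-4
u = np.zeros((nx, nt))
u[:, 0] = np.sin(np.pi * x)                 # initial temperature field
for n in range(nt - 1):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:, n] - 2.0 * u[1:-1, n] + u[:-2, n]) / dx**2
    u[:, n + 1] = u[:, n] + dt * k * lap

# Greedy rank-one enrichment with an alternating fixed point: each step
# finds a space mode X and a time mode T minimizing the residual norm
# ||R - X T^T||, i.e. u(x,t) ~ sum_i X_i(x) T_i(t).
R = u.copy()
modes = []
for _ in range(3):
    T = np.random.rand(nt)
    for _ in range(50):                     # alternating updates
        X = R @ T / (T @ T + 1e-30)
        T = R.T @ X / (X @ X + 1e-30)
    modes.append((X, T))
    R -= np.outer(X, T)

rel_err = np.linalg.norm(R) / np.linalg.norm(u)
print(f"relative residual with {len(modes)} modes: {rel_err:.1e}")
```

With this smooth initial condition the field is essentially rank one, so a single mode already captures it; richer, fast-moving thermal loads would require more enrichment steps.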
http://hdl.handle.net/10985/18435
Non-intrusive Sparse Subspace Learning for Parametrized Problems
BORZACCHIELLO, Domenico; AGUADO, José Vicente; CHINESTA, Francisco
We discuss the use of hierarchical collocation to approximate the numerical solution of parametric models. With respect to traditional projection-based reduced-order modeling, collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This makes it possible to recover the low-dimensional structure of the parametric solution subspace while also learning the functional dependency on the parameters in explicit form. A sparse low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. Non-intrusiveness makes this approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms. As we show in the various examples presented in the paper, the method can be interfaced with no particular effort to existing third-party simulation software, making the proposed approach particularly appealing and well adapted to practical engineering problems of industrial interest.
Published: Tue, 01 Jan 2019 00:00:00 GMT
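The non-intrusive sparse-sampling idea can be sketched in one parametric dimension: the deterministic solver is treated as a black box, sampled on a hierarchy of collocation points, and an interval is refined only where the hierarchical surplus is still large. The `solver` function and the tolerances below are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

def solver(p):
    """Stand-in for the black-box deterministic solver (hypothetical)."""
    return np.sin(3.0 * p) + 0.1 * p**2

# Adaptive hierarchical collocation on the parameter interval [0, 1]:
# each level bisects only those intervals whose hierarchical surplus
# (solver value minus linear interpolant of the parents) is still large.
tol, max_level = 1e-3, 12
nodes = {0.0: solver(0.0), 1.0: solver(1.0)}
active = [(0.0, 1.0)]
for _ in range(max_level):
    nxt = []
    for a, b in active:
        m = 0.5 * (a + b)
        fm = solver(m)
        nodes[m] = fm
        surplus = fm - 0.5 * (nodes[a] + nodes[b])
        if abs(surplus) > tol:              # refine only where needed
            nxt += [(a, m), (m, b)]
    active = nxt
    if not active:                          # converged everywhere
        break

# Explicit piecewise-linear surrogate built from the sparse samples
xs = np.array(sorted(nodes))
ys = np.array([nodes[v] for v in xs])
p = 0.37
print(len(nodes), abs(np.interp(p, xs, ys) - solver(p)))
```

The surrogate gives the explicit parametric dependency the abstract mentions, at a fraction of the cost of a full tensor grid of solver calls.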
http://hdl.handle.net/10985/13821
Manifold embedding of heterogeneity in permeability of a woven fabric for optimization of the VARTM process
LOPEZ, Elena; CHINESTA, Francisco; SURESH, Advania
In Vacuum Assisted Resin Transfer Molding (VARTM), fabrics are placed on a tool surface and a Distribution Media (DM) is placed on top to enhance the flow in the in-plane direction. Resin is introduced from one end and a vacuum is applied at the other end to create the pressure gradient needed to impregnate the fabric with resin before curing it to fabricate the composite part. Heterogeneity in the through-the-thickness permeability of a woven fabric is one of the causes of variability in the quality of the final composite part fabricated with the VARTM process. The heterogeneity is caused by the varying sizes of pinholes, meso-scale empty spaces between woven tows that result from the weaving process. The pinhole locations and sizes in the fabric govern the void formation behavior during impregnation of the resin into the fabric. The pinholes can be characterized with two parameters: a gamma distribution function parameter α and Moran's I (MI). In this work, manifold embedding methods such as t-Distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) are used to visually characterize fabrics of interest with the two variables, α and MI, through dimensionality reduction. To demonstrate the manifold embedding method, a total of 450 training samples with α ranging from 1 to 3 and MI from 0 to 0.5 were used to create a map in three-dimensional space for ease of visualization and characterization. The method is validated with a plain-woven fabric sample in a testing step, showing that the two parameters of the fabric are identified with its corresponding α and MI using these machine learning algorithms. Numerical flow simulations were carried out for varying α, MI, and DM permeability, and the results were used to predict the final void percentage.
The quick online identification of the fabric parameters with machine learning algorithms can instantly provide the expected variability in void formation behavior that will be encountered in a VARTM process.
Published: Mon, 01 Jan 2018 00:00:00 GMT
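The dimensionality-reduction step described above can be sketched with PCA alone. The fabric descriptor below is an invented stand-in (a histogram of pinhole sizes drawn from a gamma(α) distribution, with MI mimicked by a simple shift), not the paper's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in descriptor: each fabric is summarised by a
# histogram of pinhole sizes drawn from a gamma(alpha) distribution,
# with the spatial descriptor MI mimicked here by a simple shift.
def fabric_features(alpha, mi, n_pinholes=2000, bins=20):
    sizes = rng.gamma(alpha, 1.0, n_pinholes) + mi
    hist, _ = np.histogram(sizes, bins=bins, range=(0.0, 10.0), density=True)
    return hist

alphas = np.linspace(1.0, 3.0, 15)
mis = np.linspace(0.0, 0.5, 6)
X = np.array([fabric_features(a, m) for a in alphas for m in mis])

# PCA via SVD: project the descriptors onto the three leading principal
# components, giving one 3-D point per fabric for visual inspection.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords3d = Xc @ Vt[:3].T
explained = (S[:3]**2).sum() / (S**2).sum()
print(coords3d.shape, round(explained, 3))
```

A new fabric sample would be characterized by projecting its descriptor onto the same three components and reading off its neighbours in the map.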
http://hdl.handle.net/10985/19135
Review on the Brownian Dynamics Simulation of Bead-Rod-Spring Models Encountered in Computational Rheology
CRUZ, C.; CHINESTA, Francisco; RÉGNIER, G.
Kinetic theory is a mathematical framework intended to relate the most relevant characteristics of the molecular structure directly to the rheological behavior of the bulk system. In other words, kinetic theory is a micro-to-macro approach for solving the flow of complex fluids that circumvents the use of closure relations and offers a better physical description of the phenomena involved in the flow processes. Cornerstone models in kinetic theory employ beads, rods and springs to mimic the molecular structure of the complex fluid. The generalized bead-rod-spring chain includes the most basic models in kinetic theory: the freely jointed bead-spring chain and the freely jointed bead-rod chain. The configuration of simple coarse-grained models can be represented by an equivalent Fokker-Planck (FP) diffusion equation, which describes the evolution of the configuration distribution function in the physical and configurational spaces. The FP equation can be a complex mathematical object, given its multidimensionality, and solving it explicitly can become a difficult task. Moreover, in some cases, obtaining an equivalent FP equation is not even possible, given the complexity of the coarse-grained molecular model. Brownian dynamics (BD) can be employed as an alternative extensive numerical method for approximating the configuration distribution function of a given kinetic-theory model, one that avoids obtaining and/or resolving explicitly an equivalent FP equation. The validity of this discrete approach rests on the mathematical equivalence between a continuous diffusion equation and a stochastic differential equation, as demonstrated by Itô in the 1940s. This paper presents a review of the fundamental issues in the BD simulation of the linear viscoelastic behavior of bead-rod-spring coarse-grained models in dilute solution. In the first part of this work, the BD numerical technique is introduced: an overview of its mathematical framework and a review of its scope of applications are presented. Subsequently, the links between the rheology of complex fluids, kinetic theory and the BD technique are established in the light of the stochastic nature of the bead-rod-spring models. Finally, the pertinence of the present state-of-the-art review is explained in terms of the increasing interest in stochastic micro-to-macro approaches for solving complex-fluid problems. In the second part of this paper, a detailed description of the BD algorithm used for simulating a small-amplitude oscillatory deformation test is given. Dynamic properties are employed throughout this work to characterise the linear viscoelastic behavior of bead-rod-spring models in dilute solution. In the third and fourth parts of this article, the main issues of a BD simulation in the linear viscoelasticity of dilute suspensions are discussed extensively in the light of the classical multi-bead-spring chain model and the multi-bead-rod chain model, respectively. Kinematic formulations, integration schemes and expressions to calculate the stress tensor are revised for several classical models: the Rouse and Zimm theories in the case of multi-bead-spring chains, and the Kramers chain and semi-flexible filaments in the case of multi-bead-rod chains. The implemented BD technique is, on the one hand, validated against the known analytical or exact numerical solutions of the equivalent FP equations for those classical kinetic-theory models and, on the other hand, verified through the analysis of the main numerical issues involved in a BD simulation. Finally, the review closes with some concluding remarks.
Published: Sun, 01 Jan 2012 00:00:00 GMT
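The Itô equivalence the review builds on can be demonstrated on the simplest bead-spring model, the Hookean dumbbell: instead of solving its FP equation, an ensemble of connector vectors is evolved with an Euler-Maruyama scheme and statistics are taken over the ensemble. The dimensionless form and parameter values below are illustrative, not the review's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama Brownian dynamics for an ensemble of Hookean dumbbells
# at equilibrium, in dimensionless form: dQ = -0.5 Q dt + dW.  The
# stationary distribution has <Q_i^2> = 1 per component, so <|Q|^2> = 3.
n_traj, n_steps, dt = 10000, 1000, 0.01
Q = rng.standard_normal((n_traj, 3))        # connector vectors
for _ in range(n_steps):
    Q += -0.5 * Q * dt + np.sqrt(dt) * rng.standard_normal(Q.shape)

mean_sq = (Q**2).sum(axis=1).mean()         # ensemble average of |Q|^2
print(round(mean_sq, 2))
```

Recovering the known equilibrium moment is the same kind of validation against exact FP solutions that the review applies to the Rouse, Zimm and Kramers models.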
http://hdl.handle.net/10985/19486
On the effective conductivity and the apparent viscosity of a thin rough polymer interface using PGD‐based separated representations
AMMAR, Amine; GHNATIOS, Chady; DELPLACE, Frank; BARASINSKI, Anais; DUVAL, Jean-Louis; CUETO, Elias; CHINESTA, Francisco
Composite manufacturing processes usually proceed from preimpregnated preforms that are consolidated by simultaneously applying heat and pressure, so as to ensure the perfect contact compulsory for making molecular diffusion possible. However, in practice, the contact is rarely perfect. This results in a rough interface where air can remain entrapped, thus affecting the effective thermal conductivity. Moreover, the interfacial melted polymer is squeezed and flows in the rough gap created by the fibers located on the prepreg surfaces. Because of the typical dimensions of a composite prepreg, with a thickness orders of magnitude smaller than its other in-plane dimensions, and a surface roughness with a characteristic size orders of magnitude smaller than the prepreg thickness, high-fidelity numerical simulations for elucidating the impact of surface and interface roughness remain unattainable today, despite the impressive advances in computational resources. This work aims at elucidating the impact of roughness on heat conduction and on the effective viscosity of the interfacial polymer squeeze flow by using an advanced numerical strategy able to reach resolutions never attained until now, a sort of numerical microscope able to resolve the smallest geometrical detail.
Published: Wed, 01 Jan 2020 00:00:00 GMT
http://hdl.handle.net/10985/17960
Effects of a bent structure on the linear viscoelastic response of diluted carbon nanotube suspensions
CRUZ, Camilo; ILLOUL, Lounès; CHINESTA, Francisco; RÉGNIER, Gilles
Isolated carbon nanotubes in suspension have commonly been modelled as perfectly straight structures. Nevertheless, single-wall carbon nanotubes (SWNTs) naturally contain side-wall defects and, in consequence, natural bent configurations. Hence, a semi-flexible filament model with a natural bent configuration was proposed to represent the SWNT structure physically. This continuous model was discretized as a non-freely jointed multi-bead-rod system with a natural bent configuration. Using a Brownian dynamics algorithm, the dynamic mechanical contribution to the linear viscoelastic response of naturally bent SWNTs in dilute suspension was simulated. The dynamics of such a system shows the appearance of new relaxation processes at intermediate frequencies, characterized mainly by the activation of a mild elasticity. The evolution of the storage modulus at those intermediate frequencies strongly depends on the flexibility of the system, given by the rigidity constant of the bending potential and the number of constitutive rods.
Published: Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/10985/14597
On the coupling of local 3D solutions and global 2D shell theory in structural mechanics
QUARANTA, Giacomo; ZIANE, Mustapha; ABISSET-CHAVANNE, Emmanuelle; DUVAL, Jean Louis; CHINESTA, Francisco; ESI GROUP
Most mechanical systems and complex structures exhibit plate and shell components. Therefore, 2D simulation based on plate and shell theory appears as an appealing choice in structural analysis, as it reduces the computational complexity. Nevertheless, this 2D framework fails to capture rich physics that compromises the usual hypotheses considered when deriving standard plate and shell theories. To circumvent, or at least alleviate, this issue, the authors proposed in former works an in-plane-out-of-plane separated representation able to capture rich 3D behaviors while keeping the computational complexity of 2D simulations. However, that procedure proved too intrusive to be introduced into existing commercial software. Moreover, experience indicated that such enriched descriptions are only needed locally, in some regions or structural components. In the present paper we propose an enrichment procedure able to address 3D local behaviors while preserving a direct, minimally invasive coupling with existing plate and shell discretizations. The proposed strategy will be extended to inelastic behaviors and structural dynamics.
Published: Tue, 01 Jan 2019 00:00:00 GMT
http://hdl.handle.net/10985/18405
Code2vect: An efficient heterogenous data classifier and nonlinear regression technique
ARGERICH MARTÍN, Clara; IBÁÑEZ PINILLO, Rubén; BARASINSKI, Anaïs; CHINESTA, Francisco
The aim of this paper is to present a new classification and regression algorithm based on artificial intelligence. The main feature of this algorithm, called Code2Vect, is the nature of the data it treats: qualitative or quantitative, and continuous or discrete. Contrary to other artificial intelligence techniques based on "Big Data," this new approach enables working with a reduced amount of data, within the so-called "Smart Data" paradigm. Moreover, the main purpose of this algorithm is to enable the representation of high-dimensional data and, more specifically, the grouping and visualization of these data according to a given target. For that purpose, the data are projected into a vector space equipped with an appropriate metric, able to group data according to their affinity with respect to a given output of interest. Another application of this algorithm lies in its prediction capability: as with most common data-mining techniques such as regression trees, given an input the output is inferred, in this case taking into account the nature of the data described above. To illustrate its potential, two different applications are addressed, one concerning the representation of high-dimensional and categorical data and another featuring the prediction capabilities of the algorithm.
Published: Tue, 01 Jan 2019 00:00:00 GMT
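The idea of a projection whose metric groups points by output affinity can be sketched as follows. This is one plausible reading of the abstract, not the published Code2Vect algorithm: heterogeneous data are one-hot coded, a linear map W is fitted so that squared embedded distances match squared target differences, and prediction uses neighbours in the embedded space. All data and hyperparameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mixed data set: one qualitative feature (one-hot coded)
# and one quantitative feature; the target depends on both.
n = 80
cat = rng.integers(0, 3, n)                      # qualitative variable
cont = rng.uniform(0.0, 1.0, n)                  # quantitative variable
X = np.hstack([np.eye(3)[cat], cont[:, None]])   # heterogeneous -> numeric
y = 0.5 * cat + cont                             # output of interest

# Fit a linear map W so that squared distances in the embedded space
# match squared differences of the target: points with similar outputs
# end up close together.
i0, i1 = np.triu_indices(n, 1)
D = X[i0] - X[i1]                                # pairwise input differences
dy2 = (y[i0] - y[i1]) ** 2

def loss_and_grad(W):
    E = D @ W.T                                  # embedded pair differences
    err = (E ** 2).sum(axis=1) - dy2
    grad = 4.0 * np.einsum('m,mk,md->kd', err, E, D) / len(err)
    return (err ** 2).mean(), grad

W = 0.1 * rng.standard_normal((2, X.shape[1]))
loss0, _ = loss_and_grad(W)
loss = loss0
for _ in range(1000):                            # plain gradient descent
    loss, grad = loss_and_grad(W)
    W -= 0.02 * grad

# Prediction: embed a new point and average the targets of its nearest
# training neighbours in the embedded space.
x_new = np.hstack([np.eye(3)[1], [0.25]])
d = (((X - x_new) @ W.T) ** 2).sum(axis=1)
y_pred = y[np.argsort(d)[:5]].mean()
print(round(loss0, 3), "->", round(loss, 3))
```

The small sample size (80 points) reflects the "Smart Data" setting the abstract contrasts with Big Data approaches.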
http://hdl.handle.net/10985/18605
Shape parametrization of bio-mechanical finite element models based on medical images
LAUZERAL, Nathan; BORZACCHIELLO, Domenico; KUGLER, Michaël; GEORGE, Daniel; RÉMOND, Yves; HOSTETTLER, Alexandre; CHINESTA, Francisco
The main objective of this study is to combine statistical shape analysis with a morphing procedure in order to generate shape-parametric finite element models of tissues and organs, and to explore the reliability and limitations of this approach when applied to databases of real medical images. As classical statistical shape models are not always adapted to the morphing procedure, a new registration method was developed in order to maximize the morphing efficiency. The method was compared to the traditional iterative thin-plate spline (iTPS) approach. Two data sets, of 33 proximal femur shapes and 385 liver shapes, were used for the comparison. Principal component analysis was used to obtain the principal morphing modes. In terms of anatomical shape reconstruction (evaluated through the criteria of generalization, compactness and specificity), our approach compared fairly well to the iTPS method, while performing remarkably better in terms of mesh quality, since it was less prone to generating invalid meshes in the interior. This was particularly true in the liver case. Such a methodology offers a potential application for the generation of automated finite element (FE) models from medical images. Parametrized anatomical models can also be used to assess the influence of inter-patient variability on the biomechanical response of the tissues. Indeed, thanks to the shape parametrization, the user easily has access to a valid FE model for any shape belonging to the parameter subspace.
Published: Tue, 01 Jan 2019 00:00:00 GMT
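The statistical-shape-model core of the pipeline, PCA over registered shapes yielding a mean shape plus principal modes, can be sketched on toy data. The ellipse "anatomy" below is purely illustrative and stands in for the registered landmark sets of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "anatomy" database: each shape is 50 landmarks on an ellipse
# whose radii vary from patient to patient (purely illustrative data).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
shapes = []
for _ in range(40):
    a = 1.0 + 0.3 * rng.standard_normal()
    b = 0.6 + 0.2 * rng.standard_normal()
    shapes.append(np.concatenate([a * np.cos(t), b * np.sin(t)]))
S = np.array(shapes)                    # one flattened shape per row

# Statistical shape model: mean shape plus principal modes of variation.
mean_shape = S.mean(axis=0)
U, sv, Vt = np.linalg.svd(S - mean_shape, full_matrices=False)

# Shape-parametric model: any shape in the subspace is the mean plus
# weighted modes; b1 and b2 are amplitudes in standard-deviation units.
b1, b2 = 0.5, -0.3
std = sv / np.sqrt(len(S) - 1)
new_shape = mean_shape + b1 * std[0] * Vt[0] + b2 * std[1] * Vt[1]
print(new_shape.shape)
```

In the paper's setting, each point of the parameter subspace would additionally be pushed through the morphing procedure to produce a valid FE mesh.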
http://hdl.handle.net/10985/18438
Intelligent assistant system as a context-aware decision-making support for the workers of the future
BELKADI, Farouk; DHUIEB, Mohamed Anis; AGUADO, José Vicente; LAROCHE, Florent; BERNARD, Alain; CHINESTA, Francisco
The key role of information and communication technologies (ICT) in improving manufacturing productivity within the factory-of-the-future paradigm is well established. These tools are used in a wide range of product lifecycle activities, from the early design phase to product recycling. Generally, assistance tools are mainly dedicated to the management board, and fewer initiatives focus on the operational needs of the worker at the shop-floor level. This paper proposes a context-aware knowledge-based system dedicated to supporting the actors of the factory with the right information at the right time and in the appropriate format, with regard to their context of work and level of expertise. In particular, specific assistance functionalities are dedicated to the workers in charge of machine configuration and the realization of manufacturing operations. PGD-based (Proper Generalized Decomposition) algorithms are used for real-time simulation of industrial processes and machine configuration. At the conceptual level, a semantic model is proposed as a key enabler for the structuring of the knowledge-based system.
Published: Wed, 01 Jan 2020 00:00:00 GMT