SAM
https://sam.ensam.eu:443
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Fri, 10 Jul 2020 00:24:18 GMT

Analyse multiéchelle de la rugosité des surfaces
http://hdl.handle.net/10985/10842
VAN GORP, Adrien; BIGERELLE, Maxence; IOST, Alain
Surface roughness analysis consists in separating defects into different orders, or types, corresponding to form, waviness and roughness. The standard applied to common cases uses Gaussian filtering to perform this separation. The cut-off wavelength, called the cutoff, must then be set in order to compute the parameters characterising the roughness. This value is chosen among the following: 0.08, 0.25, 0.8, 2.5 and 8 mm, according to the type of roughness studied. Each parameter is then computed over a so-called "sampling" length whose value matches that of the cutoff. Moreover, even though most roughness measurements aim at validating surface quality, some studies seek to highlight either the influence of a surface-generation process on roughness, or the correlation between roughness and a property of the surface under study. In both cases, the goal is to reveal one or more physical phenomena involved in the creation or the functionality of the surface. A question then arises: are the standardised sampling-length values the most relevant ones for revealing the physical phenomena sought? The answer can be provided by the multiscale approach, which consists in making no assumption about the sampling length to use and in applying the filtering over a whole range of sampling lengths. The sampling length that best reveals the effect of the surface-generation process, or the correlation with the property of interest, is then selected by analysis of variance.
Mon, 01 Jan 2007 00:00:00 GMT
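The multiscale filtering described above can be sketched in a few lines: the profile is low-pass filtered with a Gaussian weight function (the constant below follows the ISO 16610-21 definition), the roughness is the residual, and a parameter such as Ra is evaluated over a range of cutoffs rather than a single normalized one. The profile, sampling step and cutoff values are illustrative assumptions, not data from the paper.

```python
import math

ALPHA = math.sqrt(math.log(2) / math.pi)  # Gaussian filter constant (ISO 16610-21)

def gaussian_waviness(z, dx, cutoff):
    """Low-pass (waviness) line of profile z, sampled at spacing dx,
    for the given cutoff wavelength (same unit as dx)."""
    half = int(cutoff / dx)  # truncate the weight function at +/- one cutoff
    ks = range(-half, half + 1)
    w = [math.exp(-math.pi * ((k * dx) / (ALPHA * cutoff)) ** 2) for k in ks]
    out = []
    for i in range(len(z)):
        acc = norm = 0.0
        for k, wk in zip(ks, w):
            j = i + k
            if 0 <= j < len(z):  # renormalize at the profile edges
                acc += wk * z[j]
                norm += wk
        out.append(acc / norm)
    return out

def ra(z):
    """Arithmetic mean deviation of a profile."""
    m = sum(z) / len(z)
    return sum(abs(v - m) for v in z) / len(z)

# Illustrative 2 mm profile (5 µm step): one long wave plus a short one
profile = [math.sin(2 * math.pi * x / 0.8) + 0.1 * math.sin(2 * math.pi * x / 0.05)
           for x in (i * 0.005 for i in range(400))]
for cutoff in (0.08, 0.25, 0.8):  # evaluate Ra of the roughness at several cutoffs
    wav = gaussian_waviness(profile, 0.005, cutoff)
    print(cutoff, round(ra([z - w for z, w in zip(profile, wav)]), 4))
```

With a small cutoff, the long wave is classified as waviness and the roughness Ra stays low; with a large cutoff, much of the long wave passes into the roughness line.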

A new model of the heat transfer in materials: the surfacic potential algorithm
http://hdl.handle.net/10985/9732
BIGERELLE, Maxence; BOUNICHANE, Benaamer; HAGEGE, Benjamin; JOURANI, Abdeljalil; IOST, Alain
This paper proposes a new simulation method for thermal transfer based on the concept of Brownian motion, via potential theory and the characteristics of the materials. In our simulation the particles originate on the surface, and we propose an algorithm, called the 'Surfacic Potential Algorithm', that determines the temperature map. This algorithm converges faster than the one resulting from potential theory alone and can handle adiabatic surfaces. It also accounts for the thermal heterogeneity of the material. Its relevance is verified on thermal problems whose analytical solution is known.
Fri, 01 Jan 2010 00:00:00 GMT
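The Brownian-motion idea can be illustrated with the classical walk-on-spheres estimator for the Laplace equation (a textbook sketch, not the authors' Surfacic Potential Algorithm): the temperature at a point is the expected boundary temperature at the spot where a Brownian path first reaches the boundary. The geometry and boundary data below are made up for illustration.

```python
import math
import random

def dist_to_boundary(x, y):
    """Distance from (x, y) to the edge of the unit square."""
    return min(x, 1 - x, y, 1 - y)

def boundary_temp(x, y):
    """Dirichlet data: left edge held at T = 1, the other three edges at T = 0."""
    return 1.0 if x <= min(1 - x, y, 1 - y) else 0.0

def walk_on_spheres(x, y, rng, eps=1e-3):
    """One Brownian path, advanced sphere by sphere until it reaches the boundary."""
    while True:
        r = dist_to_boundary(x, y)
        if r < eps:
            return boundary_temp(x, y)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def temperature(x, y, n_walks=20000, seed=0):
    """Monte-Carlo estimate of the steady-state temperature at (x, y)."""
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, rng) for _ in range(n_walks)) / n_walks

print(temperature(0.5, 0.5))  # the exact value at the centre is 0.25 by symmetry
```

Each path needs only a handful of jumps, which is why potential-theory estimators of this kind are attractive for point-wise temperature queries without meshing.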

Statistical artefacts in the determination of the fractal dimension by the slit island method
http://hdl.handle.net/10985/10839
BIGERELLE, Maxence; IOST, Alain
This paper comments on some statistical aspects of the slit island method, which is widely used to calculate the fractal dimension of fractured surfaces or of material features such as grain geometry. If noise is introduced when measuring the areas and perimeters of the islands (experimental errors), it is shown that errors are made in the calculation of the fractal dimension; moreover, a false analytical relation between a physical process parameter and the fractal dimension can be found. In addition, a positive or a negative correlation with the same physical process parameter can be obtained depending on whether the regression is performed by plotting the noisy area against the noisy perimeter of the considered islands, or vice versa. Monte-Carlo simulations confirm the analytical relations obtained from statistical considerations.
Thu, 01 Jan 2004 00:00:00 GMT
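The statistical artefact itself is easy to reproduce with a small Monte-Carlo experiment: when noise is added to both log-area and log-perimeter, regressing one on the other underestimates the slope (and hence the fractal dimension), while the inverse regression overestimates it. All numbers below are illustrative assumptions.

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

rng = random.Random(0)
D_TRUE = 1.3  # fractal dimension used to generate the islands
log_A = [rng.uniform(0.0, 6.0) for _ in range(2000)]  # island areas (log scale)
log_P = [(D_TRUE / 2) * a + 0.5 for a in log_A]       # exact slit-island relation

SIGMA = 0.4  # measurement noise added to both log-quantities
noisy_A = [a + rng.gauss(0.0, SIGMA) for a in log_A]
noisy_P = [p + rng.gauss(0.0, SIGMA) for p in log_P]

D_perim_on_area = 2 * slope(noisy_A, noisy_P)  # regress perimeter on area: biased low
D_area_on_perim = 2 / slope(noisy_P, noisy_A)  # regress area on perimeter: biased high
print(D_TRUE, D_perim_on_area, D_area_on_perim)
```

The gap between the two estimates is the classical errors-in-variables attenuation; the true dimension lies between them, which is exactly the regression-direction artefact the abstract describes.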

Estimating the parameters of a generalized lambda distribution
http://hdl.handle.net/10985/10868
FOURNIER, Benjamin; RUPIN, Nicolas; BIGERELLE, Maxence; NAJJAR, Denis; IOST, Alain; WILCOX, R
The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution times with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and that reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied, and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all estimators are biased, and here it is found that the bias becomes negligible for sample sizes n ⩾ 2×10³. The 0.025 and 0.975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed, it is shown that, up to n = 10⁶, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests.
Mon, 01 Jan 2007 00:00:00 GMT
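For context, a GLD is fully specified by its quantile function, so sampling from one is immediate. The sketch below uses the Ramberg–Schmeiser parameterisation with the classic parameter set that approximates the standard normal; it is background material, not the estimation method of the paper.

```python
import random

def gld_quantile(u, l1, l2, l3, l4):
    """Ramberg–Schmeiser GLD quantile function:
    Q(u) = l1 + (u**l3 - (1 - u)**l4) / l2."""
    return l1 + (u ** l3 - (1.0 - u) ** l4) / l2

def gld_sample(n, params, seed=0):
    """Draw n values by pushing uniforms through the quantile function."""
    rng = random.Random(seed)
    return [gld_quantile(rng.random(), *params) for _ in range(n)]

# Classic Ramberg–Schmeiser parameter set approximating the standard normal
NORMAL_LIKE = (0.0, 0.1975, 0.1349, 0.1349)

xs = gld_sample(100_000, NORMAL_LIKE, seed=1)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 3), round(var, 3))  # close to 0 and 1
```

Because fitting only requires matching a few quantiles of Q(u) to empirical quantiles, this representation is what makes percentile-style estimators fast compared with starship-type searches.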

Multiscale Roughness Analysis in Injection-Molding Process
http://hdl.handle.net/10985/10796
BIGERELLE, Maxence; VAN GORP, Adrien; IOST, Alain
The roughness of polymer surfaces is often investigated to guarantee both the surface integrity and the surface functionality. One of the major problems in roughness measurement analyses consists in determining both the evaluation length and the reference line (i.e., the degree of the polynomial equation) from which roughness parameters are computed. This article outlines an original generic method, based on generalized analysis of variance and experimental design methodology, for estimating the most relevant roughness parameter p, the most pertinent scale s and, finally, the degree of the polynomial fitting d. This methodology is then applied to characterize the influence of four process parameters on the final roughness of polypropylene samples obtained by injection molding. This method allows us to determine the most efficient triplet (p, s, d) that best discriminates the effect of a process parameter q. It is shown that different (p, s, d) values are assigned to each process parameter, finally giving the scale at which each process parameter modifies the roughness of a polymeric surface obtained by injection molding. POLYM. ENG. SCI., 2008. © 2008 Society of Plastics Engineers
Tue, 01 Jan 2008 00:00:00 GMT
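The discrimination step rests on the one-way ANOVA F statistic: for each candidate scale, compute F for the roughness parameter across the process-parameter levels and keep the scale with the largest F. A minimal sketch, with made-up effect sizes:

```python
import random

def f_statistic(groups):
    """One-way ANOVA F: between-group over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = random.Random(0)
# Hypothetical roughness parameter at three scales for two process settings;
# the assumed process effect is strongest at scale 0.25 mm
EFFECT = {0.08: 0.1, 0.25: 1.0, 0.8: 0.3}
f_by_scale = {
    s: f_statistic([
        [rng.gauss(0.0, 1.0) for _ in range(30)],        # setting A
        [rng.gauss(EFFECT[s], 1.0) for _ in range(30)],  # setting B
    ])
    for s in EFFECT
}
print(max(f_by_scale, key=f_by_scale.get))  # the most discriminating scale
```

The full method iterates this over the triplet (p, s, d), not just the scale, but the selection criterion is the same.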

Assessment of the constitutive law by inverse methodology: Small punch test and hardness
http://hdl.handle.net/10985/10817
ISSELIN, Jérôme; IOST, Alain; GOLEK, Jocelyn; NAJJAR, Denis; BIGERELLE, Maxence
The relevance of small-punch tests and indentation (hardness) tests is compared with regard to the determination of a constitutive law in the case of a non-active ferrite–bainite steel taken from a French power plant. First, small-punch tests were performed on material samples, and the load–deflection curves were compared with finite element calculations using the FORGE2 Standard code. As a result, the strength coefficient and the strain-hardening exponent of Hollomon's constitutive law were determined by an inverse method (Simplex method). It was also shown that a three-parameter constitutive law such as Ludwik–Hollomon's leads to an indetermination, since its parameters are correlated with each other. Second, indentation tests were performed with a ball indenter, and the parameters of the constitutive law were determined from the analysis of the load–indentation-depth curves. Both methods give results in good agreement with the true stress–true strain curve obtained by classical tensile testing, thus proving their applicability to nuclear materials.
Sun, 01 Jan 2006 00:00:00 GMT
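A drastically simplified version of the inverse identification: if stress–strain pairs were observed directly, Hollomon's law σ = K·εⁿ could be fitted by linear regression in log-log space. The paper instead couples finite element simulation of the small-punch test with a Simplex search; the numbers below are synthetic.

```python
import math

def fit_hollomon(strains, stresses):
    """Recover K and n of Hollomon's law sigma = K * eps**n
    by linear regression in log-log space."""
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    K = math.exp(my - n * mx)
    return K, n

# Synthetic "experiment": K = 700 MPa, n = 0.18
eps = [0.01 * i for i in range(1, 21)]
sig = [700.0 * e ** 0.18 for e in eps]
K, n = fit_hollomon(eps, sig)
print(round(K, 1), round(n, 3))
```

The two-parameter form is identifiable from such data; the abstract's point is that adding a third parameter (Ludwik–Hollomon) makes the parameters mutually correlated and the inversion indeterminate.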

Influence of the morphological texture on the low wear damage of paint coated sheets
http://hdl.handle.net/10985/9865
HENNEBELLE, François; NAJJAR, Denis; BIGERELLE, Maxence; IOST, Alain
The influence of the morphological texture (flat and structured) of a polyester-based paint coating on low wear damage is characterised by means of roughness and gloss measurements. Using statistical methods, the aim of the investigation is to determine, among about 60 surface roughness parameters, the most relevant with regard to the morphological texture and the wear behaviour of polymer coatings. The level of relevance of each roughness parameter is quantitatively assessed through the calculation of a statistical index of performance, determined by combining the two-way analysis of variance (ANOVA) and the computer-based Bootstrap method (CBBM). For the experimental conditions of the present investigation, the fractal dimension and a roughness parameter directly related to the number of inflexion points of the profiles are shown to be the most relevant parameters for discriminating the different morphological textures of the studied coatings and for characterising the low wear damage, respectively. Even though the gloss reduction related to low wear damage is more visually perceptible at the macroscopic scale for the flat products than for the structured ones, the magnitude of this damage is shown to be very similar at the microscopic scale, whatever the morphological texture of the paint coatings.
Sun, 01 Jan 2006 00:00:00 GMT
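The computer-based Bootstrap idea can be sketched as a percentile confidence interval for the difference in a roughness parameter between two textures: if the interval excludes zero, the parameter discriminates the textures. The sample values below are simulated, not the paper's data.

```python
import random

def bootstrap_ci(sample, stat, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for statistic `stat`."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(sample) for _ in sample])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(1)
# Simulated fractal-dimension measurements for the two morphological textures
flat = [rng.gauss(1.20, 0.05) for _ in range(40)]
structured = [rng.gauss(1.35, 0.05) for _ in range(40)]
diffs = [s - f for f, s in zip(flat, structured)]
lo, hi = bootstrap_ci(diffs, lambda xs: sum(xs) / len(xs))
print(lo, hi)  # an interval excluding 0 means the parameter discriminates
```

Ranking each of the ~60 parameters by how decisively its interval separates the groups yields the kind of performance index the abstract describes.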

Effect of surface roughness in the determination of the mechanical properties of material using nanoindentation test
http://hdl.handle.net/10985/9660
XIA, Yang; BIGERELLE, Maxence; MARTEAU, Julie; MAZERAN, PierreEmmanuel; BOUVIER, Salima; IOST, Alain
A quantitative model is proposed for the estimation of macrohardness using nanoindentation tests. It decreases the effect of errors related to the non-reproducibility of the nanoindentation test on calculations of macrohardness by taking into account the indentation size effect and the surface roughness. The most innovative feature of this model is the simultaneous statistical treatment of all the nanoindentation loading curves. The curve treatment mainly corrects errors in the zero-depth determination by correlating the curves' positions through the use of a relative reference. First, the experimental loading curves are described using the Bernhardt law. The fitted curves are then shifted in order to simultaneously reduce the gaps between them that result from the scatter in the experimental curves. A set of shift depths, Δhc, is thereby identified. The proposed approach is applied to a large set of TiAl6V4 titanium-based samples with different roughness levels, polished with eleven silicon carbide sandpapers from grit 80 to 4,000. The results reveal that the scatter of the indentation curves increases as the surface gets rougher. The standard deviation of the shift Δhc is linearly related to the standard deviation of the surface roughness, provided the roughness is high-pass filtered at the scale of the indenter (15 µm). Using the proposed method, the estimated macrohardness for the eleven TiAl6V4 samples studied is in the range 3.5–4.1 GPa, with the smallest deviation around 0.01 GPa, which is more accurate than the value given by the Nanoindentation MTS™ system, which uses an average (around 4.3 ± 0.5 GPa). Moreover, the calculated Young's modulus of the material is around 136 ± 20 GPa, consistent with the values reported in the literature.
Wed, 01 Jan 2014 00:00:00 GMT
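The Bernhardt law fitted to each loading curve has the form P = a₁h + a₂h², so dividing the load by the depth turns the fit into a straight line. A minimal sketch on a synthetic curve (the paper's simultaneous treatment of all curves and the Δhc shifting are not reproduced here):

```python
def fit_bernhardt(depths, loads):
    """Fit Bernhardt's law P = a1*h + a2*h**2: regressing P/h on h gives
    intercept a1 (indentation size effect) and slope a2 (hardness-like term)."""
    ys = [p / h for p, h in zip(loads, depths)]
    m = len(depths)
    mx, my = sum(depths) / m, sum(ys) / m
    a2 = sum((x - mx) * (y - my) for x, y in zip(depths, ys)) / \
         sum((x - mx) ** 2 for x in depths)
    a1 = my - a2 * mx
    return a1, a2

# Synthetic loading curve with a1 = 0.05 and a2 = 2e-4 (arbitrary units)
h = [float(i) for i in range(50, 1050, 50)]
P = [0.05 * x + 2e-4 * x * x for x in h]
a1, a2 = fit_bernhardt(h, P)
print(a1, a2)
```

In the model of the paper, the a₁ term carries the indentation size effect while a₂ controls the depth-independent (macro) hardness estimate.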

Application of the generalized lambda distributions in a statistical process control methodology
http://hdl.handle.net/10985/10792
FOURNIER, Benjamin; RUPIN, Nicolas; BIGERELLE, Maxence; NAJJAR, Denis; IOST, Alain
In statistical process control (SPC) methodology, quantitative standard control charts are often based on the assumption that the observations are normally distributed. In practice, normality can fail and, consequently, the determination of assignable causes may be erroneous. After pointing out the limitations of the hypothesis-testing methodology commonly used for discriminating between Gaussian and non-Gaussian populations, a very flexible family of statistical distributions is presented in this paper and proposed for introduction into SPC methodology: the generalized lambda distributions (GLD). It is shown that the control limits usually considered in SPC are accurately predicted when modelling the usual statistical laws by means of these distributions. Besides, simulation results reveal that acceptable accuracy is obtained even for a rather small number of initial observations (approximately one hundred). Finally, specific user-friendly software was used to process, using the SPC Western Electric rules, experimental data originating from an industrial production line. This example, and the fact that the approach avoids choosing an a priori statistical law, emphasize the relevance of using the GLD in SPC.
Sun, 01 Jan 2006 00:00:00 GMT
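A minimal control-chart sketch: limits are estimated from a baseline sample, and new points are checked against Western Electric rule 1 (one point beyond a 3σ limit). With a fitted GLD one would instead take the 0.00135 and 0.99865 quantiles of the fitted distribution; all data below are simulated.

```python
import random
import statistics

def control_limits(baseline):
    """Shewhart-style 3-sigma limits estimated from a baseline sample."""
    m = statistics.mean(baseline)
    s = statistics.stdev(baseline)
    return m - 3 * s, m + 3 * s

def rule1_violations(points, lcl, ucl):
    """Western Electric rule 1: any single point beyond a control limit."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

rng = random.Random(0)
baseline = [rng.gauss(10.0, 0.2) for _ in range(100)]  # ~100 initial observations
lcl, ucl = control_limits(baseline)
production = [rng.gauss(10.0, 0.2) for _ in range(20)] + [11.5]  # one drifted point
print(rule1_violations(production, lcl, ucl))
```

When the process distribution is skewed, mean ± 3σ misplaces the limits, which is precisely why the paper substitutes GLD quantiles for the Gaussian assumption.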

Multiscale modelling of morphological evolution of rough surface during superficial, volume and evaporation-condensation diffusions
http://hdl.handle.net/10985/9730
BIGERELLE, Maxence; FAVERGEON, J.; MATHIA, Thomas; IOST, Alain
Fractal functions are used to model a metallic interface. An analytical model described by three partial differential equations is built to model the time evolution of the surface during heating, including three different mechanisms of diffusion: superficial diffusion (SD), volume diffusion (VD) and diffusion by evaporation-condensation (DEC). Initial topographies are modelled by stochastic Weierstrass functions because of their ability to reproduce experimental roughness profiles. Applied to an aluminum alloy at 550°C, a high number of roughness parameters and their variances are calculated. A classification method shows that the best geometrical descriptor for discriminating the heat effect is the fractal dimension. The most popular parameter, Ra, discriminates the processes badly (classification number = 58). The first four spectral moments of the roughness profile are correlated with the evolution of the profile. It is shown theoretically that superficial diffusion depends directly on the fourth spectral moment of the roughness profile.
Sun, 01 Jan 2012 00:00:00 GMT
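A stochastic Weierstrass-type profile is straightforward to generate: cosines with geometrically spaced frequencies γᵏ, amplitudes γ^(−kH) and random phases, where H relates to the fractal dimension via D = 2 − H. The parameters below are illustrative, not those of the paper.

```python
import math
import random

def weierstrass_profile(n_points, H=0.5, gamma=1.5, n_modes=12, seed=0):
    """Stochastic Weierstrass-type profile on [0, 1): cosines with
    frequencies gamma**k, amplitudes gamma**(-k * H) and random phases."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_modes)]
    return [
        sum(gamma ** (-k * H) *
            math.cos(2.0 * math.pi * gamma ** k * (i / n_points) + phases[k])
            for k in range(n_modes))
        for i in range(n_points)
    ]

z = weierstrass_profile(1024)
mean = sum(z) / len(z)
ra = sum(abs(v - mean) for v in z) / len(z)
print(round(ra, 3))  # Ra of the simulated profile
```

Profiles like this serve as initial conditions for diffusion models: each mode decays at a rate set by its wavelength, so the spectral moments track the smoothing.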