<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Laboratoire d’Ingénierie des Systèmes Physiques Et Numériques (LISPEN)</title>
<link>http://hdl.handle.net/10985/177</link>
<description/>
<pubDate>Sat, 11 Apr 2026 23:27:52 GMT</pubDate>
<dc:date>2026-04-11T23:27:52Z</dc:date>
<image>
<title>Laboratoire d’Ingénierie des Systèmes Physiques Et Numériques (LISPEN)</title>
<url>https://sam.ensam.eu:443/bitstream/id/0bea334c-cf4f-44d4-93fe-8c066c66b2f2/</url>
<link>http://hdl.handle.net/10985/177</link>
</image>
<item>
<title>Free Hand-Based 3D Interaction in Optical See-Through Augmented Reality Using Leap Motion</title>
<link>http://hdl.handle.net/10985/14475</link>
<description>Free Hand-Based 3D Interaction in Optical See-Through Augmented Reality Using Leap Motion
ABABSA, Fakhreddine; HE, Junhui; CHARDONNET, Jean-Rémy
In augmented reality environments, natural hand interaction between the user and virtual objects is a major challenge for manipulating rendered objects in a convenient way. Microsoft’s HoloLens (Microsoft 2018) is an innovative augmented reality (AR) device that provides an impressive user experience. However, the gesture interactions it offers are very limited: the HoloLens currently recognizes only two core gestures, Air tap and Bloom. To address this issue, we propose to integrate a Leap Motion Controller (LMC) with the HoloLens device (Figure 1). We use the 3D hand and finger tracking provided by the LMC to propose new free hand-based interactions that are more natural and intuitive, and we implement three fully 3D techniques for selection, translation and rotation manipulation. In this work, we first investigate how to combine the two devices so that they work together in real time, and we then evaluate the proposed 3D hand interactions.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/14475</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
<dc:creator>ABABSA, Fakhreddine</dc:creator>
<dc:creator>HE, Junhui</dc:creator>
<dc:creator>CHARDONNET, Jean-Rémy</dc:creator>
<dc:description>In augmented reality environments, natural hand interaction between the user and virtual objects is a major challenge for manipulating rendered objects in a convenient way. Microsoft’s HoloLens (Microsoft 2018) is an innovative augmented reality (AR) device that provides an impressive user experience. However, the gesture interactions it offers are very limited: the HoloLens currently recognizes only two core gestures, Air tap and Bloom. To address this issue, we propose to integrate a Leap Motion Controller (LMC) with the HoloLens device (Figure 1). We use the 3D hand and finger tracking provided by the LMC to propose new free hand-based interactions that are more natural and intuitive, and we implement three fully 3D techniques for selection, translation and rotation manipulation. In this work, we first investigate how to combine the two devices so that they work together in real time, and we then evaluate the proposed 3D hand interactions.</dc:description>
</item>
<item>
<title>Combining HoloLens and Leap-Motion for Free Hand-Based 3D Interaction in MR Environments</title>
<link>http://hdl.handle.net/10985/19485</link>
<description>Combining HoloLens and Leap-Motion for Free Hand-Based 3D Interaction in MR Environments
ABABSA, Fakhreddine; HE, Junhui; CHARDONNET, Jean-Rémy
The ability to interact with virtual objects using gestures would allow users to improve their experience in Mixed Reality (MR) environments, especially when they use AR headsets. Today, MR head-mounted displays such as the HoloLens integrate hand-gesture-based interaction, allowing users to take actions in MR environments. However, the interactions offered remain limited. In this paper, we propose to combine a Leap Motion Controller (LMC) with a HoloLens in order to improve gesture interaction with virtual objects. Two main contributions are presented: an interactive calibration procedure for the coupled HoloLens-LMC device and an intuitive hand-based interaction approach using LMC data in the HoloLens environment. An initial set of experiments was carried out to evaluate the accuracy and usability of the proposed approach.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/19485</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
<dc:creator>ABABSA, Fakhreddine</dc:creator>
<dc:creator>HE, Junhui</dc:creator>
<dc:creator>CHARDONNET, Jean-Rémy</dc:creator>
<dc:description>The ability to interact with virtual objects using gestures would allow users to improve their experience in Mixed Reality (MR) environments, especially when they use AR headsets. Today, MR head-mounted displays such as the HoloLens integrate hand-gesture-based interaction, allowing users to take actions in MR environments. However, the interactions offered remain limited. In this paper, we propose to combine a Leap Motion Controller (LMC) with a HoloLens in order to improve gesture interaction with virtual objects. Two main contributions are presented: an interactive calibration procedure for the coupled HoloLens-LMC device and an intuitive hand-based interaction approach using LMC data in the HoloLens environment. An initial set of experiments was carried out to evaluate the accuracy and usability of the proposed approach.</dc:description>
</item>
<item>
<title>3D Human Tracking with Catadioptric Omnidirectional Camera</title>
<link>http://hdl.handle.net/10985/16195</link>
<description>3D Human Tracking with Catadioptric Omnidirectional Camera
ABABSA, Fakhreddine; HADJ-ABDELKADER, Hicham; BOUI, Marouane
This paper deals with the problem of 3D human tracking in catadioptric images using a particle-filtering framework. While traditional perspective images have been widely exploited, only a few methods have been developed for catadioptric vision for the human detection and tracking problems. We propose to extend 3D pose estimation from the case of perspective cameras to catadioptric sensors. In this paper, we develop original likelihood functions based, on the one hand, on the geodesic distance in the spherical space SO(3) and, on the other hand, on the mapping between the human silhouette in the images and the projected 3D model. These likelihood functions, combined with a particle filter whose propagation model is adapted to the spherical space, allow accurate 3D human tracking in omnidirectional images. Both visual and quantitative analyses of the experimental results demonstrate the effectiveness of our approach.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/16195</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
<dc:creator>ABABSA, Fakhreddine</dc:creator>
<dc:creator>HADJ-ABDELKADER, Hicham</dc:creator>
<dc:creator>BOUI, Marouane</dc:creator>
<dc:description>This paper deals with the problem of 3D human tracking in catadioptric images using a particle-filtering framework. While traditional perspective images have been widely exploited, only a few methods have been developed for catadioptric vision for the human detection and tracking problems. We propose to extend 3D pose estimation from the case of perspective cameras to catadioptric sensors. In this paper, we develop original likelihood functions based, on the one hand, on the geodesic distance in the spherical space SO(3) and, on the other hand, on the mapping between the human silhouette in the images and the projected 3D model. These likelihood functions, combined with a particle filter whose propagation model is adapted to the spherical space, allow accurate 3D human tracking in omnidirectional images. Both visual and quantitative analyses of the experimental results demonstrate the effectiveness of our approach.</dc:description>
</item>
<item>
<title>Augmented Reality Application in Manufacturing Industry: Maintenance and Non-destructive Testing (NDT) Use Cases</title>
<link>http://hdl.handle.net/10985/19418</link>
<description>Augmented Reality Application in Manufacturing Industry: Maintenance and Non-destructive Testing (NDT) Use Cases
ABABSA, Fakhreddine
In recent years, a structural transformation of the manufacturing industry has been occurring as a result of the digital revolution. Digital tools, notably virtual and augmented reality, are now systematically used throughout the entire value chain, from design to production to marketing. The purpose of this paper is therefore to review, through concrete use cases, the progress of these novel technologies and their use in the manufacturing industry.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/19418</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
<dc:creator>ABABSA, Fakhreddine</dc:creator>
<dc:description>In recent years, a structural transformation of the manufacturing industry has been occurring as a result of the digital revolution. Digital tools, notably virtual and augmented reality, are now systematically used throughout the entire value chain, from design to production to marketing. The purpose of this paper is therefore to review, through concrete use cases, the progress of these novel technologies and their use in the manufacturing industry.</dc:description>
</item>
<item>
<title>3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter</title>
<link>http://hdl.handle.net/10985/19775</link>
<description>3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter
ABABSA, Fakhreddine; HADJ-ABDELKADER, Hicham; BOUI, Marouane
The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired from conventional perspective cameras, while omnidirectional images have seldom been used and published research in this field remains limited. In this study, Riemannian manifolds were considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions associated with the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine-learning-based techniques in terms of 3D pose accuracy. The Root Mean Square Error (RMSE) was measured by comparing estimated 3D poses against ground-truth data, resulting in a mean error of 0.065 m for the walking action.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/19775</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
<dc:creator>ABABSA, Fakhreddine</dc:creator>
<dc:creator>HADJ-ABDELKADER, Hicham</dc:creator>
<dc:creator>BOUI, Marouane</dc:creator>
<dc:description>The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired from conventional perspective cameras, while omnidirectional images have seldom been used and published research in this field remains limited. In this study, Riemannian manifolds were considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions associated with the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine-learning-based techniques in terms of 3D pose accuracy. The Root Mean Square Error (RMSE) was measured by comparing estimated 3D poses against ground-truth data, resulting in a mean error of 0.065 m for the walking action.</dc:description>
</item>
<item>
<title>Computation of dynamic transmission error for gear transmission systems using modal decomposition and Fourier series</title>
<link>http://hdl.handle.net/10985/22661</link>
<description>Computation of dynamic transmission error for gear transmission systems using modal decomposition and Fourier series
ABBOUD, Eddy; GROLET, Aurélien; MAHÉ, Hervé; THOMAS, Olivier
In this paper, a method for computing the dynamics of a geared system excited by its static transmission error is proposed. The method is based on the iterative spectral method (ISM) and on the harmonic balance method (HBM). It is shown that the dynamic transmission error (DTE) can be obtained in the frequency domain by solving a linear system of equations, which in turn allows the computation of the modal and physical coordinates of the system.
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/22661</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
<dc:creator>ABBOUD, Eddy</dc:creator>
<dc:creator>GROLET, Aurélien</dc:creator>
<dc:creator>MAHÉ, Hervé</dc:creator>
<dc:creator>THOMAS, Olivier</dc:creator>
<dc:description>In this paper, a method for computing the dynamics of a geared system excited by its static transmission error is proposed. The method is based on the iterative spectral method (ISM) and on the harmonic balance method (HBM). It is shown that the dynamic transmission error (DTE) can be obtained in the frequency domain by solving a linear system of equations, which in turn allows the computation of the modal and physical coordinates of the system.</dc:description>
</item>
<item>
<title>DEVELOPMENT OF VIRTUAL ENVIRONMENT TO ENHANCE USER EXPERIENCE WITH THE HELP OF ELECTROENCEPHALOGRAPHY</title>
<link>http://hdl.handle.net/10985/25841</link>
<description>DEVELOPMENT OF VIRTUAL ENVIRONMENT TO ENHANCE USER EXPERIENCE WITH THE HELP OF ELECTROENCEPHALOGRAPHY
ABDALHADI, Abdualrhman; NITIN, Koundal; LOU, Ruding; MOOSAVI, Mahdiyeh; MERIENNE, Frederic; YUSOFF, Mohd Zuki; SAAD, Naufal
The use of virtual reality (VR) has made significant advancements, and it is now widely used across a range of applications. However, users’ capacity to fully enjoy VR experiences continues to be limited by a chronic problem known as cybersickness. This issue is caused by the stark discrepancy between the self-motion the visual system perceives through immersive displays and the real motion the vestibular system detects. According to the sensory conflict theory, this mismatch between visual and vestibular cues leads to feelings of sickness and discomfort. This paper presents initial techniques and a framework to reduce visually induced self-motion in virtual scenes using geometric simplification approaches. The primary goal is to reduce the amount of optic flow experienced by the user during interaction with the virtual environment in a controlled laboratory setup. The proposed framework combines EEG neurofeedback with virtual reality, allowing users’ brain-wave activity and cognitive states to be monitored and assessed throughout VR sessions. The empirical evidence gathered from our investigation shows a significant correlation between the manifestation of cybersickness and enhanced neural activation within the parietal and temporal lobes. These findings were consistently observed under two experimental conditions, non-simplification and geometrical simplification, within the virtual reality (VR) environment. Notably, the observed decrease in activation intensity when employing geometrical simplification substantiates the effectiveness of our VR environment simplification strategy in attenuating cybersickness.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/25841</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
<dc:creator>ABDALHADI, Abdualrhman</dc:creator>
<dc:creator>NITIN, Koundal</dc:creator>
<dc:creator>LOU, Ruding</dc:creator>
<dc:creator>MOOSAVI, Mahdiyeh</dc:creator>
<dc:creator>MERIENNE, Frederic</dc:creator>
<dc:creator>YUSOFF, Mohd Zuki</dc:creator>
<dc:creator>SAAD, Naufal</dc:creator>
<dc:description>The use of virtual reality (VR) has made significant advancements, and it is now widely used across a range of applications. However, users’ capacity to fully enjoy VR experiences continues to be limited by a chronic problem known as cybersickness. This issue is caused by the stark discrepancy between the self-motion the visual system perceives through immersive displays and the real motion the vestibular system detects. According to the sensory conflict theory, this mismatch between visual and vestibular cues leads to feelings of sickness and discomfort. This paper presents initial techniques and a framework to reduce visually induced self-motion in virtual scenes using geometric simplification approaches. The primary goal is to reduce the amount of optic flow experienced by the user during interaction with the virtual environment in a controlled laboratory setup. The proposed framework combines EEG neurofeedback with virtual reality, allowing users’ brain-wave activity and cognitive states to be monitored and assessed throughout VR sessions. The empirical evidence gathered from our investigation shows a significant correlation between the manifestation of cybersickness and enhanced neural activation within the parietal and temporal lobes. These findings were consistently observed under two experimental conditions, non-simplification and geometrical simplification, within the virtual reality (VR) environment. Notably, the observed decrease in activation intensity when employing geometrical simplification substantiates the effectiveness of our VR environment simplification strategy in attenuating cybersickness.</dc:description>
</item>
<item>
<title>Study of the Acute Stress Effects on Decision Making Using Electroencephalography and Functional Near-Infrared Spectroscopy: A Systematic Review</title>
<link>http://hdl.handle.net/10985/25122</link>
<description>Study of the Acute Stress Effects on Decision Making Using Electroencephalography and Functional Near-Infrared Spectroscopy: A Systematic Review
ABDALHADI, Abdualrhman; KOUNDAL, Nitin; YUSOFF, Mohd Zuki; AL-QURAISHI, Maged S.; MERIENNE, Frédéric; SAAD, Naufal M.
This systematic review provides a comprehensive analysis of studies that use electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) to investigate how acute stress affects decision-making processes. The primary goal of this systematic review was to examine the influence of acute stress on decision making in challenging or stressful situations. Furthermore, we aimed to identify the specific brain regions affected by acute stress and explore the feature extraction and classification methods employed to enhance the detection of decision making under pressure. Five academic databases were carefully searched and 27 papers that satisfied the inclusion criteria were found. Overall, the results indicate the potential utility of EEG and fNIRS as techniques for identifying acute stress during decision-making and for gaining knowledge about the brain mechanisms underlying stress reactions. However, the varied methods employed in these studies and the small sample sizes highlight the need for additional studies to develop more standardized approaches for acute stress effects in decision-making tasks. The implications of the findings for the development of stress induction and technology in the decision-making process are also explained.
</description>
<pubDate>Thu, 11 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/25122</guid>
<dc:date>2024-04-11T00:00:00Z</dc:date>
<dc:creator>ABDALHADI, Abdualrhman</dc:creator>
<dc:creator>KOUNDAL, Nitin</dc:creator>
<dc:creator>YUSOFF, Mohd Zuki</dc:creator>
<dc:creator>AL-QURAISHI, Maged S.</dc:creator>
<dc:creator>MERIENNE, Frédéric</dc:creator>
<dc:creator>SAAD, Naufal M.</dc:creator>
<dc:description>This systematic review provides a comprehensive analysis of studies that use electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) to investigate how acute stress affects decision-making processes. The primary goal of this systematic review was to examine the influence of acute stress on decision making in challenging or stressful situations. Furthermore, we aimed to identify the specific brain regions affected by acute stress and explore the feature extraction and classification methods employed to enhance the detection of decision making under pressure. Five academic databases were carefully searched and 27 papers that satisfied the inclusion criteria were found. Overall, the results indicate the potential utility of EEG and fNIRS as techniques for identifying acute stress during decision-making and for gaining knowledge about the brain mechanisms underlying stress reactions. However, the varied methods employed in these studies and the small sample sizes highlight the need for additional studies to develop more standardized approaches for acute stress effects in decision-making tasks. The implications of the findings for the development of stress induction and technology in the decision-making process are also explained.</dc:description>
</item>
<item>
<title>Tabu Search Algorithm for Single and Multi-model Line Balancing Problems</title>
<link>http://hdl.handle.net/10985/20996</link>
<description>Tabu Search Algorithm for Single and Multi-model Line Balancing Problems
ABDELJAOUAD, Mohamed Amine; KLEMENT, Nathalie
This paper deals with the assembly line balancing problem. The considered objective is to minimize the weighted sum of the products’ cycle times. The originality of this objective is that it generalizes the cycle-time minimization used in single-model lines (SALBP) to the multi-model case (MALBP). An optimization algorithm combining a heuristic and a tabu-search method is presented and evaluated through an experimental study carried out on a variety of randomly generated instances for both the single- and multi-product cases. The returned solutions are compared to optimal solutions given by a mathematical model from the literature and to a proposed lower bound inspired by the classical SALBP bound. The results show that the algorithm performs well, as the average relative gap to these references is quite low for both problems.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/20996</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
<dc:creator>ABDELJAOUAD, Mohamed Amine</dc:creator>
<dc:creator>KLEMENT, Nathalie</dc:creator>
<dc:description>This paper deals with the assembly line balancing problem. The considered objective is to minimize the weighted sum of the products’ cycle times. The originality of this objective is that it generalizes the cycle-time minimization used in single-model lines (SALBP) to the multi-model case (MALBP). An optimization algorithm combining a heuristic and a tabu-search method is presented and evaluated through an experimental study carried out on a variety of randomly generated instances for both the single- and multi-product cases. The returned solutions are compared to optimal solutions given by a mathematical model from the literature and to a proposed lower bound inspired by the classical SALBP bound. The results show that the algorithm performs well, as the average relative gap to these references is quite low for both problems.</dc:description>
</item>
<item>
<title>Towards a SLAM-based augmented reality application for the 3D annotation of rock art</title>
<link>http://hdl.handle.net/10985/17575</link>
<description>Towards a SLAM-based augmented reality application for the 3D annotation of rock art
ABERGEL, Violette; JACQUOT, Kévin; DE LUCA, Livio; VERON, Philippe
The digital technologies developed in recent decades have considerably enriched survey and documentation practices in the field of cultural heritage. They now raise new issues and challenges, particularly in the management of multidimensional datasets, which require the development of new methods for the analysis, interpretation and sharing of heterogeneous data. In the case of rock art sites, additional challenges arise from their nature and fragility, and in many cases digital data alone is not sufficient to meet contextualization, analysis or traceability needs. In this context, we propose to develop an application dedicated to rock art survey, allowing 3D annotation in augmented reality. This work is part of an ongoing project on an information system dedicated to cultural heritage documentation. For this purpose, we propose a registration method based on a spatial resection calculation. We also discuss the perspectives this opens up for heritage survey and documentation, in particular in terms of visualization enhancement.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/17575</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
<dc:creator>ABERGEL, Violette</dc:creator>
<dc:creator>JACQUOT, Kévin</dc:creator>
<dc:creator>DE LUCA, Livio</dc:creator>
<dc:creator>VERON, Philippe</dc:creator>
<dc:description>The digital technologies developed in recent decades have considerably enriched survey and documentation practices in the field of cultural heritage. They now raise new issues and challenges, particularly in the management of multidimensional datasets, which require the development of new methods for the analysis, interpretation and sharing of heterogeneous data. In the case of rock art sites, additional challenges arise from their nature and fragility, and in many cases digital data alone is not sufficient to meet contextualization, analysis or traceability needs. In this context, we propose to develop an application dedicated to rock art survey, allowing 3D annotation in augmented reality. This work is part of an ongoing project on an information system dedicated to cultural heritage documentation. For this purpose, we propose a registration method based on a spatial resection calculation. We also discuss the perspectives this opens up for heritage survey and documentation, in particular in terms of visualization enhancement.</dc:description>
</item>
</channel>
</rss>
