

A real-time fingertip-gesture-based interface is still challenging for human–computer interaction, due to sensor noise, changing light levels, and the complexity of tracking a fingertip across a variety of subjects. Using fingertip tracking as a virtual mouse is a popular method of interacting with computers without a mouse device. In this work, we propose a novel virtual-mouse method using RGB-D images and fingertip detection. The hand region of interest and the center of the palm are first extracted using depth and skeleton-joint information from a Microsoft Kinect Sensor version 2, and then converted into a binary image. Then, the contours of the hand are extracted and described by a border-tracing algorithm. Finally, the fingertip location is mapped to RGB images to control the mouse cursor based on a virtual screen. The system tracks fingertips in real time at 30 FPS on a desktop computer using a single CPU and a Kinect V2. The experimental results showed a high accuracy level, and the system can work well in real-world environments with a single CPU. This fingertip-gesture-based interface allows humans to easily interact with computers by hand.

With the development of augmented-reality technology, researchers are working to reduce people's workload while increasing their productivity by studying human–computer interaction (HCI). The natural user interface (NUI) of hand-gesture recognition is an important topic in HCI. Hand-gesture-based interfaces allow humans to interact with a computer in the most natural way, typically by using fingertip movements. Fingertip detection is broadly applied in practical applications, e.g., virtual mice, remote controls, sign-language recognition, and immersive gaming technology. Therefore, virtual-mouse control by fingertip detection from images has been one of the main goals of vision-based technology in recent decades, especially with traditional red-green-blue (RGB) cameras. However, even with RGB cameras, most existing algorithms tend to fail when faced with changing light levels, complex backgrounds, multiple people, or background or foreground movements during hand tracking.

Microsoft's Kinect RGB-with-depth (RGB-D) camera has extended depth-sensing technology and interfaces for human-motion analysis applications. Some systems use depth images from the Kinect and achieve high speeds, avoiding the disadvantages of traditional RGB cameras by tracking depth maps from frame to frame. These methods use a complex mesh model and achieve real-time performance; however, they only work for hand tracking, not fingertip tracking. Fingertip detection with multiple people present simultaneously poses a great difficulty that current systems have not yet overcome. In addition, choosing a target person when several people directly face the camera is a challenging case, because it is difficult to determine accurately who the target will be. Therefore, long-term fingertip tracking remains a challenging task. To overcome these disadvantages, a system that is intuitive, affordably priced, easy to use, and allows a user to accurately control a mouse cursor with their fingertips should be introduced.

In this paper, we propose a gesture-based interface where users interact with a computer using fingertip detection in RGB-D inputs. The hand region of interest and the center of the palm are first extracted from depth images provided by the Kinect V2 skeletal tracker and converted to binary images. Then, the hand contours are extracted and described by a border-tracing algorithm, as in the sketch below.
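The paper's exact segmentation procedure is not reproduced here, but a minimal Python sketch of these first two steps follows, assuming a depth frame and a tracked palm joint are already available from the Kinect V2 SDK (acquisition not shown). The depth band, ROI size, and function names are illustrative assumptions; OpenCV's findContours, an implementation of the Suzuki-Abe border-following algorithm, stands in for the border-tracing step.

```python
import cv2
import numpy as np

def extract_hand_contour(depth_mm, palm_xy, palm_depth_mm,
                         band_mm=120, roi_half=100):
    """Segment the hand around the tracked palm joint and trace its border.

    depth_mm      : HxW uint16 depth frame in millimeters (e.g., Kinect V2)
    palm_xy       : (x, y) pixel position of the palm joint from the
                    skeletal tracker
    palm_depth_mm : depth of the palm joint in millimeters
    band_mm, roi_half : illustrative segmentation parameters
    """
    x, y = int(palm_xy[0]), int(palm_xy[1])
    h, w = depth_mm.shape
    # Crop a square region of interest centered on the palm joint.
    x0, x1 = max(x - roi_half, 0), min(x + roi_half, w)
    y0, y1 = max(y - roi_half, 0), min(y + roi_half, h)
    roi = depth_mm[y0:y1, x0:x1]

    # Binarize: keep only pixels within a depth band around the palm, so
    # background clutter and other people fall outside the mask.
    mask = ((roi > palm_depth_mm - band_mm) &
            (roi < palm_depth_mm + band_mm)).astype(np.uint8) * 255

    # Border tracing: findContours implements Suzuki-Abe border following;
    # CHAIN_APPROX_NONE keeps every border point for the K-cosine step.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # Assume the hand is the largest contour in the ROI.
    return max(contours, key=cv2.contourArea)
```

Restricting the binarization to a depth band around the palm is what makes this kind of segmentation robust to background clutter: pixels belonging to other people or objects fall outside the band and are discarded before any contour is traced.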
The K-cosine algorithm is then used to detect the fingertip location from the hand-contour coordinates, as in the sketch that follows. Finally, to control the mouse cursor based on a virtual screen, the fingertip location is mapped to RGB images. Three computer-mouse functions are considered in our research: mouse movement, left-clicking, and right-clicking; a sketch of the mapping and click logic closes the section.
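The K-cosine measure at a contour point P_i is cos θ_i = (a · b) / (|a||b|), where a = P_{i−k} − P_i and b = P_{i+k} − P_i are the vectors to the point's k-th neighbors along the border; sharp convexities such as fingertips yield values near 1. A minimal sketch follows, taking the contour produced above; the choices of k, the threshold, and the coarse local-maximum test are illustrative assumptions.

```python
import numpy as np

def k_cosine_fingertips(contour, k=16, cos_thresh=0.7):
    """Detect fingertip candidates on a closed hand contour via K-cosine.

    contour    : border points in tracing order, e.g. the (N, 1, 2) array
                 returned by cv2.findContours above
    k          : offset to the two neighbors used for the angle test
    cos_thresh : minimum cosine; sharp convexities give values near 1
    """
    pts = np.asarray(contour, dtype=np.float64).reshape(-1, 2)
    n = len(pts)
    idx = np.arange(n)
    # Vectors from each point P_i to its k-th neighbors along the border.
    a = pts[(idx - k) % n] - pts
    b = pts[(idx + k) % n] - pts
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)

    tips = []
    for i in range(n):
        # Keep sharp corners that dominate their k-neighborhood
        # (a coarse non-maximum suppression).
        if (cos[i] > cos_thresh and
                cos[i] >= cos[(i - k) % n] and cos[i] >= cos[(i + k) % n]):
            tips.append(tuple(pts[i].astype(int)))
    return tips
```

Note that the valleys between fingers are just as sharp as the tips and also pass the cosine test; a practical system filters them out, for example by keeping only points that lie farther from the palm center than their contour neighbors.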

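Finally, a sketch of the virtual-screen mapping and the three mouse functions. The virtual-screen rectangle, the gesture-to-action assignment (one fingertip moves the cursor, two trigger a left click, three a right click), and the use of the PyAutoGUI library are all illustrative assumptions rather than the paper's design.

```python
import pyautogui  # cross-platform mouse control; an assumed stand-in

# Virtual-screen rectangle inside the RGB image (illustrative values):
# fingertip positions inside this box are mapped linearly to the desktop.
VS_LEFT, VS_TOP, VS_WIDTH, VS_HEIGHT = 200, 100, 640, 360
SCREEN_W, SCREEN_H = pyautogui.size()

def map_to_screen(tip_x, tip_y):
    """Linearly map a fingertip pixel in the virtual screen to desktop coordinates."""
    u = min(max((tip_x - VS_LEFT) / VS_WIDTH, 0.0), 1.0)
    v = min(max((tip_y - VS_TOP) / VS_HEIGHT, 0.0), 1.0)
    return int(u * (SCREEN_W - 1)), int(v * (SCREEN_H - 1))

def apply_mouse_function(tips):
    """Drive the three mouse functions from the detected fingertips.

    Assumed gesture mapping: one fingertip moves the cursor, two trigger
    a left click, three a right click. A real system would debounce the
    clicks across frames rather than firing on every detection.
    """
    if len(tips) == 1:
        pyautogui.moveTo(*map_to_screen(*tips[0]))
    elif len(tips) == 2:
        pyautogui.click(button='left')
    elif len(tips) == 3:
        pyautogui.click(button='right')
```

Mapping through a fixed virtual-screen rectangle, rather than using the whole camera frame, lets small fingertip motions cover the full desktop and keeps the cursor steady near the frame edges.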