The main purpose of studying human-computer interaction (HCI) is to develop techniques that make the way users interact with their computers more intuitive. Physical devices such as the mouse and keyboard hinder the intuitiveness and naturalness of the interface, as they place a strong barrier between the user and the computer. With the development of ubiquitous computing, interaction with a personal computer limited to just a keyboard and mouse is no longer sufficient. Being able to interact with a system naturally is becoming ever more important in many fields of HCI.
Direct use of the hands as an input device is an attractive way to provide natural human-computer interaction, compared with traditional text-based and graphical user interfaces. Although the market for hand gesture-based interface design is huge, building a robust hand gesture recognition system remains a challenging problem for traditional vision-based approaches. A hand gesture recognition system that can efficiently track both static and dynamic hand gestures would therefore give users an intuitive and natural interface to their computers. Such a system translates each detected gesture into an action, such as opening a website or launching an application, with very minimal hardware.
Another way to make interaction more intuitive is through gaze gestures, using a head-mounted display (HMD): a portable interactive display device that can track eye movement as a means of interaction. This technique is highly effective and nearly effortless for the user, because humans can freely control their eye movements. Hence eye-tracking technology can serve as a method for HCI.
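The gesture-to-action mapping mentioned above can be sketched as a simple dispatch table. This is a minimal illustration, not the paper's implementation; the gesture labels, URL, and launched program are all hypothetical placeholders.

```python
import subprocess
import webbrowser

# Hypothetical bindings: labels a recognizer might emit, mapped to
# user-chosen actions (illustrative names, not from the source paper).
GESTURE_ACTIONS = {
    "palm": lambda: webbrowser.open("https://example.com"),   # open a website
    "fist": lambda: subprocess.Popen(["notepad.exe"]),        # launch an application
    "two_fingers": lambda: print("volume up"),                # adjust a setting
}

def dispatch(gesture_label):
    """Run the action bound to a recognized gesture; ignore unknown labels."""
    action = GESTURE_ACTIONS.get(gesture_label)
    if action is None:
        return False  # unrecognized gesture: do nothing
    action()
    return True
```

Because the mapping is just a dictionary, gestures stay fully customizable: rebinding a gesture is a one-line change.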
At present, HCI is well established for gesture and voice input, but these methods are unsuitable when both hands are occupied or in environments where speech is not an option. A simpler and more effective way to approach HCI with HMDs is therefore crucial. The system described here achieves HMD-based gaze interaction by using the HMD's webcam to detect and track the user's gaze direction in real time at close range and to analyze the user's intent based on gaze. In recent years, HCI research based on gaze gestures has emerged and is growing rapidly.
Methodology For Hand Gesture Recognition For Human-Computer Interaction: When the user makes a gesture, the system instantly captures an image of the hand with its camera module. The image is converted to greyscale, then processed for noise removal and smoothing. Otsu binarization automatically calculates a threshold value from the image histogram of a bimodal image (an image whose histogram has two peaks). Thresholding with this value converts the processed greyscale image into a binary image: every pixel becomes either black or white depending on whether its value is below or above the threshold, which improves accuracy. Contour extraction is then performed for object detection, and the convex hull of the hand contour is found along with its convexity defects. Gestures are recognized based on these defects. For gestures like an open palm or a fist, which produce no convexity defects, a Haar cascade classifier is used instead, trained on a collection of positive images (a minimum of 10 original images taken under different lighting conditions and angles). Each recognized gesture is then mapped to an action, and finally the application mapped to that gesture is launched.
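The Otsu binarization step described above can be sketched in pure NumPy: the threshold is chosen as the grey level that maximizes the between-class variance of the two histogram peaks, and every pixel is then forced to pure black or white. This is a minimal sketch of the standard algorithm, not the paper's code.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                       # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue                               # one class empty: skip
        mu0 = (np.arange(t) * prob[:t]).sum() / w0         # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def binarize(gray):
    """Convert a greyscale image to pure black/white using the Otsu threshold."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8) * 255
```

In practice this is a one-liner with OpenCV (`cv2.threshold` with the `THRESH_OTSU` flag), and the subsequent contour, convex hull, and convexity-defect steps map to `cv2.findContours`, `cv2.convexHull`, and `cv2.convexityDefects`.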
Advantages:
- It is an intuitive and natural way of interaction.
- More user-friendly.
- Recognizes both static and dynamic hand movements.
- Fast and sufficiently reliable as a recognition system.
- Easy to implement in real-time systems.
- These gestures are customizable, and any task can be assigned to each gesture.
- It has minimal hardware requirements.
- Low cost.
Limitations:
- Recognition accuracy drops if the background is complex.
- Irrelevant objects held with the hand can mislead the recognition system.
- The system may require the hand to be held vertically, with the fingers pointing directly at the camera.
- The performance of this system drops as the distance between user and camera increases.
- Ambient light affects color detection, reducing system performance.
- It still doesn’t produce an interface that can replace physical controllers.
Methodology For Gaze Gestures And Their Applications In Human-Computer Interaction With a Head-Mounted Display: This method uses eye-tracking, a technique for measuring the gaze point of the human eyes and their degree of movement relative to head pose. The system achieves an HMD-based gaze interaction style that detects and tracks the human gaze direction in real time at close range. The process starts by photographing the eye with a near-eye camera integrated into the HMD in order to calculate the gaze direction. The system customizes the range of the head pose distribution so that head pose information is available directly; data are collected over this range, consistent with the images captured by the HMD. Each eye image is magnified by some factor with the pupil as the center, the pupil coordinates are taken as the center coordinates, the number of pixels is randomly reduced, and finally Gaussian filtering is applied. Two deep convolutional neural network modules then classify the image's gaze trajectories; the network is trained on almost ten thousand gaze trajectories collected from various individuals. Because the feature distributions of synthetic images differ from those of real images, learning from synthetic images alone may not achieve the expected performance. To bridge this gap, the network starts from a model pre-trained on a large dataset and then trains the resulting model on real data, which solves the data distribution problem and enhances recognition while retaining the labeling information. The resulting varieties of gaze gestures are finally mapped to the specific functions the user opts for.
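The pupil-centred preprocessing described above can be sketched as follows: crop a patch around the detected pupil coordinates, then apply Gaussian filtering before feeding the patch to the classifier. This is a minimal NumPy sketch under assumed patch sizes and kernel parameters, not the authors' pipeline; the pupil coordinates are assumed to come from a separate detector.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def crop_around_pupil(image, cx, cy, half=32):
    """Crop a square patch centred on the detected pupil coordinates."""
    h, w = image.shape
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    return image[y0:y1, x0:x1]

def smooth(image, kernel):
    """Naive 'same'-size 2-D convolution with zero padding (Gaussian filtering)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

The filtered patch would then be passed to the two CNN modules for gaze-trajectory classification; in a real system the loop-based convolution would be replaced by an optimized library call.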
Advantages:
- This system is very robust and user-friendly.
- Very helpful in situations where hands are preoccupied, and speech is not an option.
- Eye gaze movement is significantly faster than any other gesture movements.
- This technique collects large amounts of accurate and precise data for gesture recognition.
- Enhanced accuracy of recognition with the help of neural network.
- Uses two neural networks in parallel to map different features to ensure consistency between the obtained results.
- Can adapt to various lighting conditions, indoor or outdoor.
- Can act as a means of interface for disabled people.
Limitations:
- New users may tend to draw inaccurate patterns.
- Requires some adaptation time for the user to get used to the interface.
- Relative positions of eyes and camera vary from person to person.
- Contact lenses, glasses can all impact camera’s ability to track eye movements.
- Eye-tracking and training the data set for the neural net can be expensive.
Use Of These Techniques In Real Time Scenarios
1. Hand Gesture: There is huge potential for new kinds of interface designs that refine how we have traditionally interacted with computers, and a lot of research is going into their development. There is a large market for hand gesture-based interfaces. Such an interface has many practical applications where the user can act without needing to reach for a physical controlling device. Interfaces of this kind are already in use today: for example, during product launches presenters use hand gestures to advance to the next slide instead of holding a physical clicker. This kind of interface can be very effective for small smart mobile devices and wearables such as smartwatches, where the interface is limited and the small screen often restricts users to the touch screen alone. Hand gesture recognition can be implemented in such devices with the help of infrared sensors. Gestures can then drive basic but frequent functions such as launching certain applications, raising or lowering the volume, skipping songs, or calling a preset contact, all of which are easy to implement. This creates a convenient interface for devices whose limited physical area would make a visual interface cumbersome to interact with.
2. Gaze Gesture: This technology is in high demand in other interface paradigms such as augmented reality and virtual reality (AR and VR), where eye-movement tracking drives various functions. Traditional interaction methods can be unsuitable in environments where the hands are preoccupied and speech is not an option. Gaze gestures fit here because these applications are completely free of physical interaction devices, apart from the head mount, which is used only to track the user's eye movement. Since such devices need a free and convenient form of interaction, which a gaze gesture interface offers, this technology can be used extensively in these fields. Technologies of this kind are already in use; Microsoft HoloLens is one example.
In smart devices such as smart lenses and smart spectacles that display information, gaze gestures can serve as an interface for navigation and other user-specified functions. Because this method tracks the eye with high accuracy, it can also be used as a drawing tool in a virtual environment. It can likewise detect user drowsiness and warn the user if desired. Another scenario where tracking the user's eye matters is commerce, where it can detect which product has grabbed the attention of a user watching an advertisement.
References:
- Haria, A., Subramanian, A., Asokkumar, N., Poddar, S., & Nayak, J. S. (2017). Hand Gesture Recognition for Human-Computer Interaction. Procedia Computer Science, 115, 367–374.
- Chen, W. X., Cui, X. Y., Zheng, J., Zhang, J. M., Chen, S., & Yao, Y. D. (2019). Gaze Gestures and Their Applications in Human-Computer Interaction with a Head-Mounted Display. arXiv.