Enhancing accessibility and assistance through eye gaze tracking: The iAlert approach

Poornima M 1, Paramesha R 1 and Rohith M N 2, *

1 Department of Electronics and Communication, Govt Polytechnic Mirle, India.
2 Department of Electronics and Communication Engineering, JSS Science and Technology University, India.
 
Research Article
International Journal of Science and Research Archive, 2024, 13(02), 3735-3741.
Article DOI: 10.30574/ijsra.2024.13.2.2625
Publication history: 
Received on 16 November 2024; revised on 28 December 2024; accepted on 30 December 2024
 
Abstract: 
The growing demand for assistive technologies for individuals with physical disabilities has led to significant advancements in human-computer interaction (HCI). Eye gaze tracking, a promising input modality, offers a non-invasive and intuitive way to enhance accessibility and interaction. This paper presents iAlert, an innovative eye gaze-based alert system designed to provide timely assistance to individuals with limited mobility or communication abilities. By analyzing eye movements, iAlert aims to detect user intent and trigger appropriate responses, thereby facilitating improved interaction with the environment, enhancing safety, and offering real-time assistance in everyday tasks. This system holds great potential for improving the quality of life for individuals with physical impairments, the elderly, and others who rely on assistive technologies.
The proposed methodology integrates eye gaze tracking with machine learning algorithms to build an intelligent alert system. Eye gaze data is captured using specialized hardware such as infrared sensors or cameras, which track the position and movement of the eyes. These gaze patterns are then pre-processed to remove noise and identify key features, such as fixations, saccades, and gaze direction. Machine learning models, specifically Convolutional Neural Networks (CNNs), are employed to classify the gaze data and predict user intent, allowing for real-time decision-making. A Support Vector Machine (SVM) classifier is used for detecting specific gestures or commands, such as a blink or a prolonged gaze, which are mapped to particular actions (e.g., triggering an alert, controlling a device, or communicating a need). The system adapts to user-specific behaviors over time through continuous learning, ensuring personalized assistance.
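As an illustration of the gesture-detection step, the following minimal Python sketch shows how a deliberate long blink might be separated from ordinary fixation using an SVM on simple window-level gaze features. The feature set, thresholds, window length, and synthetic data are illustrative assumptions for exposition (here built on scikit-learn and NumPy), not the authors' implementation.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def gaze_features(window):
    """Summarize a window of (x, y, pupil_visible) samples into features:
    mean gaze position, gaze dispersion (a fixation-vs-saccade proxy),
    and the fraction of frames with the pupil occluded (a blink proxy)."""
    xy = window[:, :2]
    visible = window[:, 2]
    return np.array([
        xy[:, 0].mean(), xy[:, 1].mean(),  # mean gaze position
        xy.std(axis=0).mean(),             # dispersion: low => steady fixation
        1.0 - visible.mean(),              # occlusion ratio: high => blink
    ])

def synth_window(blink):
    """Generate a synthetic 30-frame gaze window (~1 s at 30 Hz, assumed)."""
    xy = rng.normal(loc=0.5, scale=0.02, size=(30, 2))
    visible = np.ones(30)
    if blink:
        visible[10:25] = 0.0  # pupil lost for ~0.5 s => deliberate long blink
    return np.column_stack([xy, visible])

# Build a labeled training set: 0 = normal gaze, 1 = blink command.
X = np.array([gaze_features(synth_window(blink=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# At runtime, a positive prediction on the latest window would map to an action,
# e.g., triggering an alert.
if clf.predict([gaze_features(synth_window(blink=True))])[0] == 1:
    print("Blink command detected -> trigger alert")

In a full pipeline along the lines described above, the CNN would operate on raw eye images or gaze traces to infer intent, while a lightweight classifier such as this SVM handles discrete command gestures.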
The expected results from the iAlert system are twofold: first, a high level of accuracy in detecting and responding to user gaze commands, and second, an enhanced user experience in terms of real-time responsiveness and adaptability. The system is expected to achieve over 90% accuracy in identifying user intent, especially in controlled environments. The integration of machine learning algorithms such as CNNs and SVMs is critical for ensuring robust classification of eye gaze data, reducing the error rate in real-time applications. Moreover, the machine learning approach allows the system to continuously improve through adaptive learning, providing more accurate and personalized responses over time. The importance of these algorithms lies in their ability to handle complex, non-linear relationships within gaze data, enabling the system to function effectively across different users and contexts. By leveraging deep learning techniques, the iAlert system can scale to a wide range of assistive applications, making it a valuable tool for enhancing independence and accessibility for individuals with physical challenges.
 
Keywords: 
Eye Gaze Tracking; Assistive Technology; Human-Computer Interaction (HCI); iAlert System; Machine Learning; Convolutional Neural Networks (CNN); Support Vector Machine (SVM); Real-time Assistance
 