148429 - Predicting Human Workload During Context-Aware Action Modifications for AI Assistance and Imitation Learning
This research explores the prediction of human context-aware action modifications in visually demanding tasks to support the design of AI assistance and the augmentation of imitation learning algorithms. For AI assistance, predicting human workload signals when a human is overwhelmed by a problem and needs help, allowing us to design an AI assistant that provides appropriate feedback at the right time and measures how effective that feedback is. For imitation learning, training an offline prediction network to estimate human workload from image information eliminates the need for electroencephalograms (EEGs), which are impractical for everyday use. The workload prediction can then be used to prioritize specific actions or moments in the game to aid the learning process. In this paper, we use the benchmark Atari environment to remove the need for domain expertise and to emphasize context-aware human decision-making. We extract connected-component-labeling features from the frames and human eye gaze to predict when a human performs a context-aware action modification, for both frame-by-frame and time-domain applications. We train an offline network to predict important moments in games and when the user will take longer to make an action modification. Finally, we identify specific environmental scenarios in which eye gaze provides a wealth of information and improves real-time workload classification.
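As a minimal sketch of the kind of feature extraction the abstract describes (connected-component labeling on game frames combined with a gaze point), the Python snippet below builds a fixed-length feature vector from a grayscale Atari frame and a single (x, y) gaze coordinate using scipy's labeling routine. The function name, the background threshold, and the choice of per-object features (area, centroid, gaze-to-object distance) are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np
from scipy import ndimage


def connected_component_features(frame_gray, gaze_xy, background_thresh=20, max_objects=8):
    """Hypothetical feature extractor: connected-component statistics from one
    grayscale Atari frame plus distances from each object to the gaze point."""
    # Foreground mask: pixels brighter than the (assumed dark) background.
    mask = frame_gray > background_thresh

    # Label connected foreground regions with 8-connectivity.
    labels, num_objects = ndimage.label(mask, structure=np.ones((3, 3)))
    index = list(range(1, num_objects + 1))

    # Per-object area and centroid.
    areas = np.atleast_1d(ndimage.sum(mask, labels, index=index))
    centroids = ndimage.center_of_mass(mask, labels, index=index)

    # Keep only the largest objects so the feature vector has a fixed length.
    order = np.argsort(areas)[::-1][:max_objects]
    gx, gy = gaze_xy
    feats = [float(num_objects)]
    for i in order:
        cy, cx = centroids[i]
        dist = float(np.hypot(cx - gx, cy - gy))  # gaze-to-object distance
        feats.extend([float(areas[i]), float(cx), float(cy), dist])

    # Zero-pad when the frame contains fewer than max_objects components.
    feats.extend([0.0] * (1 + 4 * max_objects - len(feats)))
    return np.asarray(feats, dtype=np.float32)
```

Feature vectors of this form could be stacked frame-by-frame or windowed over time before being fed to an offline prediction network; the abstract does not specify the network architecture, so none is assumed here.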