Affective states play a major role in students’ learning. When students are appropriately challenged, they may feel slightly confused, and such confusion can benefit learning by keeping them engaged. However, when a task is too challenging and the confusion cannot be resolved, students may become discouraged and disengage from the task. In physical classrooms, teachers detect such affective states and adjust their instruction accordingly. In interactive digital learning environments, however, Arguel, Lockyer, Lipp, Lodge, and Kennedy (2017) argued that affective states are currently not detected, even though there is a growing call for more adaptive systems.
The authors reviewed several methodologies for detecting confusion:

- Self-reports
- Behavioral responses in the form of facial expressions, facial electromyography, posture and conversational cues, visual exploration, and learner-computer interaction analysis
- Physiological responses in the form of electrodermal activity, heart rate and heart-rate variability, brain imaging, and pupillometry
The reviewed methods differed in sensitivity, specificity, cost, and their potential interference with the learning process; the authors therefore suggested taking a multimodal approach. Ideally, a method should be technically feasible, allow remote collection of data, and not intrude on the learning process. At present, interaction data generated as students work within digital learning environments fulfil these criteria. However, more studies are needed to determine learners’ state of confusion through predictive modelling of interaction data. A first step towards an adaptive system would be to reliably detect affective states related to learning (e.g., confusion) so that systems can be designed to adaptively support learning.
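To make the idea of modelling confusion from interaction data concrete, the sketch below shows a deliberately naive rule-based detector over summary features of a learner's recent interactions. The feature names (response latency, repeated errors, hint requests) and thresholds are illustrative assumptions, not measures proposed by Arguel et al.; a real system would learn such a mapping from labelled data rather than hand-set rules.

```python
from dataclasses import dataclass


@dataclass
class InteractionWindow:
    """Summary of a learner's recent interactions (hypothetical features)."""
    mean_response_time_s: float  # average seconds per response in the window
    repeated_errors: int         # consecutive incorrect attempts
    hint_requests: int           # hints requested in the window


def likely_confused(w: InteractionWindow,
                    time_threshold_s: float = 30.0,
                    stuck_threshold: int = 3) -> bool:
    """Flag possible unresolved confusion: the learner is both slow
    and stuck (repeated errors or heavy hint use). Thresholds are
    illustrative placeholders, not empirically derived values."""
    slow = w.mean_response_time_s > time_threshold_s
    stuck = (w.repeated_errors >= stuck_threshold
             or w.hint_requests >= stuck_threshold)
    return slow and stuck


# A learner taking long and failing repeatedly is flagged;
# a quick, mostly correct learner is not.
print(likely_confused(InteractionWindow(45.0, 4, 1)))  # → True
print(likely_confused(InteractionWindow(12.0, 1, 0)))  # → False
```

In practice, such hand-tuned rules would be replaced by a classifier trained on interaction logs labelled with self-reported or observed confusion, which is exactly the predictive-modelling work the authors call for.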