The hidden workload: Student data work in multimodal algorithmic evaluations
Escuela Superior Politécnica del Litoral
Image: The presentation tutoring rooms on campus (RAP In-Person System) are equipped with a Raspberry Pi camera (a), an ambient microphone (b), acoustic foam panels (c), a rear-projection screen (d) that displays a virtual audience during presentations, and a projection area for visual resources (f). Assistants in a control room (e) help students with the technical and logistical aspects of their presentations (e.g., starting and ending recordings and collecting slides).
Credit: Gonzalo Mendez/ESPOL
As universities increasingly adopt digital tools and automated analytics systems, attention often centers on these tools’ gains in accuracy and efficiency. Far less visible, however, is another critical dimension: the additional work students must do to produce, organize, and interpret their own data within these systems.
A research team at ESPOL (Escuela Superior Politécnica del Litoral) examined this “invisible effort” through a mixed-methods study combining behavioral observations, surveys, and interviews with 43 students who used the collocated and mobile versions of RAP, an automated presentation feedback system. RAP, used institutionally since 2017, records audio and video to analyze features such as posture, gaze, vocal volume, and slide design during oral presentations. While research increasingly positions analytics tools like RAP as especially promising for supporting complex skills such as collaboration and public speaking, the study found that students must perform substantial additional “data work” that is not part of the curriculum yet demands considerable time, attention, and cognitive effort.
The researchers identified two main forms of this hidden workload.
1. Speculating about what counts as “good data quality”
Particularly in the mobile version of RAP, students focused not only on delivering a strong presentation but also on “recording for the algorithm.” Participants adjusted their behavior and environment according to what they believed the system required: they searched for quiet, well-lit locations, managed camera framing and backgrounds, rearranged physical spaces, and repeated recordings until they felt the system would rate the attempt well. In practice, this sometimes shifted students’ priorities away from improving their communication skills and toward producing higher-quality data for automated analysis.
2. Making sense of unclear automated reports
A second layer of effort emerged when students interpreted the system-generated reports. Many encountered mismatches between the automated feedback and their own experience of delivering and recording the presentation. To resolve these discrepancies, students engaged in additional interpretive work: they contextualized results by considering data-altering factors such as background noise, camera placement, or lighting conditions, and they questioned whether an algorithm could fairly evaluate real-world presentation performance.
These forms of additional workload revealed a critical social side effect of algorithmic systems such as RAP: these interpretive and corrective forms of labor are typically performed in isolation. Platforms like RAP tend to individualize the feedback experience, reducing opportunities for community-supported data practices, such as discussing results with peers, sharing recording strategies, or collaboratively interpreting feedback.
Based on these findings, the researchers recommend that institutions pair analytics technologies with structured learning communities. Rather than leaving students alone with automated reports, they suggest creating spaces where learners can jointly review presentations, discuss feedback, and share improvement strategies without over-focusing on scores.
Specific recommendations include:
- Providing trained human support, such as volunteer students and instructors with expertise in public speaking and system operation, to facilitate group feedback sessions.
- Being transparent about what the algorithm can and cannot reliably assess, reducing unproductive attempts to “optimize the metric” through trial and error.
- Offering real-time practical guidance — such as prompts about lighting or camera position — so data quality does not depend on guesswork or added cognitive burden.
- Strengthening critical data and algorithm literacy. Because RAP is institutionally deployed, many students did not question the workload it imposed, how their recordings might be reused, or the privacy implications of capturing audio and video in home environments. The authors call for more open discussion of ethical and social issues, including privacy, data reuse, and algorithmic bias.
Overall, the study concludes that the educational value of algorithmic assessment systems depends not only on analytical performance but also on how they fit into students’ real learning contexts. When producing and interpreting data requires significant extra effort, that burden should be explicitly recognized and designed for — so that technology supports learning rather than adding invisible pressure.