Development and Usability Evaluation of a Desktop Software Application for Pain Assessment in Infants

Amos S Hundert¹, Marsha Campbell-Yeo¹, Harrison R Brook², Lori M Wozney¹, Kelly O'Connor¹

¹ Canada, ² United Kingdom

Background and Aims: To facilitate infant pain assessment in a research context, coding of pain indicators is often completed from close-up video recordings. Modern software capabilities present new opportunities to increase efficiency and data quality compared to existing software. The aims of this research were: (1) to develop software, called Pain Assessment in Neonates (PAiN), to support coding of pain in infants from video recordings; (2) to evaluate the usability of PAiN, in terms of effectiveness, efficiency, and satisfaction, among novice and expert users; and (3) to compare the efficiency of, and satisfaction with, PAiN against existing software for coding infant pain among expert users.

Methods: A quantitative usability testing approach was used with two participant groups, representing novice and expert end-users. Testing included an observed session in which each participant completed a pain assessment coding task, followed by administration of the Post-Study System Usability Questionnaire (PSSUQ) and the Desirability Toolkit. For comparison, the expert group also evaluated the usability of the coding software currently in use.

Results: Twelve novice and six expert users participated. Novice users committed 14 non-critical navigational errors, and expert users committed 6. Among expert users, the median time to complete the coding task was 28.6 minutes (range 25.4–30.1) in PAiN, compared to 46.5 minutes (range 35.1–109.2) with the existing software. Lower PSSUQ scores indicate a more positive response. The mean overall PSSUQ score did not differ significantly between novice (1.89) and expert (1.40) users (p = .0917). Among expert users, the overall score for the existing software (4.83) was significantly higher than for PAiN (1.40; p = .0277).

Conclusions: Participants in both groups were highly satisfied with PAiN, and expert users were more satisfied and more efficient using PAiN than the existing software. PAiN will now be implemented as a research tool.