9–13 Jun 2025
Brighton, UK
Europe/London timezone

AI judge assistant for recognition of jump rope skills in videos

12 Jun 2025, 11:30
5m
Concert Hall

Lightning Talk (Lightning Talks: Second Strike)

Speaker

Mike De Decker (HOGENT)

Description

Judging jump rope freestyle routines at the highest competitive level has become increasingly challenging due to the evolution of the sport: both the number of skills in a routine and the speed at which they are executed keep increasing. This is particularly evident in so-called Double Dutch Freestyle routines, which is why these freestyles are scored through a combination of live and delayed evaluation. The creativity of a routine (including its variation and musicality) is scored in real time, but the appropriate difficulty level is assigned based on a recording of the routine replayed at half speed right after it is performed. Even though this helps reduce errors in difficulty scoring, a certain variability in the assigned scores persists.

To make scoring in gymnastics more objective, Fujitsu has collaborated with the International Gymnastics Federation since 2017 to develop a Jury Support System (JSS). The system debuted at the 2019 Artistic Gymnastics World Championships, a first in the field. Since then, even more accessible AI tools, better computational resources, and pre-trained models have emerged. Inspired by this example and others, such as sign-language recognition or NextJump's speed counter (2023), which outperforms judges in counting speed steps, this study sets out to explore the creation of an AI jump rope assistant capable of recognizing skills from video recordings, which differs from the sensor input the JSS uses.

The proposed approach is divided into three independent modules, a simple but rather distinctive composition. The first module localizes the jumpers in the frame, since most available recordings are not fully zoomed in or shot with a static camera. The jumpers can then be cropped, sparing computational resources. Ultralytics' latest YOLO version provides satisfactory cropping results. While these are not yet perfect, the focus is on the second and third modules: skill segmentation and skill recognition.
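As a rough illustration, the localization module could look like the sketch below. It assumes the ultralytics Python package, a generic pretrained checkpoint (yolov8n.pt, where COCO class 0 is "person"), and a hypothetical input file routine.mp4; the exact model version and weights used in the study may differ.

```python
# Minimal sketch of the localization/cropping step (assumptions above).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained weights; COCO class 0 = person

def crop_jumpers(frame, conf=0.5, margin=20):
    """Detect people in one frame and return padded crops around each box."""
    results = model(frame, classes=[0], conf=conf, verbose=False)
    h, w = frame.shape[:2]
    crops = []
    for box in results[0].boxes.xyxy.cpu().numpy():
        x1, y1, x2, y2 = box.astype(int)
        # Pad the box a little so swinging ropes are not cut off.
        x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
        x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
        crops.append(frame[y1:y2, x1:x2])
    return crops

cap = cv2.VideoCapture("routine.mp4")  # hypothetical recording
ok, frame = cap.read()
if ok:
    jumpers = crop_jumpers(frame)
```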
By applying state-of-the-art action recognition models such as temporal convolutional networks, attention-based convolutional networks, or video vision transformers, full recordings can be split into individual skills, in most cases by detecting when a jumper leaves or lands on the floor. Each resulting segment should then contain exactly one identifiable skill, which can be classified by the same model or by whichever model performs best at skill recognition.
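The sketch below illustrates this two-stage idea under simplifying assumptions: a small temporal convolutional network scores pre-computed per-frame features (e.g. pooled embeddings of the crops) as boundary or not, and consecutive frames between boundaries are grouped into segments. The feature dimension, layer sizes, and two-class boundary setup are illustrative, not the study's actual architecture.

```python
# Minimal sketch: per-frame boundary prediction, then grouping into segments.
import torch
import torch.nn as nn

class TemporalConvNet(nn.Module):
    """1D convolutions over time: per-frame features in, per-frame scores out."""
    def __init__(self, feat_dim=512, hidden=128, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=4, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, num_classes, kernel_size=1),
        )

    def forward(self, x):                    # x: (batch, time, feat_dim)
        x = x.transpose(1, 2)                # -> (batch, feat_dim, time)
        return self.net(x).transpose(1, 2)   # -> (batch, time, num_classes)

def segments_from_boundaries(is_boundary):
    """Group frames between predicted take-off/landing boundaries."""
    segments, start = [], 0
    for t, b in enumerate(is_boundary):
        if b and t > start:
            segments.append((start, t))
            start = t
    segments.append((start, len(is_boundary)))
    return segments

# Example: 300 frames of 512-d features from one routine (random stand-in).
features = torch.randn(1, 300, 512)
boundary_model = TemporalConvNet(num_classes=2)   # boundary vs. not
is_boundary = boundary_model(features).argmax(-1).squeeze(0).tolist()
skill_segments = segments_from_boundaries(is_boundary)
```

Each resulting (start, end) slice would then be passed to the skill classifier, which could be the same kind of network with one output per skill class or a separate, stronger recognition model.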

If this works, it would be useful not only for jump rope freestyles but also for other judged competitions such as gymnastics routines, figure skating, or synchronized swimming. It could also be used or adapted in educational settings, movement analysis research, or rehabilitation centers tracking a patient's progress.
Thanks to this modularity, training can be distributed across systems, and if a better model emerges for one module, that module can be replaced independently.
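One way to express this modularity in code is with small interfaces that each module implements, so that any implementation can be swapped without touching the rest of the pipeline. The names below (Localizer, Segmenter, Recognizer) are illustrative, not taken from the study.

```python
# Sketch of swappable module interfaces for the three-stage pipeline.
from typing import Protocol, Sequence

class Localizer(Protocol):
    def crop(self, frames: Sequence) -> Sequence: ...

class Segmenter(Protocol):
    def split(self, frames: Sequence) -> Sequence[tuple[int, int]]: ...

class Recognizer(Protocol):
    def label(self, segment: Sequence) -> str: ...

def judge(frames, localizer: Localizer, segmenter: Segmenter,
          recognizer: Recognizer) -> list[str]:
    """Run the full pipeline; any module can be replaced independently."""
    crops = localizer.crop(frames)
    return [recognizer.label(crops[a:b]) for a, b in segmenter.split(crops)]
```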
