SIGNATURE-BSL
Scalable Intelligent Generation, detectioN and Animation of Three-dimensional Users for Real-time Engagement in BSL
SIGNATURE-BSL explores the use of AI, motion capture, and volumetric video to create natural, trustworthy 3D virtual humans for detecting, generating, and delivering British Sign Language.
Overview
This project aims to map out what is needed to develop AI-enabled services for British Sign Language (BSL) detection, generation, and translation, delivered through naturalistic animated 3D virtual humans.
Despite rapid progress in AI and immersive technologies, BSL presents a distinctive triple challenge. First, BSL comprises an estimated 20,000-100,000 signs that are combined productively through space and time to convey meaning, resulting in a high level of linguistic complexity. Second, effective machine translation for sign language requires large volumes of high-quality training data, which must be captured laboriously using high-fidelity motion capture and facial tracking, with transitions and combinations inflating data demands. Third, virtual humans used to deliver BSL translations often suffer from the 'uncanny valley' effect, where near-human representations create discomfort and reduce user trust, posing a significant barrier to acceptance.
The project’s overarching goal is to leverage the new OpenXR studio facilities to establish a robust proof of concept for AI-enabled BSL detection, generation, and translation. Three objectives structure the work. First, the project will use the motion capture studio experimentally to collect additional high-quality hand, body, and facial expression data from BSL experts, expanding the existing inventory of recorded signs. Second, it will investigate efficient and future-proof methods for serving captured movement data programmatically to animate virtual humans, implemented through a Unity-based proof-of-concept application that demonstrates replay, control, and integration within XR environments. Third, the project will use the volumetric video studio to create 4D recordings of BSL signers and assess the feasibility of a novel mesh-based transformer architecture tailored to spatio-temporal 3D data, offering a complementary route to future automated recognition and generation.
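To make the second objective concrete, below is a minimal Python sketch of what serving captured movement data programmatically might look like: frames of joint rotations replayed as a JSON stream that a Unity or WebXR client could consume. The Frame layout, joint names, and frame rate are illustrative assumptions, not project code.

```python
# Minimal sketch: replay captured motion frames as a JSON stream for a
# downstream animation client. "Frame", the joint names, and the 60 fps
# rate are illustrative assumptions, not the project's actual format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Frame:
    timestamp: float                # seconds since the start of the clip
    joints: dict[str, list[float]]  # joint name -> quaternion [x, y, z, w]

def replay(frames: list[Frame], fps: float = 60.0):
    """Yield one JSON-encoded frame per tick at the capture frame rate."""
    for frame in frames:
        yield json.dumps(asdict(frame))
        time.sleep(1.0 / fps)       # a real transport would pace delivery properly

if __name__ == "__main__":
    clip = [
        Frame(timestamp=i / 60.0, joints={"r_wrist": [0.0, 0.0, 0.0, 1.0]})
        for i in range(3)
    ]
    for message in replay(clip):
        print(message)              # stand-in for a socket/WebSocket send
```

Similarly, the mesh-based transformer in the third objective could plausibly take the following shape: per-frame mesh vertices are embedded and pooled into one token per frame, and a temporal transformer encoder attends across the token sequence. This is a hedged sketch of one possible reading of the architecture; all dimensions, the pooling scheme, and the classification head are assumptions.

```python
# Minimal sketch of a transformer over spatio-temporal 3D mesh data.
# Temporal positional encodings are omitted for brevity; a full model
# would need them to preserve frame order.
import torch
import torch.nn as nn

class MeshSequenceTransformer(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.vertex_embed = nn.Linear(3, d_model)      # per-vertex xyz -> feature
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, meshes: torch.Tensor) -> torch.Tensor:
        # meshes: (batch, frames, vertices, 3)
        tokens = self.vertex_embed(meshes).mean(dim=2)  # pool vertices -> (B, T, D)
        encoded = self.temporal(tokens)                 # attend across time
        return self.head(encoded.mean(dim=1))           # clip-level sign logits

if __name__ == "__main__":
    model = MeshSequenceTransformer(num_classes=10)
    clip = torch.randn(2, 30, 500, 3)  # 2 clips, 30 frames, 500 vertices each
    print(model(clip).shape)           # torch.Size([2, 10])
```

Mean-pooling vertices into a single token per frame keeps the sketch simple; the architecture envisaged by the project would likely also attend over vertices spatially and exploit mesh connectivity.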
Access to specialist studios, expertise, and infrastructure significantly strengthens current capabilities and accelerates progress toward making sign language more universally available, aligning with IET's 'AI for the greater good' mission. Automated BSL translation delivered through trustworthy virtual humans could have substantial economic, societal, and environmental impact, and this project is a first step toward exploring that potential.
The role of IET
The Institute of Educational Technology (IET) leads this project, providing strategic direction, technical coordination, and academic leadership across all work packages. IET ensures effective integration of motion capture, volumetric video, and XR workflows in the OpenXR studios, while coordinating AI, virtual human, and WebXR-based development activities. Drawing on its expertise in immersive learning, accessibility, and human-centred technology design, IET will ensure the project addresses both technical feasibility and user acceptance. IET will also lead stakeholder engagement, governance, dissemination, and future funding strategy, positioning the work for scale-up through follow-on research projects.
Funders
- HEIF
Partners
- OpenXR Studios