The American Sign Language Knowledge Graph: Infusing ASL Models with Linguistic Knowledge
Jan 22, 2025
Lee Kezar
Nidhi Munikote
Zian Zeng
Zed Sehyr
Naomi Caselli
Jesse Thomason

Abstract
Language models for American Sign Language (ASL) could make language technologies substantially more accessible to those who sign. To train these models on tasks such as isolated sign recognition (ISR) and ASL-to-English translation, datasets provide video examples of ASL signs, facilitating deep learning methods. To improve the generalizability and explainability of these models, we introduce the American Sign Language Knowledge Graph (ASLKG), compiled from twelve sources of expert linguistic knowledge. We use the ASLKG to train neuro-symbolic models for three ASL understanding tasks, achieving 91% accuracy on ISR, 14% accuracy at predicting the semantic features of unseen signs, and 36% accuracy at classifying the topic of YouTube-ASL videos.
Publication
2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics