Ten years ago, Dr. Hamzah Luqman set out to solve a problem that few had seriously tackled: how to recognize and translate Arabic Sign Language using artificial intelligence (AI). Now, as a researcher at KFUPM’s Joint Research Center for Artificial Intelligence, he has turned this mission into a national effort to make communication more accessible for the Deaf community in Saudi Arabia.
At the time, almost no resources existed for training AI to recognize Arabic Sign Language, so the first and hardest challenge was building the datasets needed for that training. Over the years, Dr. Luqman’s team developed three separate datasets for Arabic Sign Language. But as the project matured, their focus narrowed to Saudi Sign Language, where the need for interpreters remains high and the supply is severely limited.
Creating the dataset for this newer phase was both labor-intensive and foundational to the success of any machine learning model. The released dataset, called Isharah, consists of 30,000 video clips, each capturing real people signing in different environments, recorded with regular smartphone cameras rather than lab equipment. That decision was intentional. The team didn’t want to build an AI model that only works in perfect lighting against a plain background; they wanted to create one that functions in the real world, where no two situations look the same.
Another standout feature is the dataset’s structure. Instead of just recording isolated words or gestures, the team captured 2,000 full sentences and annotated them at the word level. That kind of detail makes it possible to train systems that can follow natural sign language in context, a point that many similar projects tend to overlook.
All of this groundwork led to the development of SAMEA, the AI system built to recognize Saudi Sign Language. It has already been tested on signers who were not involved in the training phase, and the early results are promising. SAMEA has proven highly accurate and doesn’t need a sterile environment or specific clothing to work. It’s designed to handle the kinds of unpredictability that exist in everyday life.
The project has drawn attention from national institutions. The Saudi Data and AI Authority (SDAIA) provided support early on, while both Saudi TV and the Saudi Sign Language Interpreters Association helped build the Saudi Sign Language dataset. This collaboration allowed the team to create a product nearing readiness for commercialization. With three patents already granted for the Arabic Sign Language AI models and two more filed for the Saudi Sign Language models, the team will soon release SAMEA for trial use.
SAMEA, however, addresses only one side of a two-part challenge. Recognizing sign language is half the equation. Translation of spoken or written words into sign language is the other. For now, the team has prioritized recognition, given its complexity, but they don’t plan to stop there; translation tools are also in development.
Once launched, the system will be accessible to the public, opening up new opportunities for both the Deaf community and hearing individuals. And, once the model proves itself, the team hopes to expand their research into other Arabic dialects, taking one more step toward a more connected and accessible world.
Related UN Sustainable Development Goals: Good Health and Well-being, Quality Education, and Reduced Inequalities.