Multi-Stream Isolated Sign Language Recognition Based on Finger Features Derived from Pose Data     
Authors (2)
Asst. Prof. Ali AKDAĞ, Tokat Gaziosmanpaşa Üniversitesi, Türkiye
Ömer Kaan Baykan, Konya Teknik Üniversitesi, Türkiye
Article Type: Open Access Original Article
Article Subtype: Full article published in an SSCI-, AHCI-, SCI-, or SCI-Expanded-indexed journal
Journal Name: Electronics (Switzerland)
Journal ISSN: 2079-9292 (WoS- and Scopus-indexed journal)
Journal Indexes: SCI-Expanded
Journal Quartile: Q2
Article Language: English
Publication Date: April 2024
Volume: 13
Issue: 8
DOI: 10.3390/electronics13081591
Article Link: http://dx.doi.org/10.3390/electronics13081591
Abstract
This study introduces a multichannel approach that focuses on the features and configurations of the fingers in isolated sign language recognition. The approach rests on three different types of data derived from finger pose data obtained with MediaPipe, each processed in a separate channel. Using these multichannel data, we train the proposed MultiChannel-MobileNetV2 model to analyze finger movements in detail. The features extracted from all trained models are first reduced in dimensionality with Principal Component Analysis and then combined for classification with a Support Vector Machine. The proposed method additionally processes body and facial information with MobileNetV2. The final sign language recognition method achieves accuracy rates of 97.15%, 95.13%, 99.78%, and 95.37% on the BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL datasets, respectively. These results underscore the generalizability and adaptability of the proposed method and demonstrate its competitiveness with existing studies in the literature.
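The pipeline outlined in the abstract (MediaPipe-based finger pose extraction, per-stream feature learning, PCA reduction, and SVM-based fusion) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the stream contents, feature dimensions, number of PCA components, and the SVM kernel are assumptions, and the helper names (`extract_finger_landmarks`, `fuse_and_classify`) are hypothetical.

```python
# Minimal, illustrative sketch (not the authors' code): extracting finger pose
# data with MediaPipe Hands and fusing per-stream features with PCA + SVM,
# roughly mirroring the pipeline described in the abstract.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.decomposition import PCA
from sklearn.svm import SVC

mp_hands = mp.solutions.hands


def extract_finger_landmarks(frame_bgr, hands):
    """Return a (2, 21, 3) array of (x, y, z) hand landmarks; zeros for missing hands."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    out = np.zeros((2, 21, 3), dtype=np.float32)
    if result.multi_hand_landmarks:
        for i, hand in enumerate(result.multi_hand_landmarks[:2]):
            out[i] = [[lm.x, lm.y, lm.z] for lm in hand.landmark]
    return out


def fuse_and_classify(stream_features, labels, n_components=128):
    """Reduce each stream's features with PCA, concatenate, and train an SVM.

    stream_features: list of (num_samples, feature_dim) arrays, one per trained
    stream (e.g., the finger channels plus the body/face MobileNetV2 stream).
    """
    reduced = []
    for feats in stream_features:
        pca = PCA(n_components=min(n_components, *feats.shape))
        reduced.append(pca.fit_transform(feats))
    fused = np.concatenate(reduced, axis=1)
    clf = SVC(kernel="rbf")  # kernel choice is an assumption
    clf.fit(fused, labels)
    return clf


if __name__ == "__main__":
    # Per-frame landmark extraction would use something like:
    #   with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
    #       landmarks = extract_finger_landmarks(frame_bgr, hands)
    # Below, synthetic features stand in for the outputs of the trained streams.
    rng = np.random.default_rng(0)
    streams = [rng.normal(size=(200, 512)) for _ in range(3)]
    labels = rng.integers(0, 10, size=200)
    model = fuse_and_classify(streams, labels)
```

Reducing each stream with PCA before concatenation keeps the fused feature vector compact, which is one plausible motivation for placing the reduction step ahead of the SVM.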
Keywords
deep learning | feature fusion | sign language recognition