by Ralph Elliott, Helen Cooper, John Glauert, Richard Bowden, François Lefebvre-Albaret
Abstract:
We describe a prototype Search-by-Example or look-up tool for signs, based on a newly developed 1000-concept sign lexicon for four national sign languages (GSL, DGS, LSF, BSL), which includes a spoken-language gloss, a HamNoSys description, and a video for each sign. The look-up tool combines an interactive sign recognition system, supported by Kinect technology, with a real-time sign synthesis system that uses a virtual human signer to present results to the user. The user performs a sign in front of the system and is presented with animations of the signs recognised as most similar. The user also has the option to view any of these signs performed in the other three sign languages. We describe the supporting technology and architecture for this system, and present some preliminary evaluation results.
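To make the lexicon structure concrete, the following Python sketch models one concept entry and the cross-language look-up the abstract describes. It is a rough illustration only: the SignEntry record, the field names, the sample glosses, and the video paths are assumptions for the sake of the example, not the project's actual data model.

# A minimal sketch of the 1000-concept multilingual lexicon described
# above: one record per concept, with a gloss, a HamNoSys description,
# and a video for each of the four sign languages. SignEntry, LEXICON,
# and other_languages() are illustrative assumptions, not the project's
# actual schema.
from dataclasses import dataclass

LANGUAGES = ("GSL", "DGS", "LSF", "BSL")

@dataclass
class SignEntry:
    gloss: str      # spoken-language gloss
    hamnosys: str   # HamNoSys description of the sign
    video_url: str  # reference video of the sign

# Keyed by concept, then by sign language (hypothetical sample data).
LEXICON: dict[str, dict[str, SignEntry]] = {
    "HOUSE": {
        "BSL": SignEntry("house", "<HamNoSys>", "videos/bsl/house.mp4"),
        "DGS": SignEntry("Haus", "<HamNoSys>", "videos/dgs/haus.mp4"),
        "LSF": SignEntry("maison", "<HamNoSys>", "videos/lsf/maison.mp4"),
        "GSL": SignEntry("spiti", "<HamNoSys>", "videos/gsl/spiti.mp4"),
    },
}

def other_languages(concept: str, matched: str) -> dict[str, SignEntry]:
    """After a sign is recognised, fetch the same concept in the
    other three sign languages for the avatar to perform."""
    return {lang: entry for lang, entry in LEXICON[concept].items()
            if lang != matched}

# e.g. a BSL match for HOUSE yields the GSL, DGS and LSF equivalents:
assert set(other_languages("HOUSE", "BSL")) == {"GSL", "DGS", "LSF"}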
Reference:
Ralph Elliott, Helen Cooper, John Glauert, Richard Bowden, François Lefebvre-Albaret, "Search-By-Example in Multilingual Sign Language Databases", In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Dundee, Scotland, 2011.
BibTeX Entry:
@INPROCEEDINGS{Elliott_Search_2011,
author = {Ralph Elliott and Helen Cooper and John Glauert and Richard Bowden and Fran\c{c}ois Lefebvre-Albaret},
title = {Search-By-Example in Multilingual Sign Language Databases},
booktitle = {Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT)},
year = {2011},
address = {Dundee, Scotland},
month = oct # { 23},
abstract = {We describe a prototype Search-by-Example or look-up tool
for signs, based on a newly developed 1000-concept sign
lexicon for four national sign languages (GSL, DGS, LSF,
BSL), which includes a spoken-language gloss, a HamNoSys
description, and a video for each sign. The look-up tool
combines an interactive sign recognition system, supported
by Kinect technology, with a real-time sign synthesis system
that uses a virtual human signer to present results to the
user. The user performs a sign in front of the system and
is presented with animations of the signs recognised as most
similar. The user also has the option to view any of these
signs performed in the other three sign languages. We describe
the supporting technology and architecture for this system,
and present some preliminary evaluation results.},
url = {http://personal.ee.surrey.ac.uk/Personal/H.Cooper/research/papers/SBE_SLTAT.pdf}
}