|SPCH10: Integration of speech and language technologies|
State-of-the-art recognition systems typically consist of several components, depending on the application. For instance, in a system for automatic subtitling of TV programs, there is a component for segmenting the audio data (e.g. speech versus non-speech), one for labeling the speakers, one for the recognition itself, one for turning sentences into (shorter) subtitles, and so on.
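Such a system can be seen as a chain of components that each read and extend a shared state. The sketch below illustrates this idea with stub components for the subtitling pipeline described above; all names and the two-words-per-subtitle rule are illustrative assumptions, not part of any real system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class State:
    """Shared data passed along the pipeline (placeholder fields)."""
    audio: list                               # raw audio samples
    segments: list = field(default_factory=list)
    speakers: list = field(default_factory=list)
    transcript: str = ""
    subtitles: list = field(default_factory=list)

def segment(state: State) -> State:
    # Stub: mark the whole signal as a single speech segment.
    state.segments = [(0, len(state.audio), "speech")]
    return state

def label_speakers(state: State) -> State:
    # Stub: assign every segment to one hypothetical speaker.
    state.speakers = ["spk1" for _ in state.segments]
    return state

def recognise(state: State) -> State:
    # Stub: a real recogniser would decode the audio here.
    state.transcript = "hello world"
    return state

def subtitle(state: State) -> State:
    # Stub: split the transcript into short subtitle lines
    # (here, an arbitrary two words per line).
    words = state.transcript.split()
    state.subtitles = [" ".join(words[i:i + 2]) for i in range(0, len(words), 2)]
    return state

def run_pipeline(state: State, components: List[Callable[[State], State]]) -> State:
    # Apply each component in turn; each stage enriches the shared state.
    for component in components:
        state = component(state)
    return state

if __name__ == "__main__":
    result = run_pipeline(State(audio=[0.0] * 16000),
                          [segment, label_speakers, recognise, subtitle])
    print(result.subtitles)
```

Because the components share a uniform interface, a demonstration tool could visualise the state after each stage and swap individual components in and out.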
To explain a recognition system and the interaction between its components, or to demonstrate improvements to one specific component, a good tool for visualising recognition systems is needed.
The aim of this project is therefore to build a flexible tool for demonstrating recognition systems. The choice of programming language can depend on the skills of the student. The idea is not to develop any specific component, but to learn enough about the typical components of recognition systems to anticipate and support their incorporation into the demonstration tool.