|SPCH02: Play chess using speech|
The aim of this project is the development of a speech interface for playing chess. Instead of using the keyboard, the player issues moves as fluent spoken natural-language instructions (e.g. "knight captures bishop", "pawn moves from e3 to e4").
The following components are available: a speech recognition engine, a chess engine and a visualisation tool.
The challenges of the project lie in two different fields: speech recognition and natural language understanding. The recognition component transforms speech input into text, while the understanding component maps the recognizer's output to appropriate chess actions.
For speech recognition the student will have to build an appropriate lexicon for the task of playing chess, making sure that at least the major terminology is covered. In addition, a language model has to be developed, based on a task-specific finite-state network. Acoustic modelling is provided by the recognition engine.
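A task-specific finite-state language model can be thought of as a small automaton over word classes. The sketch below is illustrative only: the state names, vocabulary, and table format are assumptions, not the actual recognition engine's grammar format, and a real lexicon would cover far more phrasings.

```python
# A minimal finite-state network accepting two chess-move utterance patterns:
#   "<piece> captures <piece>"  and  "<piece> moves from <square> to <square>".
# States, vocabulary, and structure are illustrative assumptions.

PIECES = {"pawn", "knight", "bishop", "rook", "queen", "king"}
SQUARES = {f + r for f in "abcdefgh" for r in "12345678"}

# Transition table: state -> list of (membership test, next state).
# "end" is the single accepting state.
NETWORK = {
    "start":  [(PIECES.__contains__, "piece")],
    "piece":  [({"captures"}.__contains__, "object"),
               ({"moves"}.__contains__, "moves")],
    "object": [(PIECES.__contains__, "end")],
    "moves":  [({"from"}.__contains__, "from")],
    "from":   [(SQUARES.__contains__, "src")],
    "src":    [({"to"}.__contains__, "to")],
    "to":     [(SQUARES.__contains__, "end")],
}

def accepts(utterance: str) -> bool:
    """Return True if the word sequence is a path from start to the accepting state."""
    state = "start"
    for word in utterance.lower().split():
        for is_member, nxt in NETWORK.get(state, []):
            if is_member(word):
                state = nxt
                break
        else:
            return False  # no outgoing arc matches this word
    return state == "end"
```

In a real system the same network would typically be expressed in the engine's own grammar format (e.g. a JSGF or SRGS grammar) rather than in application code.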
As for understanding, the student should construct a mapping between the recognized instruction and the intended action in the chess game. Such a mapping is not always straightforward: it must be robust against recognition errors, check the legality of moves, identify the intended referents, and handle low-level interaction with the user in case of illegal instructions. For example, if player 1 still has two knights on the board, the instruction "knight captures queen" requires inferring which of the knights occupies a square from which the capture is possible.
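The referent-resolution step in the knight example can be sketched as a geometric check: find which of the player's knights stands a knight's move away from the queen's square. The board representation and function names below are assumptions for illustration; a real implementation would also account for pins, turn order, and the full rules enforced by the chess engine.

```python
# Sketch: resolving "knight captures queen" when several knights remain.
# Squares use algebraic notation ("e4"); helper names are hypothetical.

KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def square_to_coords(sq: str) -> tuple[int, int]:
    """Convert algebraic notation like 'e4' to 0-based (file, rank)."""
    return ord(sq[0]) - ord("a"), int(sq[1]) - 1

def knights_that_can_capture(knights: list[str], target: str) -> list[str]:
    """Return the knight squares that are one knight's move from the target."""
    tf, tr = square_to_coords(target)
    return [sq for sq in knights
            if (tf - square_to_coords(sq)[0],
                tr - square_to_coords(sq)[1]) in KNIGHT_OFFSETS]
```

If the list comes back with exactly one square, the referent is resolved; if it is empty or contains several squares, the system would fall back on the low-level user interaction mentioned above (rejecting the move or asking which knight was meant).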