Student: Vicente Peruffo Minotto
Advisor: Prof. Dr. Claudio Rosito Jung
Title: Audiovisual Voice Activity Detection and Localization of Speech Sources
Research Area: Computer Graphics, Image Processing, and Interaction
Date: 05/07/2013
Time: 13:30
Location: Room 220 (council room), Building 43412 – Instituto de Informática
Examination Committee:
Prof. Dr. Eduardo Antônio Barros da Silva (UFRJ)
Prof. Dr. João Luiz Dihl Comba (UFRGS)
Prof. Dr. Marcelo Walter (UFRGS)
Committee Chair: Prof. Dr. Claudio Rosito Jung
Abstract: Given the trend toward human-machine interfaces that allow increasingly simple forms of interaction, it is only natural that research effort is put into techniques that seek to simulate the most conventional means of communication humans use: speech. In the human auditory system, voice is automatically processed by the brain in an effortless and effective way, commonly aided by visual cues such as mouth movement and the location of the speakers. This processing done by the brain includes two important components that speech-based communication requires: Voice Activity Detection (VAD) and Sound Source Localization (SSL). Consequently, VAD and SSL also serve as mandatory preprocessing steps for high-end Human-Computer Interface (HCI) applications in a computing environment, such as automatic speech recognition and speaker identification. However, VAD and SSL remain challenging problems in realistic acoustic scenarios, particularly in the presence of noise, reverberation, and multiple simultaneous speakers. In this work we propose approaches for tackling these problems using audiovisual information, both for the single-source and the competing-sources scenarios, exploiting distinct ways of fusing the audio and video modalities. Our work also employs a microphone array for the audio processing, which allows the spatial information of the acoustic signals to be explored through the state-of-the-art Steered Response Power (SRP) method. As an additional contribution, a very fast GPU version of the SRP is developed, so that real-time processing is achieved. Our experiments show an average accuracy of 95% when performing VAD of up to three simultaneous speakers and an average error of 10 cm when locating such speakers.
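The announcement does not detail the SRP formulation used in the thesis; as a rough illustration of the underlying technique, the sketch below implements the standard frequency-domain SRP with PHAT weighting in NumPy. All names and parameters (srp_phat, candidate_points, the assumed speed of sound, etc.) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def srp_phat(frames, mic_positions, candidate_points, fs, c=343.0):
    """Illustrative SRP-PHAT sketch (not the thesis implementation).

    frames: (M, N) array, one time-domain frame per microphone.
    mic_positions: (M, 3) microphone coordinates in meters.
    candidate_points: (P, 3) spatial grid of candidate source locations.
    fs: sampling rate in Hz; c: assumed speed of sound in m/s.
    Returns the index of the candidate point with maximum steered power.
    """
    M, N = frames.shape
    X = np.fft.rfft(frames, axis=1)             # per-microphone spectra
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)      # frequency bins (Hz)

    power = np.zeros(len(candidate_points))
    for p, q in enumerate(candidate_points):
        # Propagation delay from candidate point q to every microphone.
        delays = np.linalg.norm(mic_positions - q, axis=1) / c
        for i in range(M):
            for j in range(i + 1, M):
                # PHAT-weighted cross-power spectrum of microphone pair (i, j).
                cps = X[i] * np.conj(X[j])
                cps /= np.abs(cps) + 1e-12
                # Steer the pair toward q and accumulate the real response.
                tau = delays[i] - delays[j]
                power[p] += np.real(np.sum(cps * np.exp(2j * np.pi * freqs * tau)))
    return int(np.argmax(power))
```

Each candidate point is evaluated independently of the others, which is what makes SRP well suited to the kind of GPU parallelization mentioned in the abstract.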