Published on: 10/03/2011

Doctoral Thesis in Artificial Intelligence

UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL
INSTITUTO DE INFORMÁTICA
PROGRAMA DE PÓS-GRADUAÇÃO EM COMPUTAÇÃO
———————————————————
DOCTORAL THESIS DEFENSE

Student: Milton Roberto Heinen
Advisor: Prof. Dr. Paulo Martins Engel

Title: A Connectionist Approach for Incremental Function Approximation and On-line Tasks
Research Area: Artificial Intelligence

Date: 15/03/2011
Time: 14:00
Location: Auditório José M. V. Castilho, building 43424 (72)

Examination Committee:

Prof. Dr. Denis F. Wolf (USP – São Carlos)
Prof. Dr. Mauro Roisenberg (UFSC)
Prof. Dr. Luís da Cunha Lamb (UFRGS)

Committee Chair: Prof. Dr. Paulo Martins Engel

Abstract: This work proposes IGMN (Incremental Gaussian Mixture Network), a new connectionist approach for incremental function approximation and real-time tasks. It is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with unique features that are not present in most ANN models such as MLP, RBF and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. The main advantages of IGMN over other ANN models are: (i) IGMN learns incrementally using a single scan over the training data (each training pattern can be used immediately and then discarded); (ii) it can produce reasonable estimates from few training samples; (iii) the learning process can proceed perpetually as new training data arrive (there are no separate phases for learning and recalling); (iv) IGMN handles the stability-plasticity dilemma and does not suffer from catastrophic interference; (v) the neural network topology is defined automatically and incrementally (new units are added whenever necessary); (vi) IGMN is not sensitive to initialization conditions (in fact there are no random initializations or decisions in IGMN); (vii) the same neural network can be used to solve both forward and inverse problems (the information flow is bidirectional), even in regions where the target data are multi-valued; and (viii) IGMN can provide the confidence levels of its estimates. Another relevant contribution of this thesis is the use of IGMN in important state-of-the-art machine learning and robotics tasks such as model identification, incremental concept formation, reinforcement learning, robotic mapping and time series prediction. In fact, the efficiency of IGMN and its representational power expand the set of potential tasks to which neural networks can be applied, thus opening new research directions in which important contributions can be made. Several experiments using the proposed model demonstrate that IGMN is also robust to overfitting, does not require fine-tuning of its configuration parameters and has very good computational performance, thus allowing its use in real-time control applications. Therefore, IGMN is a very useful machine learning tool for incremental function approximation and on-line prediction.

Keywords: Machine learning, artificial neural networks, incremental learning, Bayesian methods, Gaussian mixture models, function approximation, regression, clustering, reinforcement learning, autonomous mobile robots.
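
To make the idea described in the abstract more concrete, the sketch below illustrates single-pass, incremental learning of a Gaussian mixture over the joint input-output space, with regression obtained by conditioning the mixture on the inputs. This is a minimal illustration of that general technique, not the IGMN algorithm from the thesis: the class name IncrementalGMR, the novelty threshold, the initial spread, and the update rules are illustrative assumptions introduced here.

```python
# A simplified sketch of incremental Gaussian mixture regression (illustrative
# only, NOT the exact IGMN algorithm from the thesis): components are created
# on demand over the joint (x, y) space, updated online with responsibility-
# weighted statistics, and queried via responsibility-weighted conditional means.
import numpy as np


class IncrementalGMR:
    def __init__(self, dim_x, dim_y, novelty=0.01, sigma_init=1.0):
        self.dx, self.dy = dim_x, dim_y
        self.novelty = novelty        # density threshold for spawning a new component
        self.sigma_init = sigma_init  # initial isotropic spread of a new component
        self.means, self.covs, self.counts = [], [], []

    def _pdf(self, z, mu, cov):
        # Multivariate Gaussian density.
        d = len(z)
        diff = z - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(cov))
        return norm * np.exp(-0.5 * diff @ inv @ diff)

    def _spawn(self, z):
        # Create a new component centered on the unexplained sample.
        self.means.append(z.copy())
        self.covs.append(np.eye(self.dx + self.dy) * self.sigma_init)
        self.counts.append(1.0)

    def update(self, x, y):
        # Single-pass update: the sample is used once and can then be discarded.
        z = np.concatenate([np.atleast_1d(x), np.atleast_1d(y)]).astype(float)
        if not self.means:
            self._spawn(z)
            return
        likes = np.array([self._pdf(z, m, c) for m, c in zip(self.means, self.covs)])
        if likes.max() < self.novelty:   # no component explains the sample: add a unit
            self._spawn(z)
            return
        post = likes / likes.sum()       # responsibilities (soft assignment)
        for j, w in enumerate(post):
            self.counts[j] += w
            eta = w / self.counts[j]     # decaying, per-component learning rate
            diff = z - self.means[j]
            self.means[j] = self.means[j] + eta * diff
            self.covs[j] = (1.0 - eta) * self.covs[j] + eta * np.outer(diff, diff)

    def predict(self, x):
        # Regression: weight each component's conditional mean E[y | x].
        x = np.atleast_1d(x).astype(float)
        num, den = np.zeros(self.dy), 0.0
        for mu, cov, n in zip(self.means, self.covs, self.counts):
            mu_x, mu_y = mu[:self.dx], mu[self.dx:]
            Sxx = cov[:self.dx, :self.dx]
            Syx = cov[self.dx:, :self.dx]
            w = n * self._pdf(x, mu_x, Sxx)  # component weight times marginal likelihood
            cond = mu_y + Syx @ np.linalg.inv(Sxx) @ (x - mu_x)
            num += w * cond
            den += w
        return num / den if den > 0.0 else num


# Usage: one online pass over randomly ordered samples of y = sin(x).
model = IncrementalGMR(dim_x=1, dim_y=1, novelty=0.05)
for x in np.random.uniform(0.0, 2.0 * np.pi, 500):
    model.update(x, np.sin(x))           # each sample is seen exactly once
print(model.predict(np.pi / 2.0))        # expected to be close to 1.0
```

In this toy setting the model is trained in a single scan and queried immediately afterwards, mirroring the "no separate learning and recalling phases" property highlighted in the abstract. The bidirectional (inverse) readout mentioned there would condition the same joint mixture on y instead of x; that extension is omitted here for brevity.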