Published on: 21/03/2011

Doctoral Thesis in Parallel and Distributed Processing

UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL

INSTITUTO DE INFORMÁTICA

PROGRAMA DE PÓS-GRADUAÇÃO EM COMPUTAÇÃO

———————————————————

DOCTORAL THESIS DEFENSE

Student: Márcia Cristina Cera

Advisor: Prof. Dr. Philippe Olivier Alexandre Navaux

Co-advisor: Prof. Dr. Nicolas Bruno Maillard

Title: Providing Adaptability to MPI Applications on Current Parallel Architectures

Research Area: Parallel and Distributed Processing

Date: 24/03/2011

Time: 13:30

Venue: Auditório Verde

Examination Committee:

Prof. Dr. Jairo Panetta (CPTEC/INPE)

Profa. Dra. Andrea Schwertner Charão (UFSM)

Prof. Dr. Alexandre da Silva Carissimi (UFRGS)

Committee Chair: Prof. Dr. Philippe Olivier Alexandre Navaux

Abstract:

Currently, adaptability is a desired feature in parallel applications. For instance, the increasing number of users competing for the resources of parallel architectures causes dynamic changes in the set of available processors. Adaptive applications are able to execute on a volatile set of processors, providing better resource utilization; this adaptive behavior is known as malleability. Another example comes from the constant evolution of multi-core architectures, which increase the number of cores with each new generation of chips. Adaptability is the key to making parallel programs portable from one multi-core machine to another, so that a program can adapt the unfolding of its parallelism to the specific degree of parallelism of the target architecture. This adaptive behavior can be seen as a particular case of evolutivity. In this sense, this thesis focuses on: (i) malleability, to adapt the execution of parallel applications to changes in processor availability; and (ii) evolutivity, to adapt the unfolding of the parallelism at runtime according to the properties of the architecture and of the input data. The open issue is thus: “How to provide and support adaptive applications?”. This thesis aims to answer this question in the context of MPI (Message-Passing Interface), the standard parallel API for HPC in distributed-memory environments. Our work is based on the MPI-2 features that allow spawning processes at runtime, adding flexibility to MPI applications.

Malleable MPI applications use dynamic process creation to expand themselves through growth actions (to use additional processors). Shrinkage actions (to release processors) end the execution of the MPI processes on the required processors in such a way that the application’s data are preserved. Note that malleable applications require runtime-environment support to execute, since they must be notified about processor availability. Evolving MPI applications follow the explicit task parallelism paradigm to allow their runtime adaptation: dynamic process creation is used to unfold the parallelism, i.e., to create new MPI tasks on demand. To provide these applications, we defined abstract MPI tasks, implemented the synchronization among these tasks through message exchanges, and proposed an approach to adjust the granularity of MPI tasks, aiming at efficiency in distributed-memory environments.

Experimental results validated our hypothesis that adaptive applications can be provided using the MPI-2 features. Additionally, this thesis identifies the requirements to support these applications in cluster environments. Malleable MPI applications were able to improve cluster utilization, and the explicit-task ones were able to adapt the unfolding of their parallelism to the target architecture, showing that this programming paradigm can also be efficient in distributed-memory contexts.


Keywords: MPI, Adaptability, Malleability, Explicit Task Parallelism