[petsc4py] What should be the 'default' communicator?

Lisandro Dalcin dalcinl at gmail.com
Mon Aug 25 13:22:12 CDT 2008


On Mon, Aug 25, 2008 at 1:08 PM, Matthew Knepley <knepley at gmail.com> wrote:
> I would still maintain that PETSC_COMM_WORLD is the correct default. There
> are better paradigms for embarrassingly parallel operation, like Condor. PETSc
> is intended for parallel, domain decomposition runs.

Yes, you are completely right. But I believe that many people still
use PETSc in a sequential way simply because PETSc is full-featured,
well designed, easy to learn, etc. So, despite PETSc being intended
for parallel, domain decomposition applications, many people are going
to use it for sequential apps and for embarrassingly parallel runs.

To be honest, I've never looked much at paradigms like Condor, and
using them means learning yet another framework. Another anecdote: a
guy sent me a mail with questions about using mpi4py to solve an
embarrassingly parallel problem. I asked why he was trying to use such
a "heavyweight" approach. He answered that he was tired of the
complications and poor performance of a Grid-based approach, and that
running 'mpiexec' on a Python script with some coordinating MPI calls
was far easier to set up, extend, and maintain, and had better overall
running times than submitting jobs to "The Grid".
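
For the record, that pattern really is tiny. Here is a minimal sketch
of it with mpi4py; the parameter list and run_case() are made up for
illustration:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    def run_case(p):
        # hypothetical sequential solver for one independent case
        return p * p

    parameters = list(range(100))   # one independent case per entry
    mine = parameters[rank::size]   # static round-robin partition
    local = [run_case(p) for p in mine]

    # collect the partial results at the master process
    results = comm.gather(local, root=0)
    if rank == 0:
        print("got %d result lists" % len(results))

Run it with 'mpiexec -n 8 python script.py' and you are done; no
queues, no middleware.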


> On Mon, Aug 25, 2008 at 10:54 AM, Lisandro Dalcin <dalcinl at gmail.com> wrote:
>> After working hard on mpi4py, this week I'll spend my time cleaning
>> up and adding features to the new Cython-based petsc4py. Then I'll
>> be posting questions to this list asking for advice.
>>
>> In all calls that create new PETSc objects, I've decided to make the
>> 'comm' argument optional. If the communicator is not passed,
>> PETSC_COMM_WORLD is currently used. This is the approach PETSc uses in
>> some C++ calls implemented through PetscPolymorphicFunction().
>>
>> But now I believe that is wrong, and that PETSC_COMM_SELF should be
>> the default. Or perhaps even better, I should let users set the
>> default communicator used by petsc4py to create new (parallel)
>> objects.
>>
>> An anecdote: some time ago, a petsc4py user wrote a sequential code
>> and created objects without passing communicator arguments. Next he
>> wanted to solve many of those problems in different worker processes
>> in an "embarrassingly parallel" fashion and collect the results at
>> the master process. Of course, he ran into trouble. I then asked him
>> to initialize PETSc in such a way that PETSC_COMM_WORLD was actually
>> PETSC_COMM_SELF (by setting the world comm before PetscInitialize()).
>> This mostly works, but it has a problem: we have lost the actual
>> PETSC_COMM_WORLD, so we are no longer able to create a parallel
>> object after PetscInitialize().
>>
>> Any thoughts?
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
>
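
To make the tradeoff above concrete, here is roughly what the two
choices look like at the API level. This is a minimal sketch; the
exact create() signature in the new Cython-based petsc4py may end up
slightly different:

    from petsc4py import PETSc

    # comm omitted: with the current default it is PETSC_COMM_WORLD,
    # so under 'mpiexec -n 4' this vector is distributed over 4 processes
    x = PETSc.Vec().create()
    x.setSizes(100)
    x.setFromOptions()

    # comm passed explicitly: a purely local, sequential vector,
    # which is what an embarrassingly parallel code actually wants
    y = PETSc.Vec().create(comm=PETSc.COMM_SELF)
    y.setSizes(100)
    y.setFromOptions()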
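
And this is the workaround from the anecdote, written down as a
sketch. I am assuming here an init() that accepts a communicator; the
point is what gets lost:

    import petsc4py
    from mpi4py import MPI

    # set the world communicator before PetscInitialize(); after this,
    # PETSC_COMM_WORLD *is* PETSC_COMM_SELF in every process
    # (assumes init() takes a 'comm' argument)
    petsc4py.init(comm=MPI.COMM_SELF)
    from petsc4py import PETSc

    # every object is now sequential, even with no 'comm' argument ...
    x = PETSc.Vec().create()  # lives on PETSC_COMM_WORLD == COMM_SELF

    # ... but the real world communicator is gone: there is no way left
    # to create a truly parallel PETSc object in this run.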



-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



