PETSc and SLEPc with multiple process groups
Yujie
recrusader at gmail.com
Sat Sep 20 00:36:55 CDT 2008
Dear Hong:
Thank you very much for your information, and thanks a lot for the pointers. The basic
framework for generating subcommunicators is useful for me. However, I also need to
scatter matrices and vectors, so, as you said, I have to think about how to scatter the data.
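For the vector piece, the rough idea I have in mind is something like the following
(an untested sketch only; error checking is omitted, the VecScatter signatures vary a
bit between PETSc versions, and rank 0 would still have to push the gathered entries
to the subcommunicator ranks with plain MPI and VecSetValues()):

#include <petscvec.h>

/* Untested sketch: pull a parallel Vec onto rank 0 of its communicator.
   From there the entries can be re-sent to the ranks of a subcommunicator
   and inserted into a new Vec with VecSetValues()/VecAssemblyBegin()/End(). */
PetscErrorCode GatherToZero(Vec xpar, Vec *xseq)
{
  VecScatter ctx;

  VecScatterCreateToZero(xpar, &ctx, xseq);
  VecScatterBegin(ctx, xpar, *xseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(ctx, xpar, *xseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterDestroy(&ctx);  /* some PETSc versions take the object rather than a pointer */
  return 0;
}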
Regards,
Yujie
On Thu, Sep 18, 2008 at 6:19 AM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>
> Yujie,
>
> See the data structure "PetscSubcomm"
> in ~petsc/include/petscsys.h
>
> An example of its use is PCREDUNDANT
> (see src/ksp/pc/impls/redundant/redundant.c),
> in which we first split the parent communicator of N processors
> into n subcommunicators for the parallel LU preconditioner,
> and then scatter the solution from the subcommunicators back to
> the parent communicator.
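>
> A minimal, untested sketch of how that structure is typically used (the exact
> calls and field names differ between PETSc versions, so please check the
> petscsys.h in your tree; "nsub" is the number of subcommunicators you want):
>
> PetscSubcomm psubcomm;
> PetscSubcommCreate(PETSC_COMM_WORLD, &psubcomm);
> PetscSubcommSetNumber(psubcomm, nsub);                    /* split into nsub child comms */
> PetscSubcommSetType(psubcomm, PETSC_SUBCOMM_CONTIGUOUS);
> /* psubcomm->comm is the subcommunicator this process belongs to, and
>    psubcomm->dupparent is a duplicate of the parent ordered so that
>    scatters between parent and children are convenient, as in redundant.c */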
>
> Note that the scatter used there is specific to our particular
> application. You will likely need to implement your own scattering
> according to your needs.
>
> Hong
>
>
> On Thu, 18 Sep 2008, Lisandro Dalcin wrote:
>
> On Wed, Sep 17, 2008 at 9:22 PM, Yujie <recrusader at gmail.com> wrote:
>>
>>> Thank you very much, Lisandro. You are right. It looks a little
>>> difficult to "transfer" data from one node to N nodes, or from N nodes
>>> to M nodes. My method is to first gather all the data on one node and
>>> then redistribute it across the N or M nodes. Do you have any thoughts
>>> on this? Is it time-consuming? How does PETSc support this type of
>>> operation? Thanks a lot.
>>>
>>
>> Mmm... I believe there is no way to do that with PETSc alone; you just
>> have to make MPI calls. Perhaps if you can give me a few more details
>> about your communication patterns, then I can give you a better
>> suggestion.
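>>
>> For instance, something along these lines with plain MPI (an untested
>> sketch; I am assuming that world rank 0 is also rank 0 of group 0, that
>> "leader[g]" holds the world rank of each group's local rank 0, and that
>> the buffers and lengths are already set up):
>>
>> /* world rank 0 keeps its own group's chunk and sends the others
>>    to the leader (local rank 0) of each group */
>> if (world_rank == 0) {
>>   memcpy(mychunk, chunk[0], mylen*sizeof(double));
>>   for (int g = 1; g < ngroups; g++)
>>     MPI_Send(chunk[g], chunk_len[g], MPI_DOUBLE, leader[g], 0, MPI_COMM_WORLD);
>> } else if (sub_rank == 0) {
>>   MPI_Recv(mychunk, mylen, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>> }
>> /* each leader then distributes the chunk inside its own group */
>> MPI_Bcast(mychunk, mylen, MPI_DOUBLE, 0, subcomm);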
>>
>>
>>> Regards,
>>>
>>> Yujie
>>>
>>> On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin <dalcinl at gmail.com>
>>> wrote:
>>>
>>>>
>>>> As long as you create your SLEPc objects with the appropriate
>>>> communicator (i.e., the one obtained with MPI_Comm_split), everything
>>>> should just work. Of course, you will have to make the appropriate MPI
>>>> calls to 'transfer' data from your N-process group to the M groups, and
>>>> the other way around to collect the results.
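>>>>
>>>> Roughly like this (an untested sketch; "nsub" is the number of groups,
>>>> and the objects are created directly on the split communicator):
>>>>
>>>> MPI_Comm subcomm;
>>>> int      rank, color;
>>>> Mat      A;
>>>> EPS      eps;
>>>>
>>>> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>>>> color = rank % nsub;                      /* which group this rank joins */
>>>> MPI_Comm_split(PETSC_COMM_WORLD, color, rank, &subcomm);
>>>>
>>>> MatCreate(subcomm, &A);                   /* the operator lives on the subgroup */
>>>> /* ... set sizes, preallocate, and assemble A ... */
>>>> EPSCreate(subcomm, &eps);                 /* SLEPc solver on the same subcomm */
>>>> EPSSetOperators(eps, A, PETSC_NULL);
>>>> EPSSetFromOptions(eps);
>>>> EPSSolve(eps);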
>>>>
>>>>
>>>> On Wed, Sep 17, 2008 at 8:25 PM, Yujie <recrusader at gmail.com> wrote:
>>>>
>>>>> You are right :). I am still thinking through the whole framework for my
>>>>> code. Thank you, Lisandro. In step 3 there are M different SLEPc-based
>>>>> process groups, which should mean M communication domains for PETSc and
>>>>> SLEPc (I have created a communication domain for them). Is that OK?
>>>>> Thanks again.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Yujie
>>>>>
>>>>> On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin <dalcinl at gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> I bet you have not even tried to actually implement and run this :-).
>>>>>>
>>>>>> This should work. If not, I would consider that a bug. Let us know
>>>>>> about any problems you run into.
>>>>>>
>>>>>>
>>>>>> On Wed, Sep 17, 2008 at 7:59 PM, Yujie <recrusader at gmail.com> wrote:
>>>>>>
>>>>>>> Hi, Petsc Developer:
>>>>>>>
>>>>>>> Currently, I am using SLEPc (which is built on top of PETSc) for my application.
>>>>>>>
>>>>>>> Assuming I have a cluster with N nodes.
>>>>>>>
>>>>>>> My code looks like this:
>>>>>>>
>>>>>>> main()
>>>>>>> {
>>>>>>>   step 1: initialize PETSc and SLEPc;
>>>>>>>   step 2: use PETSc;  (all N nodes in one process group)
>>>>>>>   step 3: use SLEPc;  (the N nodes are divided into M process groups;
>>>>>>>                        these groups are independent, but they need to
>>>>>>>                        communicate with each other)
>>>>>>>   step 4: use PETSc;  (all N nodes in one process group)
>>>>>>> }
>>>>>>>
>>>>>>> My method is:
>>>>>>>
>>>>>>> When using SLEPc, MPI_Comm_split() is used to divide the N nodes into M
>>>>>>> process groups, that is, to generate M communication domains. Then
>>>>>>> MPI_Intercomm_create() creates inter-group communication domains to
>>>>>>> handle the communication between the M process groups.
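>>>>>>>
>>>>>>> Concretely, I mean something like this (untested; I assume the local
>>>>>>> leader of every group is its rank 0, and "peer" is the world rank of
>>>>>>> the leader of the remote group I want to talk to):
>>>>>>>
>>>>>>> MPI_Comm subcomm, intercomm;
>>>>>>> int rank, color;
>>>>>>>
>>>>>>> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>>>>> color = rank % M;                 /* assign each rank to one of the M groups */
>>>>>>> MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);
>>>>>>>
>>>>>>> /* inter-communicator between my group and the peer group, using
>>>>>>>    MPI_COMM_WORLD as the bridge communicator and tag 0 */
>>>>>>> MPI_Intercomm_create(subcomm, 0, MPI_COMM_WORLD, peer, 0, &intercomm);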
>>>>>>>
>>>>>>> I don't know whether this method is OK with regard to PETSc and SLEPc,
>>>>>>> because SLEPc is built on top of PETSc. In step 1, PETSc and SLEPc are
>>>>>>> initialized with all N nodes in one communication domain. PETSc in step 2
>>>>>>> uses this communication domain. However, in step 3, I need to divide the
>>>>>>> N nodes and generate M communication domains. I don't know how PETSc and
>>>>>>> SLEPc can handle this change. If the method doesn't work, could you give
>>>>>>> me some advice? Thanks a lot.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Yujie
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Lisandro Dalcín
>>>>>> ---------------
>>>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>>>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>>>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>>>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>>>>>> Tel/Fax: +54-(0)342-451.1594
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Lisandro Dalcín
>>>> ---------------
>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>>>> Tel/Fax: +54-(0)342-451.1594
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> Lisandro Dalcín
>> ---------------
>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>> Tel/Fax: +54-(0)342-451.1594
>>
>>