PETSc and SLEPc with multiple process groups
Yujie
recrusader at gmail.com
Sat Sep 20 00:33:53 CDT 2008
Dear Lisandro:
Thank you very much for your help.
Our basic idea is
main()
{
step 1: Initialize PETSc and SLEPc;
step 2: Use PETSc; (use all N nodes in one process group)
step 3: Use SLEPc; (the N nodes are divided into M process groups; these
groups are independent, but they need to communicate with each other)
step 4: Use PETSc; (use all N nodes in one process group)
}
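For the split in step 3, each rank would compute a "color" and pass it to MPI_Comm_split() so that ranks sharing a color end up in the same subcommunicator. A minimal plain-C sketch of that rank-to-group mapping follows; the helper name group_color and the even N/M split are assumptions, and the MPI_Comm_split() call itself is only shown in a comment so the sketch compiles without MPI:

```c
/* Hypothetical helper: map a rank in the size-nprocs parent communicator
 * to one of ngroups equally sized groups.
 * Assumption: ngroups divides nprocs evenly. */
int group_color(int rank, int nprocs, int ngroups)
{
    int per_group = nprocs / ngroups;  /* ranks per group */
    return rank / per_group;           /* ranks with the same color share a group */
}

/* The color would then be used roughly as (needs MPI, not compiled here):
 *
 *   MPI_Comm subcomm;
 *   MPI_Comm_split(PETSC_COMM_WORLD, group_color(rank, N, M), rank, &subcomm);
 *
 * and subcomm passed to MatCreate()/EPSCreate() for the step-3 objects. */
```

With N = 8 ranks and M = 4 groups, ranks 0-1 would form group 0, ranks 2-3 group 1, and so on.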
Assume the dimension of the whole matrix is N*N when all N nodes are used in
one process group. At the end of step 2, I need to produce M different
matrices and vectors (I should be able to store each of them on a single
node, one in each of the M process groups). Before step 3, I need to
scatter the M matrices and vectors within the M process groups. Then I can
compute with the M matrices and vectors in the M subcommunicators. After
the calculation, I need to collect the M solution vectors back into the
parent communicator. In step 4, I use this solution for further computation.
Could you give me any further advice? Thanks again.
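The scatter/gather pattern described above could be sketched as follows, assuming a contiguous block-row split of the global N*N matrix across the M groups; the helper name group_row_range is made up for illustration, and the actual MPI transfers are only indicated in the comment:

```c
/* Hypothetical helper: compute the contiguous row range [start, end) of
 * the global n*n matrix owned by group g out of m groups. When m does not
 * divide n, the first (n % m) groups each receive one extra row.
 *
 * These ranges would drive the MPI sends that scatter the M matrices and
 * vectors to the group roots before step 3, and the matching gathers of
 * the M solution vectors back into the parent communicator afterwards. */
void group_row_range(int n, int m, int g, int *start, int *end)
{
    int base  = n / m;   /* rows every group gets                        */
    int extra = n % m;   /* leftover rows, one each to the first groups  */
    *start = g * base + (g < extra ? g : extra);
    *end   = *start + base + (g < extra ? 1 : 0);
}
```

For example, n = 10 rows over m = 3 groups would give the ranges [0,4), [4,7), and [7,10).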
Regards,
Yujie
On Thu, Sep 18, 2008 at 5:44 AM, Lisandro Dalcin <dalcinl at gmail.com> wrote:
> On Wed, Sep 17, 2008 at 9:22 PM, Yujie <recrusader at gmail.com> wrote:
> > Thank you very much, Lisandro. You are right. It looks a little
> > difficult to "transfer" data from one node to N nodes, or from N nodes
> > to M nodes. My method is to first gather all the data onto one node and
> > then redistribute it across the N or M nodes. Do you have any ideas
> > about this? Is it time-consuming? How does PETSc support this type of
> > operation? Thanks a lot.
>
> Mmm... I believe there is no way to do that with PETSc. You just have
> to make MPI calls. Perhaps if you can give me a bit more detail
> about your communication patterns, then I can give you a good
> suggestion.
>
> >
> > Regards,
> >
> > Yujie
> >
> > On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin <dalcinl at gmail.com>
> wrote:
> >>
> >> As long as you create your SLEPc objects with the appropriate
> >> communicator (i.e., the one obtained with MPI_Comm_split), then all
> >> should just work. Of course, you will have to make appropriate MPI
> >> calls to 'transfer' data from your N group to the many M groups, and
> >> the other way around to collect results.
> >>
> >>
> >> On Wed, Sep 17, 2008 at 8:25 PM, Yujie <recrusader at gmail.com> wrote:
> >> > You are right :). I am thinking about the whole framework for my
> >> > codes. Thank you, Lisandro. In step 3, there are M different
> >> > SLEPc-based process groups, which should mean M communicators for
> >> > PETSc and SLEPc (I have created a communicator for each of them).
> >> > Is that OK? Thanks again.
> >> >
> >> > Regards,
> >> >
> >> > Yujie
> >> >
> >> > On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin <dalcinl at gmail.com>
> >> > wrote:
> >> >>
> >> >> I bet you have not even tried to actually implement and run this :-).
> >> >>
> >> >> This should work. If not, I would consider that a bug. Let us know
> >> >> of any problems you have.
> >> >>
> >> >>
> >> >> On Wed, Sep 17, 2008 at 7:59 PM, Yujie <recrusader at gmail.com> wrote:
> >> >> > Hi, PETSc developers:
> >> >> >
> >> >> > Currently, I am using SLEPc for my application. It is based on
> >> >> > PETSc.
> >> >> >
> >> >> > Assuming I have a cluster with N nodes.
> >> >> >
> >> >> > My codes are like
> >> >> >
> >> >> > main()
> >> >> >
> >> >> > {
> >> >> >
> >> >> > step 1: Initialize PETSc and SLEPc;
> >> >> >
> >> >> > step 2: Use PETSc; (use all N nodes in one process group)
> >> >> >
> >> >> > step 3: Use SLEPc; (the N nodes are divided into M process groups;
> >> >> > these groups are independent, but they need to communicate with
> >> >> > each other)
> >> >> >
> >> >> > step 4: Use PETSc; (use all N nodes in one process group)
> >> >> >
> >> >> > }
> >> >> >
> >> >> > My method is:
> >> >> >
> >> >> > When using SLEPc, MPI_Comm_split() is used to divide the N nodes
> >> >> > into M process groups, which means generating M communicators.
> >> >> > Then MPI_Intercomm_create() creates inter-group communicators to
> >> >> > handle the communication between the M process groups.
> >> >> >
> >> >> > I don't know whether this method is OK with respect to PETSc and
> >> >> > SLEPc, because SLEPc is developed on top of PETSc. In step 1,
> >> >> > PETSc and SLEPc are initialized with all N nodes in one
> >> >> > communicator. PETSc in step 2 uses this communicator. However, in
> >> >> > step 3, I need to divide all N nodes and generate M communicators.
> >> >> > I don't know how PETSc and SLEPc can handle this change. If the
> >> >> > method doesn't work, could you give me some advice? Thanks a lot.
> >> >> >
> >> >> > Regards,
> >> >> >
> >> >> > Yujie
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Lisandro Dalcín
> >> >> ---------------
> >> >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> >> >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> >> >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> >> >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> >> >> Tel/Fax: +54-(0)342-451.1594
> >> >>
> >> >
> >> >
> >>
> >>
> >>
> >
> >
>
>
>