<div dir="ltr">Dear Hong:<br><br>Thank you very much for your information. The basic framework for generating subcommunicator is useful for me. However, I need to scatter matrixs and vectors. Just like what you said, I need to consider how to scatter data. thanks a lot.<br>
<br>Regards,<br>Yujie<br><br><div class="gmail_quote">On Thu, Sep 18, 2008 at 6:19 AM, Hong Zhang <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
Yujie,<br>
<br>
See the data structure "PetscSubcomm"<br>
in ~petsc/include/petscsys.h<br>
<br>
An example of its implementation is PCREDUNDANT<br>
(see src/ksp/pc/impls/redundant/redundant.c),<br>
for which we first split the parent communicator with N processors<br>
into n subcommunicators for a parallel LU preconditioner,<br>
then scatter the solution from the subcommunicator back to<br>
the parent communicator.<br>
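<br>
For reference, a minimal sketch of driving PetscSubcomm (the function names<br>
below follow the current PETSc interface; older versions set everything in<br>
PetscSubcommCreate() and read the struct fields in petscsys.h directly):<br>
<pre>
/* Sketch only: split a parent communicator into nsub subcommunicators.
   Keep the PetscSubcomm alive as long as the child communicator is in use,
   then free it with PetscSubcommDestroy(). */
#include <petscsys.h>

PetscErrorCode SplitIntoSubcomms(MPI_Comm parent, PetscInt nsub,
                                 PetscSubcomm *psubcomm, MPI_Comm *child)
{
  PetscSubcommCreate(parent, psubcomm);
  PetscSubcommSetNumber(*psubcomm, nsub);                    /* how many subcommunicators */
  PetscSubcommSetType(*psubcomm, PETSC_SUBCOMM_CONTIGUOUS);  /* or PETSC_SUBCOMM_INTERLACED */
  *child = PetscSubcommChild(*psubcomm);  /* the subcommunicator this process belongs to */
  return 0;
}
</pre>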
<br>
Note that the scatter used there is specific to our particular<br>
application. You will likely need to implement your own scattering<br>
according to your needs.<br><font color="#888888">
<br>
Hong</font><div><div></div><div class="Wj3C7c"><br>
<br>
On Thu, 18 Sep 2008, Lisandro Dalcin wrote:<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
On Wed, Sep 17, 2008 at 9:22 PM, Yujie <<a href="mailto:recrusader@gmail.com" target="_blank">recrusader@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Thank you very much, Lisandro. You are right. It looks a little<br>
difficult to "transfer" data from one node to "N" nodes, or from N nodes to M<br>
nodes. My method is to first send all the data to one node and then redistribute<br>
it across the "N" or "M" nodes. Do you have any thoughts on this? Is it time-consuming?<br>
How does PETSc support this type of operation? Thanks a lot.<br>
</blockquote>
<br>
Mmm... I believe there is no way to do that with PETSc; you just have<br>
to make MPI calls. Perhaps if you can give me a bit more detail<br>
about your communication patterns, then I can give you a good<br>
suggestion.<br>
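<br>
In the meantime, here is a plain-MPI sketch of the gather-then-redistribute<br>
approach you describe (a hypothetical helper, shown for an array of doubles;<br>
the same pattern applies to matrix or vector entries, at the cost of holding<br>
the whole data set on every process):<br>
<pre>
/* Sketch only: gather a distributed array on rank 0 of the parent
   communicator, broadcast it, and let each process keep the block it owns
   under the subcommunicator's layout. */
#include <mpi.h>
#include <stdlib.h>

void RedistributeToSubcomm(double *local, int nlocal,          /* owned on parent comm */
                           MPI_Comm parent, MPI_Comm subcomm,
                           double **sublocal, int *nsublocal)  /* owned on subcomm */
{
  int prank, psize, srank, ssize, i, q, r, offset, ntotal = 0;
  int *counts, *displs;
  double *all;

  MPI_Comm_rank(parent, &prank);  MPI_Comm_size(parent, &psize);
  MPI_Comm_rank(subcomm, &srank); MPI_Comm_size(subcomm, &ssize);

  /* 1. gather the piece sizes, then the data, on rank 0 of the parent comm */
  counts = (int*)malloc(psize * sizeof(int));
  displs = (int*)malloc(psize * sizeof(int));
  MPI_Gather(&nlocal, 1, MPI_INT, counts, 1, MPI_INT, 0, parent);
  if (prank == 0)
    for (i = 0; i < psize; i++) { displs[i] = ntotal; ntotal += counts[i]; }
  MPI_Bcast(&ntotal, 1, MPI_INT, 0, parent);
  all = (double*)malloc(ntotal * sizeof(double));
  MPI_Gatherv(local, nlocal, MPI_DOUBLE, all, counts, displs, MPI_DOUBLE, 0, parent);

  /* 2. broadcast the full array; each process keeps the contiguous block it
        owns in the subcommunicator's (block-distributed) layout */
  MPI_Bcast(all, ntotal, MPI_DOUBLE, 0, parent);
  q = ntotal / ssize;  r = ntotal % ssize;
  *nsublocal = q + (srank < r ? 1 : 0);
  offset     = srank * q + (srank < r ? srank : r);
  *sublocal  = (double*)malloc(*nsublocal * sizeof(double));
  for (i = 0; i < *nsublocal; i++) (*sublocal)[i] = all[offset + i];

  free(all); free(counts); free(displs);
}
</pre>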
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
Regards,<br>
<br>
Yujie<br>
<br>
On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin <<a href="mailto:dalcinl@gmail.com" target="_blank">dalcinl@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
As long as you create your SLEPc objects with the appropriate<br>
communicator (i.e. the one obtained with MPI_Comm_split), then everything<br>
should just work. Of course, you will have to make the appropriate MPI<br>
calls to 'transfer' data from your N-process group to the M subgroups, and<br>
the other way around to collect the results.<br>
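<br>
For instance, a sketch with SLEPc's EPS eigensolver (an illustration only;<br>
the matrix setup is a placeholder for whatever data you move to the group):<br>
<pre>
/* Sketch only: every object is created on the split communicator, so the
   solve is collective over that group and nothing else. */
#include <slepceps.h>

PetscErrorCode SolveOnGroup(MPI_Comm subcomm, PetscInt n)
{
  Mat A;
  EPS eps;

  MatCreate(subcomm, &A);                        /* matrix lives on this group only */
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  /* ... fill A with the data transferred to this group ... */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  EPSCreate(subcomm, &eps);                      /* same communicator as A */
  EPSSetOperators(eps, A, NULL);                 /* standard eigenproblem */
  EPSSetFromOptions(eps);
  EPSSolve(eps);                                 /* collective over subcomm only */

  EPSDestroy(&eps);
  MatDestroy(&A);
  return 0;
}
</pre>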
<br>
<br>
On Wed, Sep 17, 2008 at 8:25 PM, Yujie <<a href="mailto:recrusader@gmail.com" target="_blank">recrusader@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
You are right :). I am thinking through the whole framework for my code. Thank<br>
you, Lisandro. In Step 3, there are M different SLEPc-based process groups,<br>
which should mean M communication domains for PETSc and SLEPc (I have created a<br>
communication domain for them). Is that okay? Thanks again.<br>
<br>
Regards,<br>
<br>
Yujie<br>
<br>
On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin <<a href="mailto:dalcinl@gmail.com" target="_blank">dalcinl@gmail.com</a>><br>
wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
I bet you have not even tried to actually implement and run this :-).<br>
<br>
This should work. If not, I would consider that a bug. Let us know of<br>
any problem you have.<br>
<br>
<br>
On Wed, Sep 17, 2008 at 7:59 PM, Yujie <<a href="mailto:recrusader@gmail.com" target="_blank">recrusader@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi, Petsc Developer:<br>
<br>
Currently, I am using Slepc for my application. It is based on Petsc.<br>
<br>
Assuming I have a cluster with N nodes.<br>
<br>
My code is structured like this:<br>
<br>
main()<br>
<br>
{<br>
<br>
step 1: Initialize Petsc and Slepc;<br>
<br>
step 2: Use Petsc; (use all N nodes in one process group)<br>
<br>
step 3: Use Slepc; (the N nodes are divided into M process groups; these groups<br>
are independent, but they need to communicate with each other)<br>
<br>
step 4: Use Petsc; (use all N nodes in one process group)<br>
<br>
}<br>
<br>
My method is:<br>
<br>
When using Slepc, MPI_Comm_split() is used to divide the N nodes into M<br>
process groups, i.e. to generate M communication domains. Then,<br>
MPI_Intercomm_create() creates inter-group communicators to handle the<br>
communication between the different M process groups.<br>
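<br>
For concreteness, a sketch of this step (assuming, only for illustration, that<br>
M divides the number of processes evenly, that the groups are contiguous, and<br>
that an intercommunicator is needed just between groups 0 and 1; repeat the<br>
MPI_Intercomm_create() call for every pair of groups that must talk):<br>
<pre>
#include <mpi.h>

void SplitAndConnect(MPI_Comm world, int M,
                     MPI_Comm *subcomm, MPI_Comm *inter)  /* inter may stay MPI_COMM_NULL */
{
  int rank, size, group_size, color;

  MPI_Comm_rank(world, &rank);
  MPI_Comm_size(world, &size);
  group_size = size / M;            /* assumes M divides size evenly */
  color      = rank / group_size;   /* which of the M groups this rank joins */

  /* step 3a: one intracommunicator per group; PETSc/SLEPc objects created
     with *subcomm are collective only within that group */
  MPI_Comm_split(world, color, rank, subcomm);

  /* step 3b: an intercommunicator between groups 0 and 1, using the first
     rank of each group as leader and `world` as the peer communicator */
  *inter = MPI_COMM_NULL;
  if (color == 0)
    MPI_Intercomm_create(*subcomm, 0, world, 1 * group_size, /*tag=*/0, inter);
  else if (color == 1)
    MPI_Intercomm_create(*subcomm, 0, world, 0 * group_size, /*tag=*/0, inter);
}
</pre>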
<br>
I don't know whether this method is okay with respect to Petsc and Slepc,<br>
because Slepc is developed on top of Petsc. In Step 1, Petsc and Slepc are<br>
initialized with all N nodes in a single communication domain, and Petsc in<br>
Step 2 uses this communication domain. However, in Step 3, I need to divide<br>
all N nodes and generate M communication domains. I don't know how Petsc and<br>
Slepc will handle this change. If the method doesn't work, could you give me<br>
some advice? Thanks a lot.<br>
<br>
Regards,<br>
<br>
Yujie<br>
</blockquote>
<br>
<br>
<br>
--<br>
Lisandro Dalcín<br>
---------------<br>
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
Tel/Fax: +54-(0)342-451.1594<br>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
<br>
<br>
--<br>
Lisandro Dalcín<br>
---------------<br>
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
Tel/Fax: +54-(0)342-451.1594<br>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
<br>
<br>
-- <br>
Lisandro Dalcín<br>
---------------<br>
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
Tel/Fax: +54-(0)342-451.1594<br>
<br>
</blockquote>
</div></div></blockquote></div><br></div>