<div dir="ltr"><p>Thank you very much, Lisandro. You are right. It look like a little difficult to "transfer" data from one node to "N" nodes or from N nodes to M nodes. My method is to first send all the data in a node and to redistribute it in "N" or "M" nodes. do you have any idea about it? is it time-consuming? In Petsc, how to support such type of operations? thanks a lot.</p>
Regards,

Yujie

On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin <dalcinl@gmail.com> wrote:
As long as you create your SLEPc objects with the appropriate
communicator (i.e., the one obtained with MPI_Comm_split), then everything
should just work. Of course, you will have to make the appropriate MPI
calls to 'transfer' data from your N-process group to the M groups, and
the other way around to collect results.
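A minimal sketch of that pattern, assuming M groups formed with color = rank % M (the matrix setup is a placeholder):

/* split PETSC_COMM_WORLD into M groups and run one SLEPc
   eigensolver per group; error checking omitted */
#include <slepceps.h>

int main(int argc, char **argv)
{
  MPI_Comm    subcomm;
  PetscMPIInt rank;
  PetscInt    M = 4;        /* number of process groups (assumed) */
  Mat         A;
  EPS         eps;

  SlepcInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* ranks with the same color land in the same subcommunicator */
  MPI_Comm_split(PETSC_COMM_WORLD, rank % M, rank, &subcomm);

  /* every object that should live inside a group is created
     on subcomm, not on PETSC_COMM_WORLD */
  MatCreate(subcomm, &A);
  /* ... set sizes and entries of A, then assemble it ... */

  EPSCreate(subcomm, &eps);
  EPSSetOperators(eps, A, NULL);
  EPSSetFromOptions(eps);
  /* EPSSolve(eps); */

  EPSDestroy(&eps);
  MatDestroy(&A);
  MPI_Comm_free(&subcomm);
  SlepcFinalize();
  return 0;
}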
<div><div class="Wj3C7c"><br>
<br>
On Wed, Sep 17, 2008 at 8:25 PM, Yujie <<a href="mailto:recrusader@gmail.com">recrusader@gmail.com</a>> wrote:<br>
> You are right :). I am thinking about the whole framework for my code.
> Thank you, Lisandro. In step 3, there are M different SLEPc-based process
> groups, which should mean M communicators for PETSc and SLEPc (I have
> created a communicator for them). Is that OK? Thanks again.
>
> Regards,
>
> Yujie
>
> On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin <dalcinl@gmail.com> wrote:
>>
>> I bet you have not even tried to actually implement and run this :-).
>>
>> This should work. If not, I would consider that a bug. Let us know of
>> any problems you have.
>>
>>
>> On Wed, Sep 17, 2008 at 7:59 PM, Yujie <recrusader@gmail.com> wrote:
>> > Hi, PETSc developers:
>> >
>> > Currently, I am using SLEPc for my application. It is based on PETSc.
>> >
>> > Assume I have a cluster with N nodes.
>> >
>> > My code looks like this:
>> >
>> > main()
>> > {
>> >   step 1: initialize PETSc and SLEPc;
>> >   step 2: use PETSc;  (use all N nodes in one process group)
>> >   step 3: use SLEPc;  (the N nodes are divided into M process groups;
>> >                        these groups are independent, but they need to
>> >                        communicate with each other)
>> >   step 4: use PETSc;  (use all N nodes in one process group)
>> > }
>> >
>> > My method is:
>> >
>> > When using SLEPc, MPI_Comm_split() is used to divide the N nodes into
>> > M process groups, i.e., to create M communicators. Then,
>> > MPI_Intercomm_create() builds an inter-group communicator to handle
>> > the communication between the different process groups.
>> >
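A minimal sketch of that intercommunicator step; the two-group split, leader ranks, and tag below are illustrative assumptions:

/* connect two groups produced by MPI_Comm_split so that they can
   exchange data directly through an intercommunicator */
#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Comm subcomm, intercomm;
  int      rank, color, remote_leader;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* two groups: even world ranks get color 0, odd ranks color 1 */
  color = rank % 2;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);

  /* world rank of local rank 0 in the *other* group: with this split,
     group 0's leader is world rank 0 and group 1's is world rank 1 */
  remote_leader = (color == 0) ? 1 : 0;

  MPI_Intercomm_create(subcomm, 0,          /* local comm and leader */
                       MPI_COMM_WORLD,      /* peer communicator     */
                       remote_leader,       /* remote leader's rank  */
                       100,                 /* arbitrary tag         */
                       &intercomm);

  /* point-to-point and collective calls on intercomm now move data
     between the two groups */

  MPI_Comm_free(&intercomm);
  MPI_Comm_free(&subcomm);
  MPI_Finalize();
  return 0;
}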
>> > I don't know whether this method is OK with respect to PETSc and
>> > SLEPc, because SLEPc is developed on top of PETSc. In step 1, PETSc
>> > and SLEPc are initialized with all N nodes in one communicator. PETSc
>> > in step 2 uses this communicator. However, in step 3, I need to divide
>> > all N nodes and create M communicators. I don't know how PETSc and
>> > SLEPc handle this change. If the method doesn't work, could you give
>> > me some advice? Thanks a lot.
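For what it's worth, each PETSc object carries its own communicator, so world-level and group-level objects can coexist in one program after a single PetscInitialize(). A small sketch of the step 2 to step 3 transition, where the sizes and the two-group split are assumptions:

/* objects on PETSC_COMM_WORLD (steps 2 and 4) coexisting with
   objects on a subcommunicator (step 3); error checking omitted */
#include <petscvec.h>

int main(int argc, char **argv)
{
  MPI_Comm    subcomm;
  PetscMPIInt rank;
  Vec         vworld, vgroup;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* step 2: a global object living on all N processes */
  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &vworld);

  /* step 3: split into 2 groups and create group-local objects */
  MPI_Comm_split(PETSC_COMM_WORLD, rank % 2, rank, &subcomm);
  VecCreateMPI(subcomm, PETSC_DECIDE, 50, &vgroup);

  /* both vectors are usable here; each collective call involves
     only the processes of its own communicator */

  VecDestroy(&vgroup);
  MPI_Comm_free(&subcomm);

  /* step 4: back to world-level work with vworld */
  VecDestroy(&vworld);
  PetscFinalize();
  return 0;
}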
>> >
>> > Regards,
>> >
>> > Yujie

--
<div><div class="Wj3C7c">Lisandro Dalcín<br>
---------------<br>
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
Tel/Fax: +54-(0)342-451.1594<br>
<br>
</div></div></blockquote></div><br></div>