[PETSC #17275] Rank of the neighboring processors

Barry Smith bsmith at mcs.anl.gov
Thu Feb 7 16:06:42 CST 2008


On Feb 6, 2008, at 6:38 AM, Tobias Kempe wrote:

> Hi Barry,
>
> I would like to propose an additional PETSc feature that could be
> interesting to some users. In our Lagrangian particle tracking it is
> necessary to send the particle data to one specific processor - the
> new particle "master processor" - which evaluates the equations of
> motion for this particle.
>
> In general we can do this with the MPI_Isend and MPI_Irecv routines.
>
> In the PETSc environment this may be realized by an MPI_vector

    Do you mean a PETSc vector of type VECMPI, that is a parallel Vec?

> with a different
> number of entries on each processor. The entries of this MPI_vector
> would be addressed by their local index and the rank of the processor
> where you wish to set the value.

   Do you mean something like

     VecSetValues(vec,n,ranks,localindices,values)

where ranks[n], localindices[n], and values[n] are arrays of length n?
That is, much like VecSetValues(), but with the "global indices"
replaced by (rank, local index) pairs?
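
   For reference, a minimal sketch of how such (rank, local index)
addressing can already be emulated with the existing API, by
translating the pairs to global indices with VecGetOwnershipRanges()
(the variable names are illustrative, not a proposed interface, and
the calling sequences follow the current API and may differ between
PETSc versions):

     PetscErrorCode  ierr;
     const PetscInt  *ranges;        /* first global index owned by each rank */
     PetscInt        *globalindices,i;

     ierr = VecGetOwnershipRanges(vec,&ranges);CHKERRQ(ierr);
     ierr = PetscMalloc(n*sizeof(PetscInt),&globalindices);CHKERRQ(ierr);
     for (i=0; i<n; i++) {
       /* global index = start of the owner's range + local index */
       globalindices[i] = ranges[ranks[i]] + localindices[i];
     }
     ierr = VecSetValues(vec,n,globalindices,values,INSERT_VALUES);CHKERRQ(ierr);
     ierr = VecAssemblyBegin(vec);CHKERRQ(ierr);
     ierr = VecAssemblyEnd(vec);CHKERRQ(ierr);
     ierr = PetscFree(globalindices);CHKERRQ(ierr);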

   Or do you want something more like a VecScatter operation, where the
"global indices" are replaced by (rank, local index) pairs?
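
   Such a scatter can likewise be sketched with the existing API by
building an index set from the translated global indices. This is
purely illustrative: localvec is a hypothetical sequential work vector
of length n, and the ISCreateGeneral()/VecScatterBegin() calling
sequences vary slightly between PETSc versions:

     IS          from,to;
     VecScatter  scatter;

     /* globalindices[] computed from ranks[]/localindices[] as above */
     ierr = ISCreateGeneral(PETSC_COMM_WORLD,n,globalindices,&from);CHKERRQ(ierr);
     ierr = ISCreateStride(PETSC_COMM_SELF,n,0,1,&to);CHKERRQ(ierr);
     ierr = VecScatterCreate(vec,from,localvec,to,&scatter);CHKERRQ(ierr);
     ierr = VecScatterBegin(scatter,vec,localvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
     ierr = VecScatterEnd(scatter,vec,localvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
     ierr = VecScatterDestroy(scatter);CHKERRQ(ierr);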

    Barry

>
>
> Perhaps it is possible to realize this feature in a future PETSc
> release.
>
> Best regards,
>
> Tobias
>
>
>
>
> On Saturday, 2 February 2008, you wrote:
>>   Tobias,
>>
>>     I have added DAGetNeighbors().  You will need to use petsc-dev;
>> see the developers page link from www.mcs.anl.gov/petsc.
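>>
>> A minimal usage sketch (illustrative; see the DAGetNeighbors()
>> manual page for the exact ordering of the array and the handling
>> of domain boundaries):
>>
>>      DA                da;      /* the distributed array in use */
>>      const PetscMPIInt *ranks;
>>
>>      ierr = DAGetNeighbors(da,&ranks);CHKERRQ(ierr);
>>      /* ranks[] holds the MPI ranks of the neighboring processes,
>>         9 entries in 2d and 27 in 3d; the array belongs to the DA,
>>         so do not free it */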
>>
>>   Please let us know if you have any troubles.
>>
>>    Barry
>>
>> On Feb 1, 2008, at 3:00 AM, Tobias Kempe wrote:
>>> Hi PETSc team,
>>>
>>> we are using distributed arrays for the communication between the
>>> processors in our domain decomposition. Normally the user does not
>>> have to worry about the ranks of the neighboring processors. Now we
>>> want to compute large numbers of particles in viscous fluids. The
>>> problem is that when a particle leaves the local physical domain,
>>> another processor becomes the "master" of this particle. Against
>>> the background of this Lagrangian particle tracking it becomes
>>> necessary to determine the ranks of the neighboring processors in
>>> order to send particle-specific data to these processors.
>>>
>>> Is there some PETSc routine to determine the ranks of the
>>> neighboring processors to the east, west, front, and so on?
>>>
>>> Thank you very much.
>>>
>>> Best regards
>>>
>>> Tobias Kempe
>>>
>>>
>>>
>>>
>>> *****************************************************
>>> Dipl.-Ing. Tobias Kempe
>>> Technische Universität Dresden
>>> Institut für Strömungsmechanik
>>> Tel.:   0351-46336651
>>> Fax:   0351-46335246
>>> E-Mail: tobias.kempe at ism.mw.tu-dresden.de
>>> *****************************************************
>
>
>
> -- 
> *****************************************************
> Dipl.-Ing. Tobias Kempe
> Technische Universität Dresden
> Institut für Strömungsmechanik
> Tel.:   0351-46336651
> Fax:   0351-46335246
> E-Mail: tobias.kempe at ism.mw.tu-dresden.de
> *****************************************************
>



