[petsc-users] A problem with MPI Derived Data Type and 'calloc'
Zhenglun (Alan) Wei
zhenglun.wei at gmail.com
Thu Apr 19 09:49:06 CDT 2012
Dear Dr. Knepley,
      It is very nice to hear that. I will read the manual. Are there
any examples showing these functions?
Thank you so much,
Alan
On 4/19/2012 9:35 AM, Matthew Knepley wrote:
> On Thu, Apr 19, 2012 at 10:18 AM, Zhenglun (Alan) Wei
> <zhenglun.wei at gmail.com> wrote:
>
> "
> TESTVAR ***a, ***b, ***c;
> TESTVAR **aa, **bb, **cc;
> TESTVAR *arraya, *arrayb, *arrayc;
>
> /* contiguous storage for the SIZE x SIZE x SIZE data */
> arraya = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
> arrayb = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
> arrayc = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
>
> /* second layer: SIZE*SIZE row pointers into the contiguous storage */
> aa = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
> bb = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
> cc = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
> for(i = 0; i < SIZE*SIZE; i++) {
>     aa[i] = &arraya[i*SIZE];
>     bb[i] = &arrayb[i*SIZE];
>     cc[i] = &arrayc[i*SIZE];
> }
>
> /* top layer: SIZE plane pointers (only SIZE are needed, not SIZE*SIZE) */
> a = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
> b = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
> c = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
> for(i = 0; i < SIZE; i++) {
>     a[i] = &aa[i*SIZE];
>     b[i] = &bb[i*SIZE];
>     c[i] = &cc[i*SIZE];
> }
> "
> It works. However, I wonder whether there are better ways to handle
> the 3D problem than this kind of 'two-layer' approach.
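> One alternative I can think of is to drop the pointer layers and index
> a single flat array directly, e.g. (just a sketch of what I mean, with
> IDX and 'data' made-up names for illustration):
>
>     #include <stdlib.h>
>
>     /* element (i, j, k), same layout as a[i][j][k] above */
>     #define IDX(i, j, k) (((i) * SIZE + (j)) * SIZE + (k))
>
>     /* one contiguous block, no pointer layers at all */
>     TESTVAR *data = (TESTVAR *) calloc(SIZE * SIZE * SIZE, sizeof(TESTVAR));
>     data[IDX(i, j, k)] = 0.0;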
>
> "What is the reason for not using DMDA?"
> In 2D, I established a 2D array for data communication between
> nodes by using MPI derived data types. They allow me to easily
> communicate both contiguous (i.e. MPI_TYPE_CONTIGUOUS) and
> non-contiguous (i.e. MPI_TYPE_VECTOR) data. That is why I use a
> similar approach in 3D, though an additional data type, i.e.
> MPI_TYPE_INDEXED, needs to be used. Does DMDA provide that kind of
> function or derived data type?
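> For example, in 2D I build a strided type for a non-contiguous column
> roughly like this (a simplified sketch; u, nx, ny and left_rank are
> placeholders for my local array, local grid sizes and the neighbor
> rank):
>
>     #include <mpi.h>
>
>     /* ny entries, one per row, separated by the row length nx */
>     MPI_Datatype column_t;
>     MPI_Type_vector(ny, 1, nx, MPI_DOUBLE, &column_t);
>     MPI_Type_commit(&column_t);
>
>     /* send the leftmost column of u to the left neighbor */
>     MPI_Send(&u[0], 1, column_t, left_rank, 0, MPI_COMM_WORLD);
>
>     MPI_Type_free(&column_t);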
>
>
> It definitely does communication between the local pieces. Do you want
> something else?
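> The usual pattern is roughly the following (a sketch only; 'da' is
> assumed to be an existing 3d DMDA with one degree of freedom, and
> error checking is omitted):
>
>     Vec         g, l;
>     PetscScalar ***x;
>
>     DMCreateGlobalVector(da, &g);
>     /* the local vector includes the ghost region around each subdomain */
>     DMGetLocalVector(da, &l);
>     /* this pair does the neighbor communication for you */
>     DMGlobalToLocalBegin(da, g, INSERT_VALUES, l);
>     DMGlobalToLocalEnd(da, g, INSERT_VALUES, l);
>     DMDAVecGetArray(da, l, &x);   /* x[k][j][i], ghost entries filled */
>     /* ... stencil computation using the ghost values ... */
>     DMDAVecRestoreArray(da, l, &x);
>     DMRestoreLocalVector(da, &l);
>     VecDestroy(&g);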
>
> "2, I have a little question on PETSc about 3D processor
> ordering. Does PETSc have any function giving me the
> nodes/rank number of neighboring nodes/ranks? Are those
> 'Application Ordering' functions applicable for my case?"
>
>
> "What do you mean by neighboring? If it is just stencil
> neighbors, then use a local vector."
> When I send and receive data with MPI_Send and MPI_Recv, I need to
> provide the 'destination' (for MPI_Send, see
> 'http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Send.html')
> and the 'source' (for MPI_Recv, see
> 'http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Recv.html').
> In a 2D problem with a Cartesian grid, 4 processes divide the whole
> domain into 4 sub-domains:
>
>     -----------------
>     |   2   |   3   |
>     -----------------
>     |   0   |   1   |
>     -----------------
>
> Then, for rank 1, the neighboring ranks are '0' and '3', where '0' is
> the left neighbor and '3' is the top neighbor. I wonder if PETSc has
> any function I can call to obtain those neighboring ranks, so that I
> do not need to construct such a function myself.
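> One way to get them with plain MPI would be a Cartesian communicator,
> e.g. (a sketch for the 2x2 layout above; error checking is omitted,
> and which dimension is 'x' or 'y' depends on how you set it up):
>
>     #include <mpi.h>
>
>     int dims[2] = {2, 2}, periods[2] = {0, 0};
>     int lo0, hi0, lo1, hi1;
>     MPI_Comm cart;
>
>     MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
>     /* neighbor ranks in each direction; MPI_PROC_NULL at the boundary */
>     MPI_Cart_shift(cart, 0, 1, &lo0, &hi0);
>     MPI_Cart_shift(cart, 1, 1, &lo1, &hi1);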
>
>
> Yes, it looks like you should just use a DMDA. See the manual section.
>
> Matt
>
> I'm sorry for confusing you.
>
> thanks in advance,
> Alan
>
> On 4/19/2012 4:52 AM, Matthew Knepley wrote:
>> On Wed, Apr 18, 2012 at 3:52 PM, Alan Wei <zhenglun.wei at gmail.com> wrote:
>>
>> Dear all,
>> I hope you're having a nice day. I have a further
>> question on this issue in 3D.
>> 1. Following the idea of Dr. Brown and Dr. Knepley, I
>> finished a 2D test, which works fine. Here, I did it in
>> 3D with
>> "
>> TESTVAR ***a, ***b, ***c;
>> TESTVAR **aa, **bb, **cc;
>> TESTVAR *arraya, *arrayb, *arrayc;
>>
>> /* contiguous storage for the SIZE x SIZE x SIZE data */
>> arraya = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
>> arrayb = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
>> arrayc = (TESTVAR*) calloc(SIZE*SIZE*SIZE, sizeof(TESTVAR));
>>
>> /* second layer: SIZE*SIZE row pointers into the contiguous storage */
>> aa = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
>> bb = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
>> cc = (TESTVAR**) calloc(SIZE*SIZE, sizeof(TESTVAR*));
>> for(i = 0; i < SIZE*SIZE; i++) {
>>     aa[i] = &arraya[i*SIZE];
>>     bb[i] = &arrayb[i*SIZE];
>>     cc[i] = &arrayc[i*SIZE];
>> }
>>
>> /* top layer: SIZE plane pointers (only SIZE are needed, not SIZE*SIZE) */
>> a = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
>> b = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
>> c = (TESTVAR***) calloc(SIZE, sizeof(TESTVAR**));
>> for(i = 0; i < SIZE; i++) {
>>     a[i] = &aa[i*SIZE];
>>     b[i] = &bb[i*SIZE];
>>     c[i] = &cc[i*SIZE];
>> }
>> "
>> It works. However, I wonder whether there are better
>> ways to handle the 3D problem than this kind of
>> 'two-layer' approach.
>>
>>
>> What is the reason for not using DMDA?
>>
>> 2. I have a little question about 3D processor
>> ordering in PETSc. Does PETSc have any function giving me the
>> rank numbers of neighboring nodes/ranks? Are those
>> 'Application Ordering' functions applicable to my case?
>>
>>
>> What do you mean by neighboring? If it is just stencil neighbors,
>> then use a local vector.
>>
>> Matt
>>
>> thanks,
>> Alan
>>
>> On Fri, Apr 13, 2012 at 5:41 PM, Jed Brown
>> <jedbrown at mcs.anl.gov> wrote:
>>
>> On Fri, Apr 13, 2012 at 17:38, Zhenglun (Alan) Wei
>> <zhenglun.wei at gmail.com> wrote:
>>
>> I have a final question on this. Does doing this take a
>> lot of memory? As I understand it, pointers do not occupy
>> much memory and work like an alias, so to my limited
>> knowledge this should not take much extra memory.
>>
>>
>> A pointer takes about as much space as a floating point
>> value, so that array of pointers costs about 1*N compared
>> to the N*N matrix.
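>> In the 3D case above, the overhead is similarly small: taking
>> SIZE = 100 as an example, with 8-byte entries the data array is
>> 100^3 * 8 bytes (about 8 MB), while the two pointer layers add
>> roughly (100^2 + 100) * 8 bytes (well under 0.1 MB).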
>>
>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to
>> which their experiments lead.
>> -- Norbert Wiener
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener