[petsc-users] Problems with VecGetArray under sub-communicators
Zin Lin
zinlin.zinlin at gmail.com
Sun Apr 22 14:34:30 CDT 2018
Thanks a lot!
That clears things up.
I do have one more question though.
The behavior of DMDAVecGetArray is different, then? From ex3.c (below), it
doesn't seem like the raw array needs local indices?
From ex3.c,
DMDAVecGetArray(cda,global,&coors);
...
DMDAGetCorners(cda,&mstart,&nstart,&pstart,&m,&n,&p);
...
for (i=mstart; i<mstart+m; i++) {
  for (j=nstart; j<nstart+n; j++) {
    for (k=pstart; k<pstart+p; k++) {
      ...
      coors[k][j][i].x = ...
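
In other words, the two access patterns seem to be (a minimal sketch,
assuming a 1D DMDA da with dof=1 and a global vector g created from it):

  /* DMDA flavour: the array is addressed with global indices */
  PetscScalar *a;
  PetscInt    xs,xm,i,ns,ne;
  DMDAVecGetArray(da,g,&a);
  DMDAGetCorners(da,&xs,NULL,NULL,&xm,NULL,NULL);
  for (i=xs; i<xs+xm; i++) a[i] = 2.0*i;   /* global index i works here */
  DMDAVecRestoreArray(da,g,&a);

  /* plain Vec flavour: the raw array is 0-based over the local block */
  PetscScalar *raw;
  VecGetArray(g,&raw);
  VecGetOwnershipRange(g,&ns,&ne);
  for (i=ns; i<ne; i++) raw[i-ns] = 2.0*i; /* must shift to a local index */
  VecRestoreArray(g,&raw);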
On Sun, Apr 22, 2018 at 3:14 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
>
> On Sun, 22 Apr 2018 at 20:13, Zin Lin <zinlin.zinlin at gmail.com> wrote:
>
>> Hi
>> I am experiencing possible memory issues with VecGetArray when it is used
>> under sub-communicators (when I split the PETSC_COMM_WORLD to multiple
>> subcomms). The following is a minimal example. Basically, you can check
>> that if you distribute the vector over more than one processor under a
>> subcomm, the array obtained from the VecGetArray call doesn't seem to be
>> correct.
>> Please test it with
>>
>> 1) mpirun -np 1 ./example -ncomms 1
>> 2) mpirun -np 2 ./example -ncomms 2
>> 3) mpirun -np 2 ./example -ncomms 1
>> 4) mpirun -np 4 ./example -ncomms 2
>>
>> You will see that 1) and 2) work as expected, while in 3) and 4) some
>> entries of the array are assigned erroneous values.
>>
>
> First - your call to PetscPrintf contains a mistake. The second instance
> of i is missing.
>
> Second (more important), you cannot access the raw array obtained via
> VecGetArray() using global indices. You must use local indices.
> Change the access to _u[i-ns] and the code should work.
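>
> For example, a minimal sketch of the corrected access, using the same
> variables as in your code below:
>
>   VecGetArray(u,&_u);
>   VecGetOwnershipRange(u,&ns,&ne);
>   for(i=ns;i<ne;i++){
>     tmp = _u[i-ns];  /* _u holds only the local block, so shift i by ns */
>   }
>   VecRestoreArray(u,&_u);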
>
> Also, debugging based on what printf() tells you can be misleading. It's
> much better to use valgrind - see here
>
> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
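>
> For example (a sketch, using the executable and option from your post):
>
>   mpiexec -n 2 valgrind -q --tool=memcheck ./example -ncomms 1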
>
>
> Thanks,
> Dave
>
>
>> Any input will be appreciated.
>> Thanks
>> Zin
>>
>> Minimal Code
>>
>> int main(int argc, char **argv)
>> {
>>
>> MPI_Init(NULL, NULL);
>> PetscInitialize(&argc,&argv,NULL,NULL);
>> int size;
>> MPI_Comm_size(MPI_COMM_WORLD, &size);
>> PetscPrintf(PETSC_COMM_WORLD,"\tThe total number of processors is
>> %d\n",size);
>>
>> //check if the number of processors is divisible by the number of subcomms
>> int ncomms=1, np_per_comm;  //default to one subcomm if -ncomms is not given
>> PetscOptionsGetInt(NULL,"-ncomms",&ncomms,NULL);
>> if(!(size%ncomms==0)) SETERRQ(PETSC_COMM_WORLD,1,"The number of
>> processes must be divisible by ncomms, the number of subcomms.");
>> np_per_comm=size/ncomms;
>> PetscPrintf(PETSC_COMM_WORLD,"\tThe number of subcomms is %d.\n\tEach
>> subcomm has %d processors.\n",ncomms,np_per_comm);
>>
>> //calculate the colour of each subcomm ( = rank of each processor / number of processors in each subcomm )
>> //note once calculated, the colour is fixed throughout the entire run
>> int rank;
>> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>> MPI_Comm subcomm;
>> int colour = rank/np_per_comm;
>> MPI_Comm_split(MPI_COMM_WORLD, colour, rank, &subcomm);
>>
>> Vec u;
>> PetscScalar *_u;
>> int i,ns,ne;
>> PetscScalar tmp;
>>
>> VecCreateMPI(subcomm,PETSC_DECIDE,10,&u);
>> VecSet(u,1.0+PETSC_i*0);
>>
>> VecGetArray(u,&_u);
>> VecGetOwnershipRange(u,&ns,&ne);
>> for(i=ns;i<ne;i++){
>> VecGetValues(u,1,&i,&tmp);
>> PetscPrintf(PETSC_COMM_SELF,"colour %d, u[%d]_array = %g + i * (%g),
>> u[%d]_vec = %g + i * %g \n",colour,i,creal(_u[i]),cimag(_u[i]),
>>
>
> Insert i here
>
>> creal(tmp),cimag(tmp));
>> }
>> VecRestoreArray(u,&_u);
>>
>> PetscFinalize();
>> return 0;
>>
>> }
>>
>>
--
Zin Lin