[petsc-users] Local and global size of IS

Smith, Barry F. bsmith at mcs.anl.gov
Tue Apr 2 13:08:34 CDT 2019



> On Apr 2, 2019, at 5:54 AM, Eda Oktay <eda.oktay at metu.edu.tr> wrote:
> 
> Hi Barry,
> 
> I did what you said, but now I get an error in ISSetPermutation stating that the index set is not a permutation.

   When you create the IS you need to pass in PETSC_COMM_WORLD as the MPI_Comm, not PETSC_COMM_SELF. Otherwise PETSc thinks the IS is sequential, checks whether the local indices by themselves form a permutation, and hence generates the error.
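
   For example, a minimal sketch (idxx being the locally selected slice of the permutation, as in your code below, and nlocal its length; nlocal is an illustrative name, not in your code):

     ierr = ISCreateGeneral(PETSC_COMM_WORLD,nlocal,idxx,PETSC_COPY_VALUES,&is);CHKERRQ(ierr);
     ierr = ISSetPermutation(is);CHKERRQ(ierr);  /* parallel IS: the local indices alone need not form a permutation */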

> I think it is because I selected the indices on each process as you said. Did I understand you wrong? I did the following:
> 
>   PetscMPIInt rank,size;
>   MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>   MPI_Comm_size(PETSC_COMM_WORLD, &size);
>  
>   PetscInt *idxx,ss;
>   ss = siz/size;                              /* block size per process (integer division) */
>   PetscMalloc1(ss,&idxx);
>   if (rank != size-1) {
>     j = 0;
>     for (i=rank*ss; i<(rank+1)*ss; i++) {     /* this rank's block of idx */
>       idxx[j] = idx[i];
>       j++;
>     }
>   } else {
>     j = 0;
>     for (i=rank*ss; i<siz; i++) {             /* last rank also takes the remainder */
>       idxx[j] = idx[i];
>       j++;
>     }
>   }
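
   Note also that with ss = siz/size, the PetscMalloc1(ss,&idxx) above allocates too few entries on the last rank whenever siz is not divisible by size. A sketch of one way to size the slice so it always matches the local row range of vr, and hence of A (assuming vr shares A's row layout, as an eigenvector of A normally does; rstart, rend and nlocal are illustrative names):

     PetscInt rstart,rend,nlocal;
     ierr = VecGetOwnershipRange(vr,&rstart,&rend);CHKERRQ(ierr);  /* this rank's rows of vr */
     nlocal = rend - rstart;
     ierr = PetscMalloc1(nlocal,&idxx);CHKERRQ(ierr);
     for (i=0; i<nlocal; i++) idxx[i] = idx[rstart+i];             /* this rank's slice of the global permutation */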
> 
> On Sun, Mar 31, 2019 at 01:47, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> 
> 
> > On Mar 27, 2019, at 7:33 AM, Eda Oktay via petsc-users <petsc-users at mcs.anl.gov> wrote:
> > 
> > Hello,
> > 
> > I am trying to permute a matrix A (of size 2n*2n) according to a sorted eigenvector vr (of size 2n) in parallel using 2 processors (the number of processors can change).
> 
>     I don't understand your rationale for wanting to permute the matrix rows/columns based on a sorted eigenvector.
> 
> > However, I get an error in the MatPermute line stating that arguments are out of range and that a new nonzero caused a malloc, even though I used MatSetOption.
> > 
> > I discovered that this may be because the local sizes of is (and newIS) do not match the local size of A.
> > 
> > Since I allocate the index set idx according to the global size of the vector vr, the global size of is becomes 2n and its local size is also 2n (I think it should be n, since both A and vr have local size n with two processors). If I change the size of idx, then because of VecGetArrayRead, I think is is created incorrectly.
> > 
> > So, how can I make the global and local sizes of is and newIS match those of A?
> > 
> > Below, you can see the part of my program.
> > 
> > Thanks,
> > 
> > Eda
> >  
> >   ierr = VecGetSize(vr,&siz);CHKERRQ(ierr);
> >   ierr = PetscMalloc1(siz,&idx);CHKERRQ(ierr);
> >   for (i=0; i<siz; i++) idx[i] = i;
> 
>    VecGetArray only gives you access to the local portion of the vector, not the entire vector, so you cannot do the next two lines of code.
> >   ierr = VecGetArrayRead(vr,&avr);CHKERRQ(ierr);
> 
>    The sort routine is sequential; it knows nothing about parallelism, so each process just sorts its own part. Hence this code won't work as expected.
> 
> >   ierr = PetscSortRealWithPermutation(siz,avr,idx);CHKERRQ(ierr); 
> 
>    If you need to sort the parallel vector and get the entire permutation, then you need to do the following:
> 
> 1)   communicate the entire vector to each process, you can use VecScatterCreateToAll() for this
> 2)   call VecGetArray on the new vector (that contains all entries on each process)
> 3)   Call PetscSortRealWithPermutation() on the entire vector on each process
> 4)   select out a piece of the resulting indices idx on each process; for example, with two processes I think rank = 0 would get the first half of the idx and rank = 1 would get the second half (a sketch of all four steps follows below).
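> 
>    A minimal sketch of these four steps (assuming a real-valued PETSc build, so the gathered PetscScalar array can be passed to PetscSortRealWithPermutation(); vall, ctx, avall, rstart, rend, nlocal and idxx are illustrative names, and idx is preset to 0,1,...,siz-1 as in your code):
> 
>      Vec               vall;
>      VecScatter        ctx;
>      const PetscScalar *avall;
>      PetscInt          rstart,rend,nlocal,*idxx;
> 
>      /* 1) gather the whole parallel vector onto every process */
>      ierr = VecScatterCreateToAll(vr,&ctx,&vall);CHKERRQ(ierr);
>      ierr = VecScatterBegin(ctx,vr,vall,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
>      ierr = VecScatterEnd(ctx,vr,vall,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
> 
>      /* 2) access all siz entries of the gathered (sequential) vector */
>      ierr = VecGetArrayRead(vall,&avall);CHKERRQ(ierr);
> 
>      /* 3) sort the full vector on each process; idx becomes the global permutation */
>      ierr = PetscSortRealWithPermutation(siz,avall,idx);CHKERRQ(ierr);
>      ierr = VecRestoreArrayRead(vall,&avall);CHKERRQ(ierr);
> 
>      /* 4) keep only the slice of idx that corresponds to this process's rows */
>      ierr = VecGetOwnershipRange(vr,&rstart,&rend);CHKERRQ(ierr);
>      nlocal = rend - rstart;
>      ierr = PetscMalloc1(nlocal,&idxx);CHKERRQ(ierr);
>      for (i=0; i<nlocal; i++) idxx[i] = idx[rstart+i];
> 
>      ierr = ISCreateGeneral(PETSC_COMM_WORLD,nlocal,idxx,PETSC_COPY_VALUES,&is);CHKERRQ(ierr);
>      ierr = ISSetPermutation(is);CHKERRQ(ierr);
> 
>      ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
>      ierr = VecDestroy(&vall);CHKERRQ(ierr);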
> >                  
> >   ierr = ISCreateGeneral(PETSC_COMM_SELF,siz,idx,PETSC_COPY_VALUES,&is);CHKERRQ(ierr);  
> >   ierr = ISSetPermutation(is);CHKERRQ(ierr);
> 
>   You don't need to duplicate the is, just pass it in twice.
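> 
>    For example (a sketch using your existing variables), the MatPermute call below can simply become:
> 
>      ierr = MatPermute(A,is,is,&PL);CHKERRQ(ierr);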
> 
> >   ierr = ISDuplicate(is,&newIS);CHKERRQ(ierr);
> 
>    You should definitely not need the next line. The A matrix is untouched by the MatPermute() call, so you don't need to change its allocation options.
> 
> >   ierr = MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE);CHKERRQ(ierr);
> >   ierr = MatPermute(A,is,newIS,&PL);CHKERRQ(ierr);  
> 


