[petsc-users] Problem with MPI, MatAXPY and SAME_NONZERO_PATTERN

Barry Smith bsmith at mcs.anl.gov
Fri Mar 27 12:32:57 CDT 2015


> On Mar 27, 2015, at 11:14 AM, Klaus Kaiser <kaiser at igpm.rwth-aachen.de> wrote:
> 
> Hello Barry,
> 
> I think you're right, but this is the setup I have used for more than a year, so I first want to check whether there is an error in my code, and then switch to a newer version with corrected code once I have time for the installation.
> 
> Is the new symbolic MatAXPY for the DIFFERENT_NONZERO_PATTERN flag faster than using SAME_NONZERO_PATTERN?

  No
> 
> Best and Thanks a lot
> 
> Klaus
> 
> On 03/27/2015 04:57 PM, Barry Smith wrote:
>>   Klaus,
>> 
>>     You would really benefit by upgrading to PETSc 3.5.3; we added a much faster symbolic MatAXPY() for the DIFFERENT_NONZERO_PATTERN flag. Plus it is much easier for us to support the newest version.
>> 
>>   Barry
>> 
>>> On Mar 27, 2015, at 9:48 AM, Klaus Kaiser <kaiser at igpm.rwth-aachen.de> wrote:
>>> 
>>> Hello,
>>> 
>>> I am seeing strange behavior in my code concerning the function MatAXPY. I create three different matrices
>>> 
>>>    ierr = MatCreateBAIJ(PETSC_COMM_WORLD, block_size, local_size, local_size, system_size, system_size, 0, d_nnz, 0, o_nnz,&A);
>>>    ierr = MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);
>>>    ierr = MatSetOption(A,MAT_KEEP_NONZERO_PATTERN,PETSC_TRUE);
>>>    ierr = MatSetOption(A,MAT_IGNORE_OFF_PROC_ENTRIES,PETSC_TRUE);
>>> 
>>>    ierr = MatCreateBAIJ(PETSC_COMM_WORLD, block_size, local_size, local_size, system_size, system_size, 0, d_nnz, 0, o_nnz,&At);
>>>    ierr = MatSetOption(At,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);
>>>    ierr = MatSetOption(At,MAT_KEEP_NONZERO_PATTERN,PETSC_TRUE);
>>>    ierr = MatSetOption(At,MAT_IGNORE_OFF_PROC_ENTRIES,PETSC_TRUE);
>>> 
>>>    ierr = MatCreateBAIJ(PETSC_COMM_WORLD, block_size, local_size, local_size, system_size, system_size, 0, d_nnz, 0, o_nnz,&Ah);
>>>    ierr = MatSetOption(Ah,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);
>>>    ierr = MatSetOption(Ah,MAT_KEEP_NONZERO_PATTERN,PETSC_TRUE);
>>>    ierr = MatSetOption(Ah,MAT_IGNORE_OFF_PROC_ENTRIES,PETSC_TRUE);
>>> 
>>> and want to sum these three matrices with different factors. First I fill the matrix A with some values and duplicate the structure of A to the other two matrices:
>>> 
>>>    MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
>>>    MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
>>>    MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,&Ah);
>>>    MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,&At);
>>>    MatAssemblyBegin(Ah,MAT_FINAL_ASSEMBLY);
>>>    MatAssemblyEnd(Ah,MAT_FINAL_ASSEMBLY);
>>>    MatAssemblyBegin(At,MAT_FINAL_ASSEMBLY);
>>>    MatAssemblyEnd(At,MAT_FINAL_ASSEMBLY);
>>> 
>>> After this I fill the matrices At and Ah with some other values, all of which lie within the existing nonzero structure (I also tried simply copying the matrix A). Now, after another MatAssembly, I want to add these matrices in the form A+c*(Ah+d*At):
>>> 
>>>    MatAXPY(Ah,c,At,SAME_NONZERO_PATTERN);
>>>    MatAXPY(A,d,Ah,SAME_NONZERO_PATTERN);
>>> 
>>> When I run the method with MPI on one core, everything works fine. Starting the same method on more cores, the sum of the matrices fails: it seems some values are added correctly while many values are missing. Using DIFFERENT_NONZERO_PATTERN leads to the right behavior in the multi-core case, but is very slow. I checked with a viewer that all matrices have the same nonzero structure, and this is the case.
>>> 
>>> Does anyone know why this fails, or have I made a mistake in my reasoning?
>>> 
>>> I'm currently working with an older PETSc version (Petsc Release Version 3.3.0, Patch 5, Sat Dec  1 15:10:41 CST 2012); I looked through the changelogs up to the current version and did not find any note about MatAXPY or MatAYPX.
>>> 
>>> 
>>> Best and Thanks a lot for your help
>>> 
>>> Klaus
>>> 
> 


