[petsc-users] mpiaijperm and mpiaijcrl
Barry Smith
bsmith at mcs.anl.gov
Thu Aug 30 18:21:07 CDT 2012
You have no reason to use those formats. They are specialized for the Cray X1 vector machine, which no longer exists, and they will not be faster on any other machine. Sorry for the confusion.
Barry
On Aug 30, 2012, at 3:51 PM, Gaetan Kenway <kenway at utias.utoronto.ca> wrote:
> Hi
>
> I was wondering if anyone had any experience with using these new matrix formats in PETSc. I have a block AIJ matrix (with block size 5) and tried converting it to either of these types; it just thrashed memory for 10 minutes before I killed it. For reference, matrix assembly takes 13 seconds. I then tried to construct the matrix directly with "MatCreateMPIAIJPERM" or "MatCreateMPIAIJCRL", but neither of these functions is wrapped in Fortran. Also, the documentation for "MatCreateMPIAIJCRL" is confusing: it says that comm is set to PETSC_COMM_SELF, which defeats the purpose of an MPI matrix.
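> For reference, this is roughly what the conversion attempt looks like in C (which is where the Fortran interface is missing); here A stands for my already-assembled MPIAIJ matrix and error checking is abbreviated:
>
>     Mat Aperm;
>     /* ask PETSc to build a new matrix of the PERM (or CRL) type from A */
>     ierr = MatConvert(A, MATMPIAIJPERM, MAT_INITIAL_MATRIX, &Aperm);CHKERRQ(ierr);
>     /* ... use Aperm wherever A was used ... */
>     ierr = MatDestroy(&Aperm);CHKERRQ(ierr);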
>
> Therefore, two questions:
> 1. Have any users seen a performance benefit of either of these formats (in parallel)?
> 2. How does one actually create such matrices efficiently (from Fortran)? A rough sketch of the kind of setup I have in mind follows below.
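> As a workaround for the missing Fortran wrappers, something like the following is what I would expect to work (C sketch; the local sizes and preallocation numbers are just placeholders), letting the type be picked up from the options database with -mat_type mpiaijperm or -mat_type mpiaijcrl:
>
>     Mat A;
>     ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
>     ierr = MatSetSizes(A, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
>     ierr = MatSetFromOptions(A);CHKERRQ(ierr);   /* picks up -mat_type at run time */
>     ierr = MatMPIAIJSetPreallocation(A, 5, PETSC_NULL, 2, PETSC_NULL);CHKERRQ(ierr);
>     /* then MatSetValues() / MatAssemblyBegin() / MatAssemblyEnd() as usual */
>
> Since MatSetFromOptions() and MatMPIAIJSetPreallocation() are wrapped in Fortran, that path would avoid the unwrapped constructors entirely. Is that the intended way, or are the MatCreateMPIAIJPERM/CRL constructors required?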
>
> On a side note, would it be possible to indicate on the manual pages which functions are not actually wrapped in Fortran? It would make it easier to know which ones you can and can't use, instead of finding out when the linker can't resolve a symbol.
>
> Thanks,
> Gaetan