[petsc-users] MPI_Attr_delete from MatDestroy

Niall Moran nmoran at thphys.nuim.ie
Thu Jul 29 08:45:00 CDT 2010


On 29 Jul 2010, at 14:37, Jose E. Roman wrote:
>> I am getting some errors from a code that uses PETSc and SLEPc to diagonalise matrices in parallel. The code has been working fine on many machines but is giving problems on a Cray XT4 machine. The PETSc sparse matrix type MPIAIJ is used to store the matrix, and the SLEPc Krylov-Schur solver is then used to diagonalise it iteratively. From run to run the dimension of the matrices being diagonalised can vary wildly, from tens or hundreds of rows up to hundreds of millions. Even though the smaller matrices could easily be handled on a single core, I wanted to be able to perform all the calculations in a single run. When running on thousands of processors, SLEPc does not cope well with having more cores than rows in the matrix.
> 
> In slepc-dev I have made a fix for the case when the number of rows assigned to one of the processes is zero. In slepc-3.0.0 I don't see this problem.
> Jose
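
For reference, the setup described in the quoted message corresponds roughly to the sketch below. This is not the original code: the matrix size and assembly details are placeholders, error checking is omitted, and the calls follow the current PETSc/SLEPc interface, which differs slightly from the 3.0.0-era one.

/* Minimal sketch: assemble an MPIAIJ matrix and diagonalise it with
 * SLEPc's Krylov-Schur solver.  Sizes and entries are illustrative. */
#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat      A;
  EPS      eps;
  PetscInt n = 1000;                     /* global dimension (placeholder) */

  SlepcInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetType(A, MATMPIAIJ);              /* parallel sparse AIJ storage */
  MatSetUp(A);
  /* ... insert matrix elements with MatSetValues() here ... */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);
  EPSSetProblemType(eps, EPS_HEP);       /* Hermitian eigenproblem */
  EPSSetType(eps, EPSKRYLOVSCHUR);
  EPSSetFromOptions(eps);                /* picks up -eps_monitor etc. */
  EPSSolve(eps);

  EPSDestroy(&eps);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}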

Thanks for making me aware of this, Jose. I no longer need to create my own communicators. I have also realised that the runs failing with this message were using the -eps_monitor argument, so omitting that argument works around the problem as well. This argument did not cause any issues on other platforms, though.
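
For anyone on an older SLEPc who still hits this, the kind of manual communicator workaround I meant above amounts to something like the following sketch (the helper name is purely illustrative): restrict the solve to at most as many ranks as the matrix has rows, so no process ends up with zero rows.

#include <mpi.h>

/* Return a communicator containing at most 'nrows' ranks of MPI_COMM_WORLD;
 * excluded ranks receive MPI_COMM_NULL and simply skip the solve.
 * Mat/EPS objects would then be created on this communicator instead. */
MPI_Comm solver_comm_for(int nrows)
{
  int rank, color;
  MPI_Comm subcomm;

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  color = (rank < nrows) ? 0 : MPI_UNDEFINED;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);
  return subcomm;
}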

Niall. 


