[petsc-users] MPI_Attr_delete from MatDestroy

Keita Teranishi keita at cray.com
Thu Jul 29 11:11:34 CDT 2010


Niall,

Please report your problem to the system administrator of the machine you are using.  I am interested in why PETSc fails with -eps_monitor on XT.

Thanks,
================================
 Keita Teranishi
 Scientific Library Group
 Cray, Inc.
 keita at cray.com
================================


-----Original Message-----
From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Niall Moran
Sent: Thursday, July 29, 2010 8:45 AM
To: PETSc users list
Subject: Re: [petsc-users] MPI_Attr_delete from MatDestroy

On 29 Jul 2010, at 14:37, Jose E. Roman wrote:
>> I am getting some errors from a code that uses PETSc and SLEPc to diagonalise matrices in parallel. The code has been working fine on many machines but is giving problems on a Cray XT4 machine. The PETSc sparse matrix type MPIAIJ is used to store the matrix, and the SLEPc Krylov-Schur solver is then used to diagonalise it iteratively. From run to run the dimension of the matrices diagonalised can vary wildly, from tens or hundreds of rows to hundreds of millions. Even though the smaller matrices can be handled easily on a single core, I wanted to be able to perform all the calculations from a single run. When running on thousands of processors, SLEPc does not like it when you have more cores than rows in the matrix.
> 
> In slepc-dev I have made a fix for the case when the number of rows assigned to one of the processes is zero. In slepc-3.0.0 I don't see this problem.
> Jose
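
A minimal sketch, in plain C with standard PETSc calls, of the distributed matrix setup described in the quoted messages above (the size N and the preallocation counts are illustrative; error checking with CHKERRQ is omitted). With more MPI ranks than global rows, PETSC_DECIDE leaves some ranks owning zero local rows, which is the case the slepc-dev fix addresses:

#include "petscmat.h"

/* Build an N x N MPIAIJ matrix, letting PETSc choose the row distribution. */
static Mat build_matrix(PetscInt N)
{
  Mat      A;
  PetscInt rstart, rend;

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);   /* PETSc splits the rows */
  MatSetType(A, MATMPIAIJ);
  MatMPIAIJSetPreallocation(A, 5, NULL, 5, NULL);     /* illustrative nonzero counts */
  MatGetOwnershipRange(A, &rstart, &rend);
  /* With more ranks than rows, rend - rstart is 0 on some ranks. */
  /* ... MatSetValues() on the locally owned rows rstart..rend-1 ... */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  return A;
}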

Thanks for making me aware of this, Jose. I no longer need to create my own communicators. I have also realised that the runs failing with this message were all using the -eps_monitor option, so leaving that option out avoids the problem as well (the sketch below shows where it is read); the same option causes no trouble on other platforms.
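
A matching sketch of the Krylov-Schur solve (again plain C, standard SLEPc calls, error checking omitted). EPSSetFromOptions() is where runtime options such as -eps_monitor are picked up, and MatDestroy() at the end is the call the MPI_Attr_delete error in the subject line was reported from. Note that the destroy routines in this SLEPc/PETSc generation take the object itself, while newer releases take its address (e.g. MatDestroy(&A)):

#include "slepceps.h"

/* Solve the standard eigenproblem A x = k x with Krylov-Schur. */
static void solve_eigenproblem(Mat A)
{
  EPS eps;

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);      /* no second matrix: standard problem */
  EPSSetProblemType(eps, EPS_HEP);    /* assuming a Hermitian matrix */
  EPSSetType(eps, EPSKRYLOVSCHUR);
  EPSSetFromOptions(eps);             /* reads -eps_nev, -eps_monitor, ... */
  EPSSolve(eps);
  EPSDestroy(eps);
  MatDestroy(A);                      /* the call the error was reported from */
}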

Niall. 


