[petsc-users] MPI_Attr_delete from MatDestroy

Niall Moran nmoran at thphys.nuim.ie
Thu Jul 29 12:35:05 CDT 2010


Hi Keita,

I am afraid I do not have direct access to the machine in question, nor a line of communication to its administrative team. I do not envisage any issues with -eps_monitor in standard usage. However, I am creating a new communicator and using that. I have encountered problems in the past with options such as -mat_view_matrix when using a communicator other than PETSC_COMM_WORLD. I think the fact that this happens for -eps_monitor on the XT, and not on the other platforms I tried, is down to differences in the MPI implementation rather than any fault with the system.
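
For reference, the kind of thing I am doing looks roughly like the sketch below. This is simplified and not my actual code: the matrix dimension, the preallocation numbers and the Hermitian problem type are just placeholder assumptions, error checking is omitted, and it uses the 2010-era (3.0/3.1) Create/Destroy calling conventions.

#include "slepceps.h"

int main(int argc, char **argv)
{
  MPI_Comm    subcomm;
  Mat         A;
  EPS         eps;
  PetscMPIInt rank;
  PetscInt    nrows = 100;   /* placeholder global dimension */
  int         colour;

  SlepcInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Only as many ranks as there are rows take part; the rest pass
     MPI_UNDEFINED and receive MPI_COMM_NULL from the split. */
  colour = (rank < nrows) ? 0 : MPI_UNDEFINED;
  MPI_Comm_split(PETSC_COMM_WORLD, colour, rank, &subcomm);

  if (subcomm != MPI_COMM_NULL) {
    /* Build the distributed matrix on the sub-communicator, not PETSC_COMM_WORLD */
    MatCreate(subcomm, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, nrows, nrows);
    MatSetType(A, MATMPIAIJ);
    MatMPIAIJSetPreallocation(A, 5, NULL, 5, NULL);  /* placeholder preallocation */
    /* ... MatSetValues calls to fill in the matrix ... */
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    /* Eigensolver created on the same sub-communicator */
    EPSCreate(subcomm, &eps);
    EPSSetOperators(eps, A, NULL);
    EPSSetProblemType(eps, EPS_HEP);  /* assuming a Hermitian problem */
    EPSSetFromOptions(eps);           /* this is where -eps_monitor etc. get picked up */
    EPSSolve(eps);

    EPSDestroy(eps);                  /* 3.0/3.1-era Destroy calls take the object itself */
    MatDestroy(A);
    MPI_Comm_free(&subcomm);
  }

  SlepcFinalize();
  return 0;
}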

Regards,

Niall.  

On 29 Jul 2010, at 17:11, Keita Teranishi wrote:

> Niall,
> 
> Please report your problem to the system administrator of the machine you are using.  I am interested in why PETSc fails with -eps_monitor on XT.
> 
> Thanks,
> ================================
>  Keita Teranishi
>  Scientific Library Group
>  Cray, Inc.
>  keita at cray.com
> ================================
> 
> 
> -----Original Message-----
> From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Niall Moran
> Sent: Thursday, July 29, 2010 8:45 AM
> To: PETSc users list
> Subject: Re: [petsc-users] MPI_Attr_delete from MatDestroy
> 
> On 29 Jul 2010, at 14:37, Jose E. Roman wrote:
>>> I am getting some errors from a code that uses PETSc and SLEPc to diagonalise matrices in parallel. The code has been working fine on many machines but is giving problems on a Cray XT4 machine. The PETSc sparse matrix type MPIAIJ is used to store the matrix, and then the SLEPc Krylov-Schur solver is used to iteratively diagonalise it. From run to run the dimension of the matrices diagonalised can vary wildly, from tens or hundreds of rows to hundreds of millions of rows. Even though the smaller matrices can be computed easily on a single core, I wanted to be able to perform all the calculations in a single run. When running on thousands of processors, SLEPc does not like it when there are more cores than rows in the matrix.
>> 
>> In slepc-dev I have made a fix for the case when the number of rows assigned to one of the processes is zero. In slepc-3.0.0 I don't see this problem.
>> Jose
> 
> Thanks for making me aware of this, Jose. I no longer need to create my own communicators. I have also realised that the runs failing with this message were using the -eps_monitor argument, so not using this argument solves the problem as well. This argument did not make any difference on the other platforms, though.
> 
> Niall. 


