[petsc-dev] MUMPS and 64 bit indices

Barry Smith bsmith at mcs.anl.gov
Tue Dec 16 21:12:10 CST 2014


  I've run the 64 bit version under valgrind and it detected no problems. 

  I also ran with -mat_umfpack_prl 10 to get additional information, and UMFPACK appears to be using a different ordering with 64 bit indices (I don't know why yet; I need to look at the code in more detail). My current conclusion is that there is nothing "buggy" in the code: with 64 bit indices it simply selects a different ordering, and that ordering happens to produce zero pivots for this matrix. This goes back to your question about sparse factorizations: depending on the ordering they use, they can always fail with zero pivots for some matrices, unless they do very aggressive numerical pivoting, which can slow them down a great deal.
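
   If the ordering really is the culprit, one experiment would be to force the same strategy/ordering in both builds from the command line, for example

      ./test -pc_factor_mat_solver_package umfpack -mat_umfpack_strategy SYMMETRIC -mat_umfpack_ordering AMD

   (a sketch only; exactly which -mat_umfpack_* options are exposed depends on the PETSc version, so check the -help output from the UMFPACK interface).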

   I will try to see why it is using a different ordering in the 64 bit case.

  Barry

> On Dec 16, 2014, at 2:52 PM, Garth N. Wells <gnw20 at cam.ac.uk> wrote:
> 
> Attached is test code and matrix/vector files to reproduce a problem I'm seeing with Umfpack and 64-bit indices; the same code works fine with Umfpack and 32-bit indices. The matrix, which comes from a mixed formulation of the Poisson equation, is indefinite; I've computed all of its eigenvalues and they are nonzero.
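> 
> For reference, the driver is essentially the following (a minimal sketch; the file names A.dat and b.dat are stand-ins for the attached binary files, and error checking with CHKERRQ is omitted):
> 
>   #include <petscksp.h>
> 
>   int main(int argc, char **argv)
>   {
>     Mat         A;
>     Vec         b, x;
>     KSP         ksp;
>     PC          pc;
>     PetscViewer viewer;
>     PetscReal   norm;
> 
>     PetscInitialize(&argc, &argv, NULL, NULL);
> 
>     /* Load the matrix and right-hand side from PETSc binary files */
>     PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.dat", FILE_MODE_READ, &viewer);
>     MatCreate(PETSC_COMM_WORLD, &A);
>     MatLoad(A, viewer);
>     PetscViewerDestroy(&viewer);
>     PetscViewerBinaryOpen(PETSC_COMM_WORLD, "b.dat", FILE_MODE_READ, &viewer);
>     VecCreate(PETSC_COMM_WORLD, &b);
>     VecLoad(b, viewer);
>     PetscViewerDestroy(&viewer);
>     VecDuplicate(b, &x);
> 
>     /* Direct solve; the factorization package is selected at run time
>        via -pc_factor_mat_solver_package */
>     KSPCreate(PETSC_COMM_WORLD, &ksp);
>     KSPSetOperators(ksp, A, A);
>     KSPSetType(ksp, KSPPREONLY);
>     KSPGetPC(ksp, &pc);
>     PCSetType(pc, PCLU);
>     KSPSetFromOptions(ksp);
>     KSPSolve(ksp, b, x);
> 
>     VecNorm(x, NORM_2, &norm);
>     PetscPrintf(PETSC_COMM_WORLD, "Solution L2 norm: %g\n", (double)norm);
> 
>     KSPDestroy(&ksp);
>     MatDestroy(&A);
>     VecDestroy(&b);
>     VecDestroy(&x);
>     PetscFinalize();
>     return 0;
>   }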
> 
> Running the test program with:
> 
>   ./test -pc_factor_mat_solver_package umfpack
> 
> the output is
> 
>   Solution L2 norm: -nan
>   Solution L2 norm should be: 2.96824
> 
> Running the test program with
> 
>   ./test -pc_factor_mat_solver_package superlu_dist
> 
> the output is
> 
>   Solution L2 norm: 2.968238
>   Solution L2 norm should be: 2.96824
> 
> My PETSc configure line is
> 
>   --with-64-bit-indices --download-hypre=yes --download-suitesparse=1 --download-parmetis --download-metis --with-debugging=no COPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" --prefix=/home/garth/local/packages/petsc-64 --download-ptscotch --download-superlu_dist
> 
> Git log is 7fbfed63fbc5325cf1eefdb967627c4417715c46
> 
> (message copied to petsc-maint so the attachment doesn't get scrubbed)
> 
> Garth
> 
> On Mon, 15 Dec, 2014 at 9:49 PM, Garth N. Wells <gnw20 at cam.ac.uk> wrote:
>>> On 15 Dec 2014, at 21:42, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>> On Dec 15, 2014, at 3:16 PM, Garth N. Wells <gnw20 at cam.ac.uk> wrote:
>>>> It's possible to configure PETSc with the options
>>>>  --with-64-bit-indices --download-mumps
>>>> and compile and run, but it doesn't look like the MUMPS interface supports 64 bit indices. Should the above combination throw an error at configure time?
>>>  Yes, there is commented-out code in the consistency check; I cannot remember why it was commented out.
>>> #      if self.double and not self.scalartypes.precision.lower() == 'double':
>>> #        raise RuntimeError('Cannot use '+self.name+' withOUT double precision numbers, it is not coded for this capability')
>>> #      if not self.complex and self.scalartypes.scalartype.lower() == 'complex':
>>> #        raise RuntimeError('Cannot use '+self.name+' with complex numbers it is not coded for this capability')
>>> #      if self.libraryOptions.integerSize == 64 and self.requires32bitint:
>>> #        raise RuntimeError('Cannot use '+self.name+' with 64 bit integers, it is not coded for this capability')
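>>>  Re-enabling that last pair of lines (with requires32bitint set appropriately by the MUMPS module) should, I believe, produce exactly the configure-time error you are asking about:
>>> 
>>>       if self.libraryOptions.integerSize == 64 and self.requires32bitint:
>>>         raise RuntimeError('Cannot use '+self.name+' with 64 bit integers, it is not coded for this capability')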
>>>> (On a side note, SuperLU_dist and Umfpack are flaky with 64 bit indices, which seems largely related to calling ParMETIS/METIS).
>>>  Hmm, never heard such reports. Are you using --download-parmetis and --download-metis? You should be. Is it a reproducible problem?  We'd like to fix it.
>> Yes, I’m using --download-parmetis and --download-metis.
>> Yes, it’s reproducible for a given matrix, but it’s hard to predict which matrices will be a problem, and for which solver (SuperLU_dist/Umfpack). I’ll try to package up a couple of examples that fail or give wrong results.
>> Garth
>>>  Barry
>>>> Garth
> <petsc_lu_test.tgz>



