[petsc-users] Error reported by MUMPS in numerical factorization phase
Danyang Su
danyang.su at gmail.com
Wed Dec 2 11:17:53 CST 2015
Hi Hong,
It's not easy to run in debugging mode as the cluster does not have a
debug build of PETSc installed. Restarting the case from the crashing
time does not reproduce the problem, so to catch this error I would need
to rerun the simulation from the beginning, which takes hours on the cluster.
Do you mean I need to redo the symbolic factorization? For now, I only
set up the factorization once, at the first timestep, and then reuse it.
Part of the code is shown below.
! Select MUMPS and fetch the factored matrix only once, at the first
! timestep; the factorization is reused for all later timesteps.
if (timestep == 1) then
   call PCFactorSetMatSolverPackage(pc_flow,MATSOLVERMUMPS,ierr)
   CHKERRQ(ierr)
   call PCFactorSetUpMatSolverPackage(pc_flow,ierr)
   CHKERRQ(ierr)
   call PCFactorGetMatrix(pc_flow,a_flow_j,ierr)
   CHKERRQ(ierr)
end if

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)
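
In case it is useful, here is a minimal sketch of how such a failure could
be caught right after the solve instead of only showing up in the MUMPS
error message. reason, infog1 and infog2 are new local variables (not in
the code above), and I am assuming the Fortran bindings for
KSPGetConvergedReason and MatMumpsGetInfog are available in this PETSc build:

! declarations go with the other local variables of the routine
KSPConvergedReason reason
PetscInt infog1, infog2

! a negative converged reason means the solve failed, including a
! failed factorization inside the preconditioner
call KSPGetConvergedReason(ksp_flow,reason,ierr)
CHKERRQ(ierr)
if (reason < 0) then
   ! INFOG(1) < 0 is the global MUMPS error code; INFOG(2) carries
   ! the additional detail reported with it
   call MatMumpsGetInfog(a_flow_j,1,infog1,ierr)
   CHKERRQ(ierr)
   call MatMumpsGetInfog(a_flow_j,2,infog2,ierr)
   CHKERRQ(ierr)
   write(*,*) 'MUMPS INFOG(1) = ',infog1,', INFOG(2) = ',infog2
end if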
Thanks,
Danyang
On 15-12-02 08:39 AM, Hong wrote:
> Danyang :
>
> My code fails due to an error in an external library. It works fine
> for the first 2000+ timesteps but then crashes.
>
> [4]PETSC ERROR: Error in external library
> [4]PETSC ERROR: Error reported by MUMPS in numerical factorization
> phase: INFO(1)=-1, INFO(2)=0
>
> This simply says an error occurred in proc[0] during numerical
> factorization, which usually means it either hit a zero pivot or ran
> out of memory. Since it happens at a later timestep, where I guess you
> reuse the matrix factor, a zero pivot might be the problem.
> Is it possible to run it in debugging mode? That way, MUMPS would dump
> out more information.
>
>
> Then I tried the same simulation on another machine using the same
> number of processors, and it does not fail.
>
> Does this machine have more memory?
>
> Hong
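
For reference, both of the causes Hong mentions above can be nudged from
the PETSc side through the MUMPS control parameters. A sketch only,
assuming the factored-matrix handle a_flow_j from PCFactorGetMatrix and
that MatMumpsSetIcntl/MatMumpsSetCntl are available in this PETSc build;
the calls would go after PCFactorGetMatrix and before the factorization
is performed:

! ICNTL(14): percentage increase of the estimated working space,
! useful if the factorization is running out of memory
call MatMumpsSetIcntl(a_flow_j,14,50,ierr)
CHKERRQ(ierr)
! CNTL(1): relative pivoting threshold; raising it makes the
! factorization more robust against small or near-zero pivots
call MatMumpsSetCntl(a_flow_j,1,0.1d0,ierr)
CHKERRQ(ierr)

The same controls can also be set at run time with -mat_mumps_icntl_14
and -mat_mumps_cntl_1, and -mat_mumps_icntl_4 raises the MUMPS output
level so that more diagnostic information is printed when it fails.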