[petsc-users] MatAssemblyEnd taking too long

Jed Brown jed at jedbrown.org
Wed Aug 19 19:42:27 CDT 2020


Can you share a couple of example stack traces from that debugging? And about how many nonzeros per row does the matrix have?
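
For the nonzero counts, here is a minimal sketch (the function name ReportMatInfo is just illustrative) that prints the assembled matrix's nonzero statistics; MatGetInfo with MAT_GLOBAL_SUM sums the counters over all ranks:

    #include <petscmat.h>

    /* Report nonzero statistics for an already-assembled matrix A. */
    PetscErrorCode ReportMatInfo(Mat A)
    {
      PetscErrorCode ierr;
      MatInfo        info;
      PetscInt       m;

      PetscFunctionBeginUser;
      ierr = MatGetSize(A, &m, NULL);CHKERRQ(ierr);
      ierr = MatGetInfo(A, MAT_GLOBAL_SUM, &info);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD,
                         "rows %D, nz used %.0f (avg %.1f per row), "
                         "nz allocated %.0f, mallocs during MatSetValues %.0f\n",
                         m, info.nz_used, info.nz_used/(double)m,
                         info.nz_allocated, info.mallocs);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

A large mallocs count points at missing preallocation. For text stack traces of the hung processes, attaching lldb to one of them (lldb -p <pid>, then bt) also works.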

Manav Bhatia <bhatiamanav at gmail.com> writes:

> Hi, 
>
>    I have an application that uses the KSP solver and I am looking at the performance with increasing system size. I am currently running on MacBook Pro with 32 GB memory and Petsc obtained from GitHub (commit df0e43005dbe6ff47eff22a32b336a6c37d02c3a). 
>
>    The application runs without any problems up to about 2e6 DoFs using gamg. 
>
>    However, when I try a larger system, in this case 5.4e6 DoFs, the application hangs for an hour and I have to kill the MPI processes. 
>
>    I used Xcode Instruments to profile the 8 MPI processes and have attached a screenshot of the recorded results from each process. All processes are stuck inside MatAssemblyEnd, but at different function calls. 
>
>    I am not sure how to debug this issue and would greatly appreciate any guidance. 
>
>    For reference, I am calling PETSc with the following options:
>   -ksp_type gmres -pc_type gamg -mat_block_size 3 -mg_levels_ksp_max_it 4 -ksp_monitor -ksp_converged_reason
>
> Regards,
> Manav
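
Stalls like this in MatAssemblyEnd are most often caused by missing or inadequate preallocation, which forces repeated reallocations during insertion and a large off-process stash to communicate at assembly. Below is a minimal sketch of setting up the matrix with explicit preallocation; the local size (nlocal = 3000) and the per-row estimates (81 diagonal / 27 off-diagonal, i.e. a 27-point stencil times block size 3) are placeholders to be replaced from your actual mesh connectivity:

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;
      Mat            A;
      PetscInt       nlocal = 3000;   /* placeholder local row count */

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
      ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);
      ierr = MatSetFromOptions(A);CHKERRQ(ierr);
      /* Illustrative per-row estimates; tighten them from the mesh. */
      ierr = MatSeqAIJSetPreallocation(A, 81, NULL);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(A, 81, NULL, 27, NULL);CHKERRQ(ierr);
      /* Error out instead of silently mallocing if the estimate is exceeded. */
      ierr = MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
      /* ... element loop calling MatSetValues() ... */
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

Running with -info and grepping for "malloc" reports the same assembly counters without any code changes.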

