[petsc-users] MatAssemblyEnd taking too long
Manav Bhatia
bhatiamanav at gmail.com
Wed Aug 19 19:50:33 CDT 2020
Thanks for the follow-up, Jed.
> On Aug 19, 2020, at 7:42 PM, Jed Brown <jed at jedbrown.org> wrote:
>
> Can you share a couple example stack traces from that debugging?
Do you mean similar screenshots at different system sizes, or in a different format?
> About how many nonzeros per row?
This is a 3D elasticity run with Hex8 elements, so each row should have 81 non-zero entries (27 neighboring nodes x 3 DoFs per node), although I have not verified that yet (I will do so now). Is there a command-line option that will print this for the matrix? Although, on second thought, that will not be printed unless the assembly routine has finished.
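For my own notes, here is a rough, untested sketch of how the counts could be queried from the code once assembly does finish; MatGetInfo() looks like the relevant call, and I believe -mat_view ::ascii_info prints similar information from the command line. The helper name below is just a placeholder:

#include <petscmat.h>

/* Rough sketch: report global nonzero counts for a matrix.
 * MatGetInfo() only returns meaningful numbers after
 * MatAssemblyBegin()/MatAssemblyEnd() have completed. */
PetscErrorCode ReportNonzeros(Mat A)
{
  PetscErrorCode ierr;
  MatInfo        info;
  PetscInt       nrows, ncols;

  ierr = MatGetSize(A, &nrows, &ncols);CHKERRQ(ierr);
  ierr = MatGetInfo(A, MAT_GLOBAL_SUM, &info);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,
                     "rows=%D  nonzeros used=%g  allocated=%g  mallocs during assembly=%g\n",
                     nrows, (double)info.nz_used, (double)info.nz_allocated,
                     (double)info.mallocs);CHKERRQ(ierr);
  return 0;
}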
>
> Manav Bhatia <bhatiamanav at gmail.com> writes:
>
>> Hi,
>>
>> I have an application that uses the KSP solver and I am looking at its performance with increasing system size. I am currently running on a MacBook Pro with 32 GB of memory and PETSc obtained from GitHub (commit df0e43005dbe6ff47eff22a32b336a6c37d02c3a).
>>
>> The application runs fine up to about 2e6 DoFs with gamg.
>>
>> However, when I try a larger system size, in this case with 5.4e6 DoFs, the application hangs for an hour and I have to kill the MPI processes.
>>
>> I used Xcode Instruments to profile the 8 MPI processes and have attached a screenshot of the recorded results from each process. All processes are stuck inside MatAssemblyEnd, but at different function calls.
>>
>> I am not sure how to debug this issue, and would greatly appreciate any guidance.
>>
>> For reference, I am calling PETSc with the following options:
>> -ksp_type gmres -pc_type gamg -mat_block_size 3 -mg_levels_ksp_max_it 4 -ksp_monitor -ksp_converged_reason
>>
>> Regards,
>> Manav