[petsc-users] MatMPIAIJSetPreallocation deadlocks

Mark Adams mfadams at lbl.gov
Sat Jul 19 17:40:41 CDT 2025


Valgrind is a good place to start, but it can be hard to use ... so if your
run is already Valgrind-clean, or you just don't want to bother, DDT is useful.
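
If you do try Valgrind first, a minimal sketch (assuming srun, an executable
named ./myprogram, and one memcheck log per rank via the %p PID expansion):

  srun -n 4 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.%p.log ./myprogram

Memory errors that show up on only some ranks are a classic way for one rank
to diverge while the rest sit waiting in a collective.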

If you have DDT you can simply run interactively, hit "pause all" once the
job hangs, and poke around and collect some stack traces (at least one),
which is super useful.
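
For example (a sketch, assuming DDT's express launch works with srun on your
system), you can start the interactive session with:

  ddt srun -n 4 ./myprogram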

You can run non-interactively with something like:

ddt --offline --output=ddt-output.html --snapshot-interval=<MINUTES> srun -n 4 ./myprogram

(Note that ddt wraps the launcher here, not the other way around; putting ddt
after srun would start one independent DDT instance per rank.)

This should periodically dump stack traces, including stack variables, for
all processes into ddt-output.html in a readable form.
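
If you don't have DDT at all, a rough fallback (a sketch, assuming you can
log in to a compute node and the binary is named myprogram) is to attach gdb
to each hung rank and dump a backtrace:

  # one backtrace file per rank-local PID; gdb detaches on exit
  for pid in $(pgrep myprogram); do
      gdb -p "$pid" -batch -ex "thread apply all bt" > bt.$pid.txt
  done

Comparing the traces across ranks usually shows which ranks are stuck in a
collective that the others never entered.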

Mark

On Fri, Jul 18, 2025 at 4:09 PM Junchao Zhang <junchao.zhang at gmail.com>
wrote:

> Do you have any chance to collect stack traces of all the MPI processes?
>
> --Junchao Zhang
>
>
> On Fri, Jul 18, 2025 at 12:20 PM Edoardo alinovi <
> edoardo.alinovi at gmail.com> wrote:
>
>> Hello Petsc friends,
>>
>> Hope you are all doing well.
>>
>> Today I was running a simulation (27 million cells on 64 cores) and I came
>> across an issue: I am deadlocking somewhere in *MatMPIAIJSetPreallocation*.
>> Do you have any clue about the reason for this? Any suggestions to track
>> this down?
>>
>> Many thanks,
>>
>> Edo
>>
>