<div dir="ltr">Ok, good to know. I'll update to latest Petsc, and do some testing, and let you know either way.<div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 2, 2023 at 6:31 AM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Vector communication used a different code path in 3.13. If you have a reproducer with current PETSc, I'll have a look. Here's a demo that the solution is bitwise identical (the sha256sum is the same every time you run it, though it might be different on your computer from mine due to compiler version and flags).<br>

$ mpiexec -n 8 ompi/tests/snes/tutorials/ex5 -da_refine 3 -snes_monitor -snes_view_solution binary && sha256sum binaryoutput
  0 SNES Function norm 1.265943996096e+00
  1 SNES Function norm 2.831564838232e-02
  2 SNES Function norm 4.456686729809e-04
  3 SNES Function norm 1.206531765776e-07
  4 SNES Function norm 1.740255643596e-12
5410f84e91a9db3a74a2ac33603bbbb1fb48e7eaf739614192cfd53344517986  binaryoutput
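
For comparison, a minimal sketch (not part of the demo above) of doing the same check in code instead of hashing the binary output. It assumes a hypothetical, fully configured SNES `snes` and initial guess `x0`, runs the solve twice from identical inputs, and compares the two solutions entrywise with VecEqual.

#include <petscsnes.h>

/* Sketch: solve the same problem twice from the same initial guess and report
   whether the two solutions agree bitwise.  `snes` and `x0` are assumed to be
   set up elsewhere (e.g., as in snes/tutorials/ex5). */
static PetscErrorCode CheckBitwiseReproducibility(SNES snes, Vec x0)
{
  Vec       x1, x2;
  PetscBool same;

  PetscFunctionBeginUser;
  PetscCall(VecDuplicate(x0, &x1));
  PetscCall(VecDuplicate(x0, &x2));

  PetscCall(VecCopy(x0, x1));
  PetscCall(SNESSolve(snes, NULL, x1)); /* first solve */

  PetscCall(VecCopy(x0, x2));
  PetscCall(SNESSolve(snes, NULL, x2)); /* second solve, identical inputs */

  PetscCall(VecEqual(x1, x2, &same));   /* entrywise bitwise comparison */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "solutions bitwise identical: %s\n",
                        same ? "yes" : "no"));

  PetscCall(VecDestroy(&x1));
  PetscCall(VecDestroy(&x2));
  PetscFunctionReturn(PETSC_SUCCESS);
}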

Mark McClure <mark@resfrac.com> writes:

> In the typical FD implementation, you only set local rows, but with FE and
> sometimes FV, you also create values that need to be communicated and
> summed on other processors.
> Makes sense.
>
> Anyway, in this case, I am certain that I am giving the solver bitwise
> identical matrices from each process. I am not using a preconditioner; I am
> using BCGS, with PETSc version 3.13.3.
>
> So then, how can I make sure that I am "using an MPI that follows the
> suggestion for implementers about determinism"? I am using MPICH version
> 3.3a2 and didn't do anything special when installing it. Does that sound OK?
> If so, I could upgrade to the latest PETSc, try again, and, if the problem
> persists, provide a reproduction scenario.
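
For reference, a minimal sketch of selecting the solver configuration described above explicitly, assuming a hypothetical KSP handle `ksp` (e.g. obtained from SNESGetKSP) inside a function returning PetscErrorCode; the same choice can be made at run time with -ksp_type bcgs -pc_type none.

PC pc;

/* BiCGStab with no preconditioner (ksp is assumed to exist) */
PetscCall(KSPSetType(ksp, KSPBCGS));
PetscCall(KSPGetPC(ksp, &pc));
PetscCall(PCSetType(pc, PCNONE));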
>
> On Sat, Apr 1, 2023 at 9:53 PM Jed Brown <jed@jedbrown.org> wrote:
>
>> Mark McClure <mark@resfrac.com> writes:
>>
>> > Thank you, I will try BCGSL.
>> >
>> > And good to know that this is worth pursuing, and that it is possible.
>> > Step 1, I guess I should upgrade to the latest PETSc release.
>> >
>> > How can I make sure that I am "using an MPI that follows the suggestion
>> > for implementers about determinism"? I am using MPICH version 3.3a2.
>> >
>> > I am pretty sure that I'm assembling the same matrix every time, but I'm
>> > not sure how it would depend on 'how you do the communication'. Each
>> > process is doing a series of MatSetValues calls with INSERT_VALUES,
>> > assembling the matrix by rows. My understanding is that this process
>> > is deterministic.
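
A minimal sketch of the assembly pattern just described, with hypothetical placeholder names: each rank inserts only rows it owns, so no off-process contributions are generated during assembly.

#include <petscmat.h>

/* Sketch of row-wise assembly with INSERT_VALUES: each rank sets only the
   rows it owns.  `ncols`, `cols`, and `vals` are hypothetical placeholders
   that the application would fill from its FD stencil for global row i. */
static PetscErrorCode AssembleLocalRows(Mat A)
{
  PetscInt rstart, rend;

  PetscFunctionBeginUser;
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (PetscInt i = rstart; i < rend; i++) {
    PetscInt    ncols = 0;
    PetscInt    cols[5];
    PetscScalar vals[5];
    /* ... fill ncols, cols[], vals[] for row i from the stencil ... */
    PetscCall(MatSetValues(A, 1, &i, ncols, cols, vals, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(PETSC_SUCCESS);
}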
>>
>> In the typical FD implementation, you only set local rows, but with FE and
>> sometimes FV, you also create values that need to be communicated and
>> summed on other processors.
>>
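
To illustrate the contrast, a minimal sketch of FE-style assembly (all names hypothetical): element contributions may target rows owned by other ranks, so they are added with ADD_VALUES and summed during MatAssemblyBegin/End, and the order in which those off-process contributions are combined is where the MPI implementation's determinism matters.

#include <petscmat.h>

/* Sketch of FE-style assembly with ADD_VALUES: element matrices can touch
   rows owned by other ranks; those contributions are communicated and summed
   during assembly.  `nelem`, `ndof`, `edofs`, and `emat` are hypothetical
   placeholders for the element loop. */
static PetscErrorCode AssembleElements(Mat A, PetscInt nelem, PetscInt ndof,
                                       const PetscInt *edofs[], const PetscScalar *emat[])
{
  PetscFunctionBeginUser;
  for (PetscInt e = 0; e < nelem; e++) {
    /* edofs[e] may contain global indices owned by other processes */
    PetscCall(MatSetValues(A, ndof, edofs[e], ndof, edofs[e], emat[e], ADD_VALUES));
  }
  /* Off-process contributions are communicated and summed here */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(PETSC_SUCCESS);
}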