[petsc-users] Is OpenMP still available for PETSc?
Matthew Knepley
knepley at gmail.com
Mon Jul 3 09:29:15 CDT 2017
On Mon, Jul 3, 2017 at 9:23 AM, Damian Kaliszan <damian at man.poznan.pl>
wrote:
> Hi,
>
>
> >> 1) You can call Bcast on PETSC_COMM_WORLD
>
> To be honest, I can't find a Bcast method on petsc4py.PETSc.Comm (I'm
> using petsc4py).
>
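
Bcast does not appear to be wrapped on petsc4py.PETSc.Comm itself, but you
can convert the PETSc communicator to an mpi4py communicator, which does
provide it. A minimal sketch, assuming mpi4py is installed alongside
petsc4py (the payload dictionary is just an illustrative placeholder):

  from petsc4py import PETSc

  # Convert the PETSc world communicator into an mpi4py communicator.
  comm = PETSc.COMM_WORLD.tompi4py()

  # Broadcast a Python object from rank 0 to every rank; the lowercase
  # bcast pickles arbitrary objects (use Bcast for buffer-like arrays).
  data = {"max_it": 100000} if comm.rank == 0 else None
  data = comm.bcast(data, root=0)
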
> >> 2) If you are using WORLD, the number of iterates will be the same on
> each process since iteration is collective.
>
> Yes, this is how it should be. But what I noticed is that for different
> --cpus-per-task values in the Slurm script I get a different number of
> solver iterations, which in turn affects the timings. The disparity is
> huge. For example, for some configurations with --cpus-per-task=1 I get
> 900 iterations, while with --cpus-per-task=2 I get 100,000, which is the
> maximum iteration count I set when setting the solver tolerances.
>
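
A run that stops at exactly the iteration cap has almost certainly failed
to converge rather than converged slowly. For reference, the cap in
petsc4py is the max_it argument of KSP.setTolerances; a minimal sketch
(the rtol and atol values are illustrative placeholders, not the actual
settings from the run above):

  from petsc4py import PETSc

  ksp = PETSc.KSP().create(comm=PETSc.COMM_WORLD)
  # Placeholder tolerances; max_it=100000 matches the cap described
  # above. Reaching max_it yields a negative (diverged) converged reason.
  ksp.setTolerances(rtol=1e-8, atol=1e-50, max_it=100000)
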
I am trying to understand what you are saying. Do you mean that you make two
different runs and get a different number of iterates from the same KSP? In
order to answer questions about convergence, we need to see the output of

  -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

for all cases.
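
If the solver is driven from petsc4py, those options are picked up from the
command line as long as the script forwards sys.argv to PETSc before the
first import of PETSc. A minimal sketch, assuming a script named solve.py
(the name is just for illustration):

  # solve.py
  import sys
  import petsc4py
  petsc4py.init(sys.argv)      # forward command-line options to PETSc
  from petsc4py import PETSc   # import PETSc only after init()

and then, e.g. under Slurm:

  srun python solve.py -ksp_view -ksp_monitor_true_residual \
      -ksp_converged_reason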
Thanks,
Matt
> Best,
> Damian
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
http://www.caam.rice.edu/~mk51/