[petsc-dev] WTF

Jeff Hammond jeff.science at gmail.com
Wed Jun 29 22:06:24 CDT 2016


On Wednesday, June 29, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>    Who are these people and why do they have this webpage?
>
>
Pop up 2-3 directories and you'll see this is a grad student who appears to
be trying to learn applied math. Is this really your enemy? Don't you guys
have some DOE bigwigs to bash?


>     Almost certainly they are doing no process binding and no proper
> assignment of processes to memory domains.


MVAPICH2 sets affinity by default. The details are not given, but "InfiniBand
enabled" suggests it might have been used. I don't know what OpenMPI does by
default, but affinity alone doesn't explain this.
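If someone wants to check what the launcher actually did there, the quickest
thing is to have every rank print the host it landed on and its affinity
mask. A throwaway sketch of mine (not from that page; Linux-only because it
uses sched_getaffinity), compiled with mpicc and run under the same launcher
and options as the real job:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

/* Each rank reports its host and the set of cores it is allowed to run on,
 * which shows whether ranks are bound at all and how they are spread out. */
int main(int argc, char **argv)
{
  int       rank, len, c;
  char      host[MPI_MAX_PROCESSOR_NAME];
  cpu_set_t mask;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(host, &len);

  CPU_ZERO(&mask);
  sched_getaffinity(0, sizeof(mask), &mask); /* affinity of this process */

  printf("rank %d on %s, allowed cores:", rank, host);
  for (c = 0; c < CPU_SETSIZE; c++)
    if (CPU_ISSET(c, &mask)) printf(" %d", c);
  printf("\n");

  MPI_Finalize();
  return 0;
}

An unbound run typically shows every core in every rank's mask; a bound run
shows one core (or one NUMA domain's worth) per rank.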


>  In addition they are likely filling up all the cores on the first node
> before adding processes to the second node, etc.


That's how I would show scaling. Are you suggesting using all the nodes and
doing breadth-first placement?
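If by breadth-first you mean something like Open MPI's "--map-by node"
(round-robin ranks over the nodes) rather than packing each node before
moving on, here is a toy sketch of the difference; the node size and rank
count are made up, purely to illustrate the two mappings:

#include <stdio.h>

/* Compare depth-first placement (pack node 0, then node 1, ...) against
 * breadth-first (round-robin over nodes) for hypothetical 8-core nodes. */
int main(void)
{
  const int nranks = 16, cores_per_node = 8;
  const int nnodes = (nranks + cores_per_node - 1) / cores_per_node;
  int       r;

  for (r = 0; r < nranks; r++)
    printf("rank %2d: depth-first -> node %d, breadth-first -> node %d\n",
           r, r / cores_per_node, r % nnodes);
  return 0;
}

Under the depth-first mapping every low-process-count point of a
strong-scaling study shares a single node's memory bandwidth, which is where
the fall-off Barry describes below would come from.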

Jeff


> If the studies had been done properly there should be very little fall-off
> in the strong scaling in going from 1 to 2 to 4 processes and even beyond.
> Similarly, the huge fall-off in going from 4 to 8 to 16 would not occur for
> weak scaling.
>
>    Barry
>
>
> > On Jun 29, 2016, at 7:47 PM, Matthew Knepley <knepley at gmail.com> wrote:
> >
> >
> >
> >   http://guest.ams.sunysb.edu/~zgao/work/airfoil/scaling.html
> >
> > Can we rerun this on something at ANL, since I think this cannot be true?
> >
> >    Matt
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
>
>

-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/