[petsc-users] How to confirm the performance of asynchronous computations
Viet H.Q.H.
hqhviet at tohoku.ac.jp
Fri Jan 29 05:51:39 CST 2021
Thank you, everyone, for your valuable materials and comments.
Currently, I can use at most 8 nodes on a computer system with a 10 Gb/s
InfiniBand network. I am applying for access to all of the nodes in this
system (about 300), which will take some time. I also hope that 300 nodes
will be enough to demonstrate the effectiveness of a simple nonblocking
computation test in which the inner product overlaps the matrix-vector
multiplication.
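For what it's worth, a minimal sketch of such a test using PETSc's
split-phase reductions (VecDotBegin/VecDotEnd, with
PetscCommSplitReductionBegin to start the nonblocking reduction) is below.
The diagonal placeholder matrix and the local size n are illustrative
assumptions, not taken from this thread:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y, r;
  PetscScalar    delta;
  PetscInt       i, rstart, rend, n = 1000; /* local rows; arbitrary choice */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* Placeholder operator: a distributed diagonal matrix standing in for A */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, n, n, PETSC_DETERMINE, PETSC_DETERMINE,
                      1, NULL, 0, NULL, &A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &y);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &r);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecSet(r, 1.0);CHKERRQ(ierr);

  /* Post the local part of the dot product; no global communication yet */
  ierr = VecDotBegin(r, r, &delta);CHKERRQ(ierr);
  /* Start the nonblocking reduction (MPI_Iallreduce with an MPI-3 library)
     so it can progress in the background */
  ierr = PetscCommSplitReductionBegin(PetscObjectComm((PetscObject)r));CHKERRQ(ierr);
  /* Overlap: the matrix-vector product runs while the reduction is in flight */
  ierr = MatMult(A, x, y);CHKERRQ(ierr);
  /* Complete the reduction and obtain the global dot product */
  ierr = VecDotEnd(r, r, &delta);CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_WORLD, "r.r = %g\n", (double)PetscRealPart(delta));CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Whether MatMult actually hides the reduction depends on the MPI library's
asynchronous-progress support, which is exactly what a timing comparison on
the full machine should reveal.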
Have a great weekend!!
Viet
On Fri, Jan 29, 2021 at 1:35 AM Jed Brown <jed at jedbrown.org> wrote:
> Lawrence Mitchell <wencel at gmail.com> writes:
>
> >> On 27 Jan 2021, at 16:30, Matthew Knepley <knepley at gmail.com> wrote:
> >>
> >> This is very important to do _first_. It would probably only take you
> >> a day to measure the Allreduce time on your target, say the whole
> >> machine you run on. (A minimal timing sketch follows after this quoted
> >> thread.)
> >
> > Why plots like this are not _absolutely standard_ on all HPC sites'
> > webpages is a source of continuing mystery to me.
>
> I've been asking for it for years. They say if you care, you should just
> go run it. Never mind how wasteful that is, or the time commitment
> involved. I think they often avoid making a commitment because latency is
> super variable (depending on the partition you get and what other jobs
> are running elsewhere on the machine; Blue Gene famously didn't have that
> problem).
>
> Meanwhile, latency on cloud providers keeps dropping and they're sure to
> beat conventional HPC centers to publishing a dashboard of expected latency
> for different configurations.
>
> This page illustrates how hardware reductions scale much better than
> log(P).
>
> https://www.mcs.anl.gov/~fischer/gop/
>
> > Although I guess Figure 2 from here
> > https://www.mcs.anl.gov/papers/P5347-0515.pdf probably gives me a clue.
> >
> > Viet, I suspect that Matt thinks you should try to produce a figure
> > like Figure 3 from that linked paper.
> >
> > Lawrence
>
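For reference, here is a minimal sketch of the kind of Allreduce
measurement Matt suggests above, in plain MPI C; the warm-up and iteration
counts and the single-double payload are arbitrary illustrative choices:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  const int warmup = 100, iters = 1000;
  double    local = 1.0, global, t0, t1;
  int       rank, size, i;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Warm up to factor out connection-setup costs */
  for (i = 0; i < warmup; i++)
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

  MPI_Barrier(MPI_COMM_WORLD);
  t0 = MPI_Wtime();
  for (i = 0; i < iters; i++)
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  t1 = MPI_Wtime();

  if (rank == 0)
    printf("P = %d: mean single-double MPI_Allreduce latency = %g us\n",
           size, 1e6 * (t1 - t0) / iters);

  MPI_Finalize();
  return 0;
}

Running this over the full allocation (e.g. mpiexec -n <ranks>
./allreduce_bench, with a hypothetical binary name) and repeating across
partitions would give the latency variability Jed mentions.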