[petsc-dev] Fwd: [Xlab] slow pagefault on KNC
Karl Rupp
rupp at mcs.anl.gov
Mon Dec 17 12:00:21 CST 2012
Assuming that the page fault penalty scales down ideally with the
number of threads, one would obtain an average of ~4 us per fault with
240 active threads. Still, 850 us per fault is terribly large, so
Amdahl's Law will hit hard...
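
Just to illustrate: faulting the pages from all threads rather than
from one could look like the following (an untested sketch only,
assuming 4 KB pages and an OpenMP build; the chunking is mine, not
anything from Kaz's pftest.c):

#define _GNU_SOURCE
#include <omp.h>
#include <stddef.h>
#include <sys/mman.h>

/* Map 'bytes' of anonymous memory and fault it in from all OpenMP
 * threads: each thread touches one byte per page of its own
 * contiguous chunk, so the faults are spread over the threads. */
static void *prefault_parallel(size_t bytes)
{
  const size_t page = 4096;                      /* assumed page size */
  char *buf = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED) return NULL;
#pragma omp parallel
  {
    size_t nthreads = (size_t)omp_get_num_threads();
    size_t chunk    = (bytes + nthreads - 1) / nthreads;
    size_t begin    = (size_t)omp_get_thread_num() * chunk;
    size_t end      = begin + chunk < bytes ? begin + chunk : bytes;
    for (size_t i = begin; i < end; i += page) buf[i] = 0;  /* one fault per page */
  }
  return buf;
}

Run with OMP_NUM_THREADS=240 on the card, this should also give the
usual first-touch placement for free.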
Since I don't think the Xeon Phi will be the last type of accelerator
from Intel, why not include such a test? I don't know how portable this
is, though...
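
Roughly what I have in mind (a minimal sketch only, assuming 4 KB
pages and clock_gettime(); this is not Kaz's attached pftest.c): mmap()
a region and time the first touch of every page:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sys/mman.h>

int main(void)
{
  const size_t page = 4096, npages = 1 << 18;    /* 1 GiB at 4 KB per page */
  char *buf = mmap(NULL, npages * page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  struct timespec t0, t1;

  if (buf == MAP_FAILED) { perror("mmap"); return 1; }
  clock_gettime(CLOCK_MONOTONIC, &t0);
  for (size_t i = 0; i < npages; i++) buf[i * page] = 0;   /* first touch => page fault */
  clock_gettime(CLOCK_MONOTONIC, &t1);

  double secs = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
  printf("%zu faults in %.3f s, %.2f us per fault\n",
         npages, secs, 1e6 * secs / npages);
  return 0;
}

A threaded variant of the same loop would then show how well the fault
handling scales with the thread count.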
Best regards,
Karli
On 12/17/2012 11:23 AM, Jed Brown wrote:
> He should be using all the threads for this. Using one thread to fault a
> bunch of memory is a recipe for terrible performance all around.
>
>
> On Mon, Dec 17, 2012 at 9:21 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>
> Should we have a test like this in the PETSc benchmark directory?
>
> Barry
>
>
> Begin forwarded message:
>
> > From: Kazutomo Yoshii <kazutomo at mcs.anl.gov>
> > Subject: [Xlab] slow pagefault on KNC
> > Date: December 17, 2012 10:48:06 AM CST
> > To: "xlab at cels.anl.gov <mailto:xlab at cels.anl.gov>"
> <xlab at cels.anl.gov <mailto:xlab at cels.anl.gov>>
> >
> > Hi,
> >
> > I noticed that page faults are very slow on KNC: a fault takes 850 usec,
> > while it takes ~1 usec on Xeon. So prefaulting a 1 GB memory region
> > (~262,000 4 KB pages) takes about 222 sec on KNC.
> >
> > This may not impact a big app that runs for hours and hours, but I guess
> > it definitely affects short-lived processes or threads, which might
> > make MIC less attractive.
> >
> > This could be a hardware problem (need to check the Phi), a kernel bug,
> > or maybe an issue with running an SMP kernel on a many-core chip.
> > If it is the last case, it would be really interesting for me.
> >
> > Attached is a simple page fault benchmark.
> >
> > - kaz
> [see attached file: pftest.c]
>
>