[petsc-dev] Using multiple mallocs with PETSc
Jeff Hammond
jeff.science at gmail.com
Mon Mar 13 21:37:05 CDT 2017
OpenMP did not prevent OpenCL, C11, C++11, or Fortran 2008 from introducing
parallelism. I'm not sure if your comment was meant to be serious, but it
appears unfounded nonetheless.
Jeff
On Sun, Mar 12, 2017 at 11:16 AM Jed Brown <jed at jedbrown.org> wrote:
> Implementation-defined, but it's exactly the same as malloc, which also
> doesn't promise unfaulted pages. This is one reason some of us keep saying
> that OpenMP sucks. It's a shitty standard that obstructs better standards
> from being created.
>
>
> On March 12, 2017 11:19:49 AM MDT, Jeff Hammond <jeff.science at gmail.com>
> wrote:
>
>
> On Sat, Mar 11, 2017 at 9:00 AM Jed Brown <jed at jedbrown.org> wrote:
>
> Jeff Hammond <jeff.science at gmail.com> writes:
> > I agree 100% that multithreaded codes that fault pages from the main
> > thread in a NUMA environment are doing something wrong ;-)
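A minimal sketch of that first-touch pattern, assuming OpenMP and an
allocator that leaves large allocations unfaulted until first write; the
function name is just for illustration, and this is untested:

    /* Allocate without touching, then let each thread fault the pages
       it will later use, so they land on that thread's NUMA node.
       Compile with OpenMP enabled (e.g. -fopenmp for GCC/Clang). */
    #include <stdlib.h>

    double *alloc_first_touch(size_t n)
    {
      double *x = malloc(n * sizeof(double)); /* large n: no pages faulted yet
                                                 (allocator-dependent) */
      if (!x) return NULL;
      #pragma omp parallel for schedule(static)
      for (size_t i = 0; i < n; i++)
        x[i] = 0.0; /* first touch happens on the thread that will own the page */
      return x;
    }

(schedule(static) matters here: later compute loops using the same static
schedule will hit the pages the same threads faulted.)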
> >
> > Does calloc *guarantee* pages are not mapped? If I calloc(8), do I get
> > the zero page or a piece of the already-mapped arena that the heap
> > manager has zeroed?
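One way to answer that empirically, though it assumes Linux (mincore is
not portable) and is only a sketch:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
      size_t n = (size_t)16 << 20;  /* 16 MiB, well above glibc's default threshold */
      unsigned char *p = calloc(n, 1);
      if (!p) return 1;
      long pagesize = sysconf(_SC_PAGESIZE);
      /* mincore wants a page-aligned start; round up into the buffer */
      uintptr_t start = ((uintptr_t)p + pagesize - 1) & ~(uintptr_t)(pagesize - 1);
      size_t len = (uintptr_t)p + n - start;
      size_t npages = (len + pagesize - 1) / pagesize;
      unsigned char *vec = malloc(npages);
      if (vec && mincore((void *)start, len, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++) resident += vec[i] & 1;
        printf("%zu of %zu pages resident after calloc\n", resident, npages);
      }
      free(vec);
      free(p);
      return 0;
    }

On glibc, a calloc this large should go through mmap and report almost no
resident pages; a small calloc comes out of the already-faulted arena.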
>
> Is your argument that calloc() should never be used in multi-threaded code?
>
>
> I never use it for code that I want to behave well in a NUMA environment.
>
>
> If the allocation is larger than MMAP_THRESHOLD (128 KiB by default in
> glibc), then calloc calls mmap. That obviously leaves a range of
> intermediate sizes that could be poorly mapped (assuming 4 KiB pages),
> but anything that small also fits easily in cache.
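For anyone willing to be glibc-specific anyway, the threshold can be
pinned with mallopt so that "large" is predictable; a short sketch,
glibc-only and merely illustrative:

    #include <malloc.h> /* mallopt, M_MMAP_THRESHOLD (glibc extension) */
    #include <stdlib.h>

    int main(void)
    {
      /* Send allocations of 64 KiB and up down the mmap path (fresh,
         unfaulted zero pages). Note: setting this also disables glibc's
         dynamic adjustment of the threshold. */
      mallopt(M_MMAP_THRESHOLD, 64 * 1024);
      double *x = calloc(1u << 20, sizeof(double)); /* 8 MiB: mmap path */
      free(x);
      return 0;
    }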
>
>
> Is this behavior standardized or merely implementation-defined? I'm not
> interested in writing code that assumes Linux/glibc.
>
> Jeff
>
>
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/