<div dir="ltr">Sounds great to me - what library do I download that we're all going to use for managing the memory pool? :-)<div><br></div><div>Seriously though: why doesn't MPI give us the ability to get unique tag IDs for a given communicator? I like the way libMesh deals with this: <a href="https://github.com/libMesh/libmesh/blob/master/include/parallel/parallel_implementation.h#L1343">https://github.com/libMesh/libmesh/blob/master/include/parallel/parallel_implementation.h#L1343</a></div><div><br></div><div>I would definitely sign on for all of us to use the same library for getting unique tag IDs... and then we would need far fewer communicators...</div><div><br></div><div>Derek</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Apr 3, 2018 at 3:20 PM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Derek Gaston <<a href="mailto:friedmud@gmail.com" target="_blank">friedmud@gmail.com</a>> writes:<br>
<br>
> Do you think there is any possibility of getting Hypre to use disjoint tags<br>
> from PETSc so you can just use the same comm? Maybe a configure option to<br>
> Hypre to tell it what number to start at for its tags?<br>
<br>
Why have malloc when we could just coordinate each of our libraries to<br>
use non-overlapping memory segments???<br>
</blockquote></div>
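For the archives: the libMesh approach Derek links to boils down to a per-communicator counter that hands out unused tags and lets callers recycle them when their messages complete. A minimal Python sketch of that idea follows; the class and method names are purely illustrative (libMesh's actual implementation is C++ and lives on its Communicator object), and a real MPI version would also respect the communicator's MPI_TAG_UB attribute:

```python
class TagAllocator:
    """Hands out unique message tags for a single communicator.

    Hypothetical sketch of the per-communicator tag-counter idea;
    not libMesh's or MPI's actual API.
    """

    def __init__(self, first_tag=0, max_tag=32767):
        # The MPI standard guarantees MPI_TAG_UB is at least 32767.
        self._next = first_tag
        self._max = max_tag
        self._recycled = []  # tags released back for reuse

    def get_unique_tag(self):
        # Prefer a recycled tag so the counter grows slowly.
        if self._recycled:
            return self._recycled.pop()
        if self._next > self._max:
            raise RuntimeError("out of tags; release some first")
        tag = self._next
        self._next += 1
        return tag

    def release_tag(self, tag):
        # Return a tag to the pool once its messages are done.
        self._recycled.append(tag)


# One allocator per communicator; cooperating libraries draw
# tags from it instead of carving up a fixed tag range.
alloc = TagAllocator()
a = alloc.get_unique_tag()
b = alloc.get_unique_tag()
assert a != b       # tags are distinct while both are live
alloc.release_tag(a)
c = alloc.get_unique_tag()  # the released tag gets reused
```

If every library on a communicator drew tags from one shared allocator like this, they could share the communicator safely instead of each duplicating it.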