[petsc-dev] Generality of VecScatter
Jed Brown
jedbrown at mcs.anl.gov
Tue Nov 29 08:50:15 CST 2011
You don't need any of this unless your threads need to actually sleep
instead of spin. I think we can afford to have them spin, so we don't need
any locking. You let the threads continue by doing an ordinary write to a
memory location. (You can do more sophisticated things with cmpxchg, which
you likely need at synchronization points.)
On Nov 29, 2011 8:41 AM, "Dmitry Karpeev" <karpeev at mcs.anl.gov> wrote:
> Barry,
>
> You might want to take a look at this:
> http://stackoverflow.com/questions/2994216/linux-pthread-suspend
> The solution suggested there (Answer 1) is to use sigwait(). Answer 3,
> although not the actual solution, explains why the
> pthread_suspend/pthread_continue-based solution is racy. I think this may
> be a general issue with that approach, which is why mutexes are required.
>
> What example do you use to test this? I'd like to take a look.
>
> Thanks.
> Dmitry.
>
> On Thu, Nov 24, 2011 at 4:10 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>> On Thu, Nov 24, 2011 at 16:01, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>> Yes, but they can only get access to that shared variable one at a
>>> time: first gets it, then second gets it, then third gets it, .... Ok for
>>> a couple of cores but not for dozens.
>>>
>>> Take a look at src/sys/objects/pthread.c for the various ways we have
>>> coded for "waking" the threads. Maybe I am missing something, but this is
>>> the best Kerry and I could figure out.
>>>
>>
>> What exactly should I be looking at? Can't you have all the threads spin
>> on a normal shared variable (not a mutex) that is only written by the
>> thread that needs to spark them? Or use a fetch-and-add atomic if you want
>> to keep track of how many are running or limit the number? The latter could
>> use a tree to get logarithmic cost, but if they are stored next to each
>> other, you would still have O(P) cache invalidations.
>>
>
>