[mpich-discuss] mpich-discuss Digest, Vol 40, Issue 15
Jie Chen
jiechen at mcs.anl.gov
Tue Jan 10 15:23:02 CST 2012
Jed, please consider the following workflow for process i (np is the total number of processes):
{
do something with self data
get data from (i+1)%np
do something with self data and data from (i+1)%np
get data from (i+2)%np
do something with self data and data from (i+2)%np
...
do something with self data and data from (i+np-2)%np
get data from (i+np-1)%np
do something with self data and data from (i+np-1)%np
}
If using one-sided operations, between every "do something" and "get data" I need to put an MPI_Win_fence() to synchronize processes. The fence works as a barrier, meaning every process has to wait for the slowest one to finish "do something" before it can get data from some other process. Since the workloads of "do something" differ among processes, I am concerned about the time wasted in the fence. In fact, in this particular scenario, one-sided operations offer no superiority over send/recv. There is an alternative to fence, namely MPI_Win_lock(); I am not sure whether it would have such a barrier effect, though.
What I hope is that, by using threads instead of one-sided operations, the cost of probing for messages would be significantly smaller than the time wasted synchronizing the processes due to the difference in workloads. Anyone with experience with this issue is welcome to comment. Thanks.
Jie
----- Original Message -----
From: mpich-discuss-request at mcs.anl.gov
To: mpich-discuss at mcs.anl.gov
Sent: Tuesday, January 10, 2012 2:50:29 AM
Subject: mpich-discuss Digest, Vol 40, Issue 15
Date: Mon, 9 Jan 2012 13:18:53 -0600
From: Jed Brown <jedbrown at mcs.anl.gov>
Subject: Re: [mpich-discuss] MPI_Probe
To: mpich-discuss at mcs.anl.gov
You might also consider MPI one-sided operations.