[mpich-discuss] MPI One-Sided Communication, and indexed datatype

Ziaul Haque Olive mzh.olive at gmail.com
Thu May 31 21:41:18 CDT 2012


No, I don't have one right now, but I will create one and let you know. In the
meantime, could you please tell me some of the errors according to the C
standard that you mentioned in your previous email?

Thanks
Ziaul

On Thu, May 31, 2012 at 9:31 PM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:

> Can't you just put it on the Internet somewhere?  Do you have a
> world-readable SVN repo?
>
> Jeff
>
> On Thu, May 31, 2012 at 9:18 PM, Ziaul Haque Olive <mzh.olive at gmail.com>
> wrote:
> > Thanks Jeff,
> >
> > The code is quite large to post here. It is from the Graph500 benchmark's
> > MPI implementation, and multiple files are required to build it. I can
> > send the whole directory for debugging. What do you say?
> >
> > Thanks,
> > Ziaul.
> >
> >
> > On Thu, May 31, 2012 at 8:51 PM, Jeff Hammond <jhammond at alcf.anl.gov>
> wrote:
> >>
> >> This is an interesting question.
> >>
> >> "Is it okay that I am freeing memory before the closing fence?"
> >>
> >> With respect to the memory associated with the MPI_Accumulate itself,
> >> the answer is clearly "no".
> >>
> >> MPI-2.2 11.3:
> >>
> >> "The local communication buffer of an RMA call should not be updated,
> >> and the local communication buffer of a get call should not be
> >> accessed after the RMA call, until the subsequent synchronization call
> >> completes."
> >>
> >> Freeing memory is obviously a modification.
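> >>
> >> To make that concrete, here is a minimal sketch (buf, count, target_rank,
> >> and win are illustrative names, not taken from your code):
> >>
> >>     double *buf;
> >>     MPI_Alloc_mem(count * sizeof(double), MPI_INFO_NULL, &buf);
> >>
> >>     MPI_Win_fence(0, win);
> >>     MPI_Accumulate(buf, count, MPI_DOUBLE, target_rank,
> >>                    0, count, MPI_DOUBLE, MPI_SUM, win);
> >>     /* MPI_Free_mem(buf) here would be erroneous: the origin buffer
> >>        may still be read until the epoch closes. */
> >>     MPI_Win_fence(0, win);
> >>     MPI_Free_mem(buf);   /* legal only after the closing fence */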
> >>
> >> However, it seems you are freeing the MPI_Datatype used in this
> >> operation and the input arrays associated with it.
> >>
> >> From MPI-2.2 4.1.9:
> >>
> >> "Marks the datatype object associated with datatype for deallocation
> >> and sets datatype to MPI_DATATYPE_NULL. Any communication that is
> >> currently using this datatype will complete normally. Freeing a
> >> datatype does not affect any other datatype that was built from the
> >> freed datatype. The system behaves as if input datatype arguments to
> >> derived datatype constructors are passed by value."
> >>
> >> It would seem that your usage is correct.
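> >>
> >> For instance, a pattern like the following should be legal under the text
> >> above (again with illustrative names: count, blocklengths, displacements,
> >> data, target_rank, win): the datatype handle may be freed right after the
> >> RMA call, but the origin buffer must stay untouched until the closing fence.
> >>
> >>     MPI_Datatype idx_type;
> >>     MPI_Type_indexed(count, blocklengths, displacements, MPI_INT, &idx_type);
> >>     MPI_Type_commit(&idx_type);
> >>     MPI_Accumulate(data, 1, idx_type, target_rank,
> >>                    0, 1, idx_type, MPI_SUM, win);
> >>     MPI_Type_free(&idx_type);  /* fine: only marks the type for deallocation;
> >>                                   the pending accumulate completes normally */
> >>     MPI_Win_fence(0, win);     /* data itself must remain valid until here */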
> >>
> >> I cannot debug your code because it is incomplete and has a few bugs
> >> according to the C standard.  Can you post the code that actually
> >> results in the error you're seeing?
> >>
> >> There may be other issues with your code that others may point out.
> >> My email is not intended to capture all possible sources of error.
> >>
> >> Best,
> >>
> >> Jeff
> >>
> >>
> >> On Thu, May 31, 2012 at 6:23 PM, Ziaul Haque Olive <mzh.olive at gmail.com> wrote:
> >> > Hello,
> >> >
> >> > I am not sure whether my code is correct under MPICH2 (v1.4.1p1). The code
> >> > is given below. I am doing MPI one-sided communication inside a function,
> >> > data_transfer, which is called inside a fence epoch. Inside data_transfer,
> >> > I allocate memory for non-contiguous data, create a derived datatype, use
> >> > that datatype in MPI_Accumulate, and, after calling MPI_Accumulate, free
> >> > the indexed datatype and also free the memory containing the indices for
> >> > the indexed datatype. Is it okay that I am freeing memory before the
> >> > closing fence?
> >> >
> >> > void data_transfer(void *data, int *sources_disp, int *targets_disp,
> >> >                    int *target, int size, int *blength, int func,
> >> >                    MPI_Op op, MPI_Win win, MPI_Datatype dtype)
> >> > {
> >> >     int i, j, index;
> >> >     int tmp_target;
> >> >     int *flag;
> >> >     int *source_disp;
> >> >     int *target_disp;
> >> >     MPI_Datatype source_type, target_type;
> >> >
> >> >     MPI_Alloc_mem(size * sizeof(int), MPI_INFO_NULL, &source_disp);
> >> >     MPI_Alloc_mem(size * sizeof(int), MPI_INFO_NULL, &target_disp);
> >> >     MPI_Alloc_mem(size * sizeof(int), MPI_INFO_NULL, &flag);
> >> >
> >> >     memset(flag, 0, size * sizeof(int));
> >> >
> >> >     /* Group the pending transfers by target rank. */
> >> >     for (i = 0; i < size; i++) {
> >> >         if (flag[i] == 0) {
> >> >             tmp_target = target[i];
> >> >             index = 0;
> >> >             for (j = i; j < size; j++) {
> >> >                 if (flag[j] == 0 && tmp_target == target[j]) {
> >> >                     source_disp[index] = sources_disp[j];
> >> >                     target_disp[index] = targets_disp[j];
> >> >                     /* printf("src, target disp %d  %d\n", j, disp[j]); */
> >> >                     index++;
> >> >                     flag[j] = 1;
> >> >                 }
> >> >             }
> >> >
> >> >             /* Build indexed datatypes for this target and accumulate. */
> >> >             MPI_Type_indexed(index, blength, source_disp, dtype, &source_type);
> >> >             MPI_Type_commit(&source_type);
> >> >             MPI_Type_indexed(index, blength, target_disp, dtype, &target_type);
> >> >             MPI_Type_commit(&target_type);
> >> >
> >> >             MPI_Accumulate(data, 1, source_type, tmp_target,
> >> >                            0, 1, target_type, op, win);
> >> >
> >> >             /* Freed before the epoch-closing fence -- this is what I am
> >> >                asking about. */
> >> >             MPI_Type_free(&source_type);
> >> >             MPI_Type_free(&target_type);
> >> >         }
> >> >     }
> >> >
> >> >     /* Also freed before the closing fence. */
> >> >     MPI_Free_mem(source_disp);
> >> >     MPI_Free_mem(target_disp);
> >> >     MPI_Free_mem(flag);
> >> > }
> >> >
> >> > int main(void)
> >> > {
> >> >     /* MPI_Init, window creation, and argument setup omitted. */
> >> >     int i;
> >> >     for (i = 0; i < N; i++) {
> >> >         MPI_Win_fence(MPI_MODE_NOPRECEDE, queue2_win);
> >> >
> >> >         /* arguments omitted here */
> >> >         data_transfer();
> >> >
> >> >         MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> >> >     }
> >> > }
> >> >
> >> > Thanks
> >> > Ziaul
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Jeff Hammond
> >> Argonne Leadership Computing Facility
> >> University of Chicago Computation Institute
> >> jhammond at alcf.anl.gov / (630) 252-5381
> >> http://www.linkedin.com/in/jeffhammond
> >> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >
> >
> >
> >
>
>
>
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>