[mpich-discuss] DataType Problem

James Dinan dinan at mcs.anl.gov
Mon Jan 31 13:44:48 CST 2011


Hi Michele,

I've attached a small test case derived from what you sent.  This runs 
fine for me with the integer change suggested below.

I'm still a little confused about the need for 
mpi_type_create_resized().  You're setting the lower bound to 1 and the 
extent to the size of a double complex.  These adjustments are in bytes, 
so if I'm interpreting this correctly you are effectively shifting the 
beginning of the data type 1 byte into the first value in the array and 
then accessing a full double complex from that location, which is 
probably not what you want.
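
For comparison, here is a rough sketch of the usual pattern when the goal 
is just to make consecutive elements of a derived type start one double 
complex apart.  The names below are placeholders rather than taken from 
your code; 'subtype' stands for whatever derived type you built earlier:

integer :: subtype, resized, dcsize, errorMPI
integer (kind=MPI_ADDRESS_KIND) :: lb, ext

! extent of one double complex, in bytes
call mpi_type_size(mpi_double_complex, dcsize, errorMPI)
lb  = 0          ! keep the lower bound at the start of the type
ext = dcsize     ! resize only the extent; do not shift the start
call mpi_type_create_resized(subtype, lb, ext, resized, errorMPI)
call mpi_type_commit(resized, errorMPI)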

Could you explain the subset of the data you're trying to cover with the 
datatype?

Thanks,
  ~Jim.

On 01/31/2011 11:13 AM, James Dinan wrote:
> Hi Michele,
>
> Another quick comment:
>
> Don't forget to free your MPI datatypes when you're finished with them.
> This shouldn't cause the error you're seeing, but it can be a resource
> leak that builds up over time if you call this routine frequently.
>
> call mpi_type_free(temp, errorMPI)
> call mpi_type_free(temp2, errorMPI)
> call mpi_type_free(temp3, errorMPI)
>
> Best,
> ~Jim.
>
> On 01/31/2011 11:07 AM, James Dinan wrote:
>> Hi Michele,
>>
>> I'm looking this over and trying to put together a test case from the
>> code you sent. One thing that looks questionable is the type for 'ext'.
>> The call to mpi_type_size wants a default integer, whereas the
>> mpi_type_create_resized calls want an integer of kind=MPI_ADDRESS_KIND.
>> Could you try adding something like this:
>>
>> integer :: dcsize
>> integer (kind=MPI_ADDRESS_KIND) :: ext
>>
>> call mpi_type_size( mpi_double_complex , dcsize , errorMPI)
>> ext = dcsize
>>
>> Thanks,
>> ~Jim.
>>
>> On 01/30/2011 02:15 AM, Michele Rosso wrote:
>>> Hi,
>>>
>>>
>>> I am developing a subroutine to handle the communication inside a group
>>> of processors.
>>> The source code is attached.
>>>
>>> This subroutine is contained in a module and gets much of the data it
>>> needs, as well as the header "mpi.h", from another module (pmu_var).
>>>
>>> As input I have a 3D array (work1) that is allocated in the main
>>> program. As output I have another 3D array (work2), also allocated in
>>> the main program. Both are of type complex and have intent INOUT (I
>>> want to use the subroutine in a reversible way).
>>>
>>> Since the data I want to send are not contiguous, I defined several
>>> datatypes. I then tested all of them with a simple send-receive
>>> communication in the communicator "mpi_comm_world".
>>> The problem arises when I test the datatype "temp3": execution of the
>>> program stops and I receive the error:
>>>
>>> rank 0 in job 8 enterprise_45569 caused collective abort of all ranks
>>> exit status of rank 0: killed by signal 9
>>>
>>> Note that work1 and work2 have different sizes but the same shape, and
>>> the datatypes should be consistent with them.
>>>
>>> Does anyone have an idea of what the problem could be?
>>>
>>>
>>> Thanks in advance,
>>>
>>> Michele
>>>
>>>
>>>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: dtype_test.f95
Type: text/x-fortran
Size: 2117 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/mpich-discuss/attachments/20110131/408d707c/attachment.bin>

