[mpich-discuss] MPI_Waitsome and MPI_Getcount incorrect

Tamás Faragó fafarago at gmail.com
Wed Dec 16 07:48:11 CST 2009


My actual problem, why I initiated the previous post at
http://lists.mcs.anl.gov/pipermail/mpich-discuss/2009-December/006187.html
([mpich-discuss] MPI_GET_COUNT behaviour unclear).

See the very simple source code below. I initialise two persistent
requests, then run a Waitsome on both of them and getting the
top-level datatypes received. With Waitsome the second one returns
MPI_UNDEFINED, no idea why. Waitall correctly returns 1 in both cases.
What is going on, is it my fault and can it be solved?

I have also uploaded the source code to http://www.liacs.nl/~tfarago/test.cpp
NOTE: right now the tags are all the same, but even if different tags
are given to the sending and receiving side (eg 0 and 1), the outcoume
is the same. From the output it can be seen that even if MPI_GET_COUNT
returns some kind of an error the program's behaviour is still
correct.
NOTE: also, strangely I cannot get Waitsome to return both completed
requests, not even if I let the client sleep/idle for several seconds.

---CODE--
#include <stdarg.h>
#include <stdio.h>
#include <mpi.h>

#pragma comment(lib, "cxx.lib")
#pragma comment(lib, "mpi.lib")

void __cdecl debug(const char* msg, ...)
{
	va_list va;
	va_start(va, msg);
	vfprintf(stderr, msg, va);
	fputs("\n", stderr);
	fflush(stderr);
}

int main() {
	MPI::Init();
	int my_node = MPI::COMM_WORLD.Get_rank();

	int a, b;
	MPI::Prequest req[2];
	a = 0; b = 0;
	if (my_node == 0) {
		a = 1; b = 2;
		req[0] = MPI::COMM_WORLD.Send_init(&a, 1, MPI::INT, 1, 0);
		req[1] = MPI::COMM_WORLD.Send_init(&b, 1, MPI::INT, 1, 0);
	} else {
		size_t size = MPI::INT.Pack_size(1, MPI::COMM_WORLD);

		req[0] = MPI::COMM_WORLD.Recv_init(&a, size, MPI::INT, 0, MPI::ANY_TAG);
		req[1] = MPI::COMM_WORLD.Recv_init(&b, size, MPI::INT, 0, MPI::ANY_TAG);	
	}

	MPI::Prequest::Startall(2, req);

	if (my_node == 0) {
		debug("host: a %d, b %d", a, b);
	} else {
		debug("client before: a %d, b %d", a, b);
		int array_of_indeces[2];
		MPI::Status array_of_statuses[2];
		MPI::Datatype array_of_types[2];
		array_of_types[0] = MPI::INT;
		array_of_types[1] = MPI::INT;
#if 1
		for (;;) {
			/* wait for one, or multiple requests to finish */
			int outcount = MPI::Request::Waitsome(2, req, array_of_indeces,
array_of_statuses);
			if (outcount == MPI_UNDEFINED) break; /* no active handles */

			debug("received count: %d", outcount);
			for (outcount--; outcount >= 0; --outcount) {
				size_t index = array_of_indeces[outcount];

				debug("MPI_Waitsome index %d", index);
				int recv_count = array_of_statuses[index].Get_count(array_of_types[index]);
				debug("MPI_GET_COUNT %d", recv_count);
			}
		}
#else
		MPI::Prequest::Waitall(2, req, array_of_statuses);

		for (int outcount = 2; outcount > 0; --outcount) {
			int recv_count =
array_of_statuses[outcount-1].Get_count(array_of_types[outcount-1]);
			debug("MPI_GET_COUNT %d", recv_count);
		}
#endif
		debug("client after: a %d, b %d", a, b);
	}

	debug("done, waiting....");
	MPI::COMM_WORLD.Barrier();
	debug("finalize");
	MPI::Finalize();
	return 0;
}
---CODE---


More information about the mpich-discuss mailing list