We are implementing MPI in one module
of a multi-module code. We created the MPI-enhanced module, compiled it to an object file, and linked it with the existing object files from the other modules. The MPI-enhanced module is performing as expected, and we are getting good speed-up for it. The problem we are facing is that the slave processes do not stop computing after the MPI-enhanced module's work is finished. We were expecting that the Call MPI_FINALIZE(err) statement would stop all the processes except the master.<br>
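<br>(Editor's note: MPI_FINALIZE releases MPI resources but does not terminate the calling process; in an SPMD program, every rank keeps executing whatever code follows it. A minimal Fortran sketch of the usual guard — the subroutine names here are illustrative placeholders, not taken from the actual code:)<br><br>
<pre>
      program main
      use mpi
      integer :: err, rank

      call MPI_INIT(err)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)

!     The MPI-enhanced module; all ranks participate.
      call do_parallel_module()

!     Shuts down MPI, but does NOT kill the process.
      call MPI_FINALIZE(err)

!     Without this guard, every rank would continue
!     into the remaining serial modules.
      if (rank /= 0) stop

!     Rank 0 only from here on.
      call do_serial_modules()

      end program main
</pre>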
<br>The call to MPI_FINALIZE(err) does not stop the additional processes. What is the solution to this problem?<br><br>Gulshan<br><br><div class="gmail_quote">On Tue, Dec 14, 2010 at 4:51 AM, Nicolas Rosner <span dir="ltr"><<a href="mailto:nrosner@gmail.com" target="_blank">nrosner@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Hello Gulshan,<br>
<br>
I'm assuming you meant more or less the following:<br>
<br>
- you have a program spanning multiple compilation units and/or libs<br>
and<br>
- you would like to MPI-enable just one of said units or modules<br>
and<br>
- any necessary IPC/comm/sync between the MPI-ized unit and the rest<br>
is addressed by separate mechanisms (other than MPI)<br>
<br>
If so, sure, as long as you have the source code for that particular unit<br>
or module, you should be able to limit any MPI compile/link-dependency of<br>
your system to the one module that actually uses MPI calls.<br>
<div><br>
<br>
> There are more than 10 modules in the software we are trying to modify to<br>
> improve the speed. We know that the most time is taken by only one module.<br>
<br>
</div>Do these modules map to sequential stages of a workflow/pipeline? Is that<br>
one module one such stage? Is it currently invoked in a call-return way?<br>
<div><br>
<br>
> So MPI starts when the particular module is called and<br>
> the MPI collapses as soon as the computation in the module is finished.<br>
<br>
</div>What you describe sounds essentially like a sequential program that, at<br>
some point --or perhaps every now and then-- needs to launch a parallel,<br>
MPI-enhanced subroutine/subprocess, wait until it's done, then move on?<br>
<br>
If so, this is certainly possible, assuming your resource allocation<br>
plays along (i.e., the machines are always available when you need them, etc.).<br>
<br>
HTH, just guessing; good luck! N.<br>
_______________________________________________<br>
mpich-discuss mailing list<br>
<a href="mailto:mpich-discuss@mcs.anl.gov" target="_blank">mpich-discuss@mcs.anl.gov</a><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
</blockquote></div><br>
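(Editor's note: the sequential-program-with-a-parallel-phase structure Nicolas describes is usually handled by initializing MPI once at startup on every rank and keeping the worker ranks in a command loop between invocations of the parallel module. A hedged Fortran sketch of that pattern — subroutine names and command codes are hypothetical:)<br><br>
<pre>
      program main
      use mpi
      integer :: err, rank, cmd
      integer, parameter :: CMD_RUN = 1, CMD_STOP = 0

      call MPI_INIT(err)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)

      if (rank == 0) then
!        Serial stages run on the master only; workers idle in
!        their loop until told what to do.
         call serial_stage_a()
         cmd = CMD_RUN
         call MPI_BCAST(cmd, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, err)
         call parallel_stage()
         call serial_stage_b()
         cmd = CMD_STOP
         call MPI_BCAST(cmd, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, err)
      else
!        Workers wait for a command, run the parallel module when
!        asked, and leave the loop when told to stop.
         do
            call MPI_BCAST(cmd, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, err)
            if (cmd == CMD_STOP) exit
            call parallel_stage()
         end do
      end if

      call MPI_FINALIZE(err)

      end program main
</pre>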