[mpich-discuss] Integrating parallel and serial code
Reuti
reuti at staff.uni-marburg.de
Mon Jul 12 17:17:50 CDT 2010
Hi,
On 12.07.2010, at 23:52, Michael Morrison wrote:
> Hi all, I hope I'm posting this in the correct forum. I'm extremely
> new to MPI, so forgive me if my question comes off as silly. I've
> only been dabbling in parallel computing for a few weeks
> now, but I'm wondering about the possibilities. So far, in each
> example I've been through, I create an MPI application and then
> run it using the mpiexec command. What I'm wondering
> is whether it would be possible to have a C function that
> executes some algorithm in parallel and that could be called by
> another C function. For example, suppose I had two C functions,
> the first called function A and the second called function B. Function A
> is just a normal function that runs serially; function B executes
> its algorithm in parallel using MPI constructs. Is it possible to have
> a setup like this?
>
> From the examples I've seen, it appears that MPI code runs as a
> standalone unit and can't be integrated with serially executing
> code. Is this true, or is there some way to make this setup work? I
> may not have given enough information to answer the question; if
> there's any confusion, please ask and I'll clarify.
Do you want to run this code just locally on one machine, or on a
cluster with a job scheduler installed? I'm not aware of any scheduler
that allows you to specify varying resource requests over the lifetime
of a job. E.g. I need:
- 1 hr - 1 core
- 4 hrs - 2 cores
- 1 hr - 4 cores
If you submit such a job requesting the highest amount, i.e. 4 cores,
you will have unused resources. You could split the job into three
steps, each of which just waits for the end of its predecessor, but
then you are again left with purely serial and purely parallel steps.
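With Grid Engine, for example, such a chain of dependent steps could be
submitted roughly like this (the script names and the parallel
environment name "mpich" are site specific and only meant as an
illustration):

qsub -N stepA                             stepA.sh   # 1 hr,  1 core (serial)
qsub -N stepB -hold_jid stepA -pe mpich 2 stepB.sh   # 4 hrs, 2 cores (MPI)
qsub -N stepC -hold_jid stepB -pe mpich 4 stepC.sh   # 1 hr,  4 cores (MPI)

Each step only starts once the named predecessor has finished, so
within each step the requested cores are actually used.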
If you are just at the beginning of a project, it would be best to
design the application in such a way that all requested resources are
used to their maximum and the idle time is minimized.
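To come back to your function A / function B question: inside a single
MPI program this is perfectly possible, a plain serial function and a
function built around MPI calls can live side by side in the same
executable. A minimal sketch (only the names function_A and function_B
are taken from your mail, everything else is made up for illustration):

#include <stdio.h>
#include <mpi.h>

/* Plain serial function: no MPI calls at all. */
static int function_A(int n)
{
    return n * n;
}

/* Parallel function: every rank calls it, the work is split by rank. */
static long function_B(int n, MPI_Comm comm)
{
    int rank, size;
    long local = 0, total = 0;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Each rank sums its own slice of 0..n-1. */
    for (int i = rank; i < n; i += size)
        local += i;

    /* Combine the partial sums on every rank. */
    MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, comm);
    return total;
}

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Serial part: only rank 0 does it, the other ranks idle here. */
    int seed = 0;
    if (rank == 0)
        seed = function_A(10);

    /* Make the serial result known to all ranks. */
    MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Parallel part: all ranks participate. */
    long sum = function_B(seed * 100, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %ld\n", sum);

    MPI_Finalize();
    return 0;
}

Compile it with mpicc and start it with e.g. mpiexec -n 4 ./a.out. Note
that while function_A runs, all the other ranks just wait at the
broadcast, which is exactly the unused-resources issue mentioned above.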
-- Reuti
> Thanks for your time,
>
> Mike