[petsc-users] question on MPI usage for PETSc + PETSc for multicores
Barry Smith
bsmith at mcs.anl.gov
Tue Oct 11 16:35:47 CDT 2011
On Oct 11, 2011, at 4:24 PM, Ravi Kannan wrote:
> Dear All,
>
> This is Ravi Kannan from CFD Research Corporation. We have been using PETSc as the main driver for our computational suites for the last decade. Recently there has been a surge in multicore architectures for scientific computing. I have a few questions in this regard:
>
> 1. Does PETSc’s communication use the MPI that is installed on the host machine? In other words, do the transfers performed by PETSc use exactly the MPI installed on the host machine?
That depends on the MPI you indicated when ./configure was run for PETSc. If you configure with --with-mpi-dir=/directoryofyourmachinesmpi, then PETSc will use that MPI.
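For example, a minimal configure invocation (the install prefix /opt/mpich2 below is hypothetical; substitute the directory where your machine's MPI lives):

    ./configure --with-mpi-dir=/opt/mpich2

Configure then picks up the compiler wrappers and libraries under that directory, so everything PETSc sends over the wire goes through that MPI.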
>
> 2. How does PETSc handle data transfer inside a processor (between cores) on multicore architectures?
By default it uses MPI for all communication. We have started to add support for using pthreads/shared memory for communication within a node. We are currently working with early users on this feature; it is not really ready for prime time yet. Likely the next release of PETSc will have strong support for this.
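To make the default behavior concrete, here is a minimal sketch of an ordinary PETSc program: the reduction hidden inside VecNorm() is a plain MPI_Allreduce() over PETSC_COMM_WORLD, and with a stock build it goes through the installed MPI even between cores that share a node.

    /* Minimal sketch: a Vec distributed across all ranks; the norm
       computation communicates through whatever MPI PETSc was
       configured with, whether the ranks live on one node or many. */
    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            x;
      PetscReal      nrm;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
      ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
      ierr = VecSetSizes(x, PETSC_DECIDE, 100);CHKERRQ(ierr); /* 100 entries split across ranks */
      ierr = VecSetFromOptions(x);CHKERRQ(ierr);
      ierr = VecSet(x, 1.0);CHKERRQ(ierr);
      ierr = VecNorm(x, NORM_2, &nrm);CHKERRQ(ierr);          /* MPI_Allreduce under the hood */
      ierr = PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)nrm);CHKERRQ(ierr);
      ierr = VecDestroy(&x);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

Run with, e.g., mpiexec -n 4 ./a.out: the four ranks exchange data through the configured MPI even when they are four cores of the same chip.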
Barry
>
> Thanks,
> Ravi.
>
> _________________________________________
>
> Dr. Ravi Kannan
> Associate Editor of Scientific Journals International
> Editorial Board, Journal of Aerospace Engineering and Technology
> Who’s Who in Thermal Fluids (https://www.thermalfluidscentral.org/who/browse-entry.php?e=9560)
> Research Engineer, CFD Research Corporation
> 256.726.4851
> rxk at cfdrc.com
> http://ravikannan.jimdo.com/,
> https://www.msu.edu/~kannanra/homepage.html
> _________________________________________
>