[petsc-users] consistency of PETSc/SLEPc with MPI, BLACS, ScaLAPACK calls...

Satish Balay balay at mcs.anl.gov
Mon Nov 27 12:09:43 CST 2017


These questions are more pertinent to MKL - i.e. what interface does
the MKL ilp64 BLACS library provide for Cblacs_get() etc.

The following URL has some info - and some references to ilp64 MPI:
https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/474149

[PETSc is not tested with ilp64 MPI]
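
If it helps, a minimal compile-time sanity check is sketched below. It
assumes MKL's ilp64 BLACS declares its integer arguments as MKL_INT,
and that mkl_types.h/mkl_blacs.h provide the definitions - verify both
against your MKL installation:

  #include <petscsys.h>
  #include <mkl_types.h>   /* MKL_INT */
  #include <mkl_blacs.h>   /* Cblacs_* prototypes */

  /* Fail the build if PETSc's BLAS integer width does not match MKL's;
     with ilp64 MKL and --known-64-bit-blas-indices both should be 8
     bytes. (_Static_assert requires a C11 compiler.) */
  _Static_assert(sizeof(PetscBLASInt) == sizeof(MKL_INT),
                 "PetscBLASInt and MKL_INT widths differ");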

Satish

On Mon, 27 Nov 2017, Giacomo Mulas wrote:

> On Mon, 27 Nov 2017, Jose E. Roman wrote:
> 
> > You have PetscInt, PetscBLASInt and PetscMPIInt.
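> > 
> > For example (a minimal sketch; PetscBLASIntCast() and PetscMPIIntCast()
> > are the PETSc helpers for converting a PetscInt to the narrower types
> > with an overflow check):
> > 
> >   PetscInt       n = 1000;  /* PETSc index type, 32- or 64-bit   */
> >   PetscBLASInt   bn;        /* for BLAS/LAPACK/ScaLAPACK calls   */
> >   PetscMPIInt    rank;      /* for MPI calls (a plain C int)     */
> >   PetscErrorCode ierr;
> > 
> >   ierr = PetscBLASIntCast(n, &bn);CHKERRQ(ierr);
> >   ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);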
> 
> I will try to work my way through it. In most cases it looks very clear,
> but there are some borderline cases in which things are not so clearly
> cut. For example, if I call Cblacs_get(-1, 0, &ictxt) to get a context,
> I would guess that the context should be PetscMPIInt, even though I get
> it via a BLACS call. Similarly, Cblacs_gridinit() and Cblacs_gridinfo()
> deal with strictly MPI stuff, so I bet they should all take and return
> PetscMPIInt variables. On the other hand, if I use
> 
>   lnr = numroc_(&n, &nbr, &myrow, &ione, &nprows);
> 
> to get the number of rows (in this case) locally allocated to a
> distributed BLACS array, I would bet that lnr, n, nbr and ione should be
> PetscBLASInt, while myrow and nprows should be PetscMPIInt. Would anyone
> proficient with both BLACS/ScaLAPACK and PETSc care to confirm this, or
> correct it if I am wrong? Or perhaps just point me to where I can find
> the answers without further bother?
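> 
> For concreteness, here is the pattern in question as a sketch (my own
> code with placeholder values; if the ilp64 BLACS really takes 64-bit
> integers for every argument - which is exactly what I would need to
> confirm - then presumably everything here, including the context and
> the grid coordinates, must be PetscBLASInt):
> 
>   PetscBLASInt n = 1000, nbr = 64, lnr, ione = 1;
>   PetscBLASInt ictxt, myrow, mycol, nprows = 2, npcols = 2; /* 2x2 grid */
> 
>   Cblacs_get(-1, 0, &ictxt);               /* default system context */
>   Cblacs_gridinit(&ictxt, "Row-major", nprows, npcols);
>   Cblacs_gridinfo(ictxt, &nprows, &npcols, &myrow, &mycol);
>   lnr = numroc_(&n, &nbr, &myrow, &ione, &nprows);  /* local rows */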
> 
> Thanks
> Giacomo
> 
> > 
> > > On 27 Nov 2017, at 10:12, Giacomo Mulas <gmulas at oa-cagliari.inaf.it>
> > > wrote:
> > > 
> > > Hello.
> > > 
> > > I am using, within the same C code, both SLEPc/PETSc and
> > > ScaLAPACK/BLACS. On the big parallel machine on which I do production
> > > runs, I compiled SLEPc/PETSc with the "--known-64-bit-blas-indices"
> > > and "--with-64-bit-indices" options, linking them with the ilp64
> > > version of the Intel MKL libraries, while on the workstation on which
> > > I do development I use the standard libraries provided by the
> > > (Debian, in my case) packaging system. For SLEPc/PETSc themselves I
> > > just use the PETSc data types, and this automagically defines
> > > integers of the appropriate size on both machines.
> > > 
> > > However, when using BLACS, ScaLAPACK and MPI directly in the same
> > > code, I will obviously need to use consistent function definitions
> > > for them as well. Do I need to set up some complicated independent
> > > #ifdef machinery for this, or are there appropriate PETSc data types
> > > I can use that will ensure this consistency? Of course I am including
> > > SLEPc/PETSc include files, so all PETSc data types are defined
> > > according to the local PETSc/SLEPc options. Can some PETSc developer
> > > give me a hint on how to make my MPI, BLACS, ScaLAPACK (and PBLAS
> > > etc.) calls clean and consistent with this? Perhaps even by pointing
> > > me to some examples in the PETSc source code that I can read and take
> > > as a reference.
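> > > 
> > > For instance, the kind of guard I would like to avoid maintaining by
> > > hand looks like the sketch below (PETSC_HAVE_64BIT_BLAS_INDICES is my
> > > guess at the relevant petscconf.h macro, and the typedef name is
> > > mine; using PetscBLASInt directly would of course be preferable if it
> > > is guaranteed to match the linked BLACS):
> > > 
> > >   #if defined(PETSC_HAVE_64BIT_BLAS_INDICES)
> > >   typedef long long my_blacs_int;  /* ilp64 MKL on the cluster  */
> > >   #else
> > >   typedef int my_blacs_int;        /* standard Debian libraries */
> > >   #endif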
> > > 
> > > Thanks in advance
> > > Giacomo
> > > 
> > > --
> > > _________________________________________________________________
> > > 
> > > Giacomo Mulas <gmulas at oa-cagliari.inaf.it>
> > > _________________________________________________________________
> > > 
> > > INAF - Osservatorio Astronomico di Cagliari
> > > via della scienza 5 - 09047 Selargius (CA)
> > > 
> > > tel.   +39 070 71180255
> > > mob. : +39 329  6603810
> > > _________________________________________________________________
> > > 
> > > "When the storms are raging around you, stay right where you are"
> > >                         (Freddie Mercury)
> > > _________________________________________________________________