[petsc-users] Documentation for different parallelization options

Moritz Cygorek mcygorek at uottawa.ca
Wed Jun 6 13:20:15 CDT 2018


Thank you very much for your response.


I have tested the --download-openblas option and it did not do what I expected.

The total CPU usage only rose to something like 105%, so it made hardly any use of parallelization.

I did not test MKL yet, because I'll first have to install it.


However, compiling PETSc with MUMPS works very well and significantly speeds up the calculation for my full-MPI code.


I will have to do some more testing, but MPI with MUMPS support seems to be the way to go for me.
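For reference, my invocation currently looks roughly like the following (the binary name is just a placeholder for my own solver; the option names assume a recent PETSc where the MUMPS factorization is selected via -st_pc_factor_mat_solver_type):

```shell
# Run the SLEPc eigensolver on 12 MPI ranks, delegating the linear
# solves inside the spectral transformation to MUMPS; -log_view
# prints a performance summary at the end of the run.
mpiexec -n 12 ./my_eps_solver \
    -eps_type krylovschur \
    -st_ksp_type preonly \
    -st_pc_type lu \
    -st_pc_factor_mat_solver_type mumps \
    -log_view
```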


Thanks again,

Moritz



________________________________
From: Jose E. Roman <jroman at dsic.upv.es>
Sent: Tuesday, June 5, 2018 5:43:37 PM
To: Moritz Cygorek
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Documentation for different parallelization options

For multi-threaded parallelism you have to use a multi-threaded BLAS such as MKL or OpenBLAS:
$ ./configure --with-blaslapack-dir=$MKLROOT
or
$ ./configure --download-openblas
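Note that a threaded BLAS only uses multiple cores if the thread count is set; OpenBLAS honors OPENBLAS_NUM_THREADS (or OMP_NUM_THREADS when built with OpenMP), and MKL honors MKL_NUM_THREADS. A minimal sketch, assuming a 4-core machine and a placeholder binary name:

```shell
# Tell the threaded BLAS how many threads to use before launching
# the (sequential, non-MPI) program.
export OMP_NUM_THREADS=4
export OPENBLAS_NUM_THREADS=4
./my_eps_solver -eps_type krylovschur
```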

For MPI parallelism, if you are solving linear systems within EPS you most probably need PETSc to be configured with a parallel direct linear solver such as MUMPS; see section 3.4.1 of the SLEPc users manual.
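A typical configure line for that setup is sketched below (the MPI path is a placeholder; MUMPS requires ScaLAPACK, and METIS/ParMETIS are optional orderings it can use):

```shell
# Configure PETSc with MPI and a parallel direct solver (MUMPS).
./configure --with-mpi-dir=/path/to/mpi \
            --download-mumps --download-scalapack \
            --download-metis --download-parmetis
```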

Jose


> On 5 Jun 2018, at 19:00, Moritz Cygorek <mcygorek at uottawa.ca> wrote:
>
> Hi everyone,
>
> I'm looking for a document/tutorial/howto that describes the different options to compile PETSc with parallelization.
>
> My problem is the following:
> I'm trying to solve a large sparse eigenvalue problem using the Krylov-Schur method implemented in SLEPc
> When I install SLEPc/PETSc on my Ubuntu laptop via apt-get, everything works smoothly and parallelization works automatically.
> I see this from the fact that the CPU load of the process (only one process, not using mpiexec) is close to 400% according to "top".
> Therefore, it seems that OpenMP is used.
>
> I have access to better computers and I would like to install SLEPc/PETSc there, but I have to configure it manually.
> I have tried different options, none of them satisfactory:
>
> When I compile PETSc with the --with-openmp flag, I see that the program never runs with cpu load above 100%.
> I use the same command to call the program as on my laptop, where everything works. So it seems that OpenMP is somehow not activated.
> An old mailing list entry says that I am supposed to configure PETSc using --with-threadcomm --with-openmp, which I did, but it also didn't help.
> However that entry was from 2014 and I found in the list of changes for PETSc in version 3.6:
> "Removed all threadcomm support including --with-pthreadclasses and --with-openmpclasses configure arguments"
>
> Does that mean that OpenMP is no longer supported in newer versions?
>
>
> Given my resources, I would prefer OpenMP over MPI. Nevertheless, I then spent some time going full MPI without OpenMP and splitting the sparse matrix across several processes. When I start the program using mpiexec,
> I do see that multiple processes are started, but even when I use 12 processes, the computation time is about the same as with only 1 process.
> Is there anything I have to tell the EPS solver to activate parallelization?
>
>
> So, all in all, I can't get anything to run faster on a large multi-core computer than on my old crappy laptop.
>
>
> I have no idea where to start debugging and assessing the performance, and the documentation on this issue on the website is not very detailed.
> Can you give me a few hints?
>
> Regards,
> Moritz
>
>
>
