<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" style="font-size:12pt; color:#000000; font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Thank you very much for your response.</p>
<p><br>
</p>
<p>I have tested the --download-openblas option, and it did not do what I expected.</p>
<p>The total CPU usage only rose to about 105%, so the build did not make significant use of parallelization.</p>
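<p>Perhaps I also need to set the OpenBLAS thread count explicitly; my understanding (an assumption on my part) is that it is read from the environment at startup:</p>

```shell
# OpenBLAS picks up its thread count from the environment when the
# program starts; if this is unset (or set to 1), BLAS calls stay
# on a single core.
export OPENBLAS_NUM_THREADS=4   # placeholder: match the core count
export OMP_NUM_THREADS=4        # used instead if OpenBLAS was built with OpenMP
./my_program                    # placeholder for my actual executable
```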
<p>I did not test MKL yet, because I'll first have to install it.</p>
<p><br>
</p>
<p>However, compiling PETSc with MUMPS works very well and significantly speeds up the calculation for my full-MPI code.
<br>
</p>
<p><br>
</p>
<p>I will have to do some more testing, but MPI with MUMPS support seems to be the way to go for me.</p>
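<p>For reference, the run command for the MUMPS-backed solve looks roughly like this (program name, process count, and option values are placeholders for my actual setup):</p>

```shell
# Shift-and-invert Krylov-Schur where the inner linear solves are
# handled by the MUMPS parallel direct solver (exact LU, so the
# inner KSP is just "preonly").
mpiexec -n 12 ./my_eps_solver \
    -eps_type krylovschur \
    -st_type sinvert \
    -st_ksp_type preonly \
    -st_pc_type lu \
    -st_pc_factor_mat_solver_type mumps
```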
<p><br>
</p>
<p>Thanks again, <br>
</p>
<p>Moritz<br>
</p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Jose E. Roman <jroman@dsic.upv.es><br>
<b>Sent:</b> Tuesday, June 5, 2018 5:43:37 PM<br>
<b>To:</b> Moritz Cygorek<br>
<b>Cc:</b> petsc-users@mcs.anl.gov<br>
<b>Subject:</b> Re: [petsc-users] Documentation for different parallelization options</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">For multi-threaded parallelism you have to use a multi-threaded BLAS such as MKL or OpenBLAS:<br>
$ ./configure --with-blaslapack-dir=$MKLROOT<br>
or<br>
$ ./configure --download-openblas<br>
<br>
For MPI parallelism, if you are solving linear systems within EPS you most probably need PETSc to be configured with a parallel linear solver such as MUMPS; see section 3.4.1 of SLEPc's user manual.<br>
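A typical configure line for that looks like the following (the exact set of --download packages is an assumption; MUMPS itself requires ScaLAPACK, and METIS/ParMETIS are optional orderings):<br>

```shell
# Sketch: build PETSc with the MUMPS parallel direct solver and its
# dependencies downloaded and built automatically by configure.
./configure --download-mumps \
            --download-scalapack \
            --download-metis \
            --download-parmetis
```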
<br>
Jose<br>
<br>
<br>
&gt; On 5 Jun 2018, at 19:00, Moritz Cygorek <mcygorek@uottawa.ca> wrote:<br>
> <br>
> Hi everyone,<br>
> <br>
> I'm looking for a document/tutorial/howto that describes the different options to compile PETSc with parallelization.<br>
> <br>
> My problem is the following: <br>
> I'm trying to solve a large sparse eigenvalue problem using the Krylov-Schur method implemented in SLEPc<br>
> When I install SLEPc/PETSc on my Ubuntu laptop via apt-get, everything works smoothly and parallelization works automatically.<br>
&gt; I can see this from the fact that the CPU load of the process (only one process, not launched with mpiexec) is close to 400% according to "top".<br>
&gt; Therefore, it seems that OpenMP is being used.<br>
> <br>
> I have access to better computers and I would like to install SLEPc/PETSc there, but I have to configure it manually.<br>
&gt; I have tried different options, none of them satisfactory:<br>
> <br>
&gt; When I compile PETSc with the --with-openmp flag, I see that the program never runs with a CPU load above 100%.<br>
&gt; I use the same command to call the program as on my laptop, where everything works. So it seems that OpenMP is somehow not activated.<br>
> An old mailing list entry says that I am supposed to configure PETSc using --with-threadcomm --with-openmp, which I did, but it also didn't help.<br>
> However that entry was from 2014 and I found in the list of changes for PETSc in version 3.6:<br>
> "Removed all threadcomm support including --with-pthreadclasses and --with-openmpclasses configure arguments"<br>
> <br>
&gt; Does that mean that OpenMP is no longer supported in newer versions?<br>
> <br>
> <br>
&gt; Given my resources, I would prefer OpenMP over MPI. Nevertheless, I then spent some time going full MPI without OpenMP and splitting the sparse matrix across several processes. When I start the program using mpiexec,<br>
&gt; I do indeed see that multiple processes are started, but even when I use 12 processes, the computation time is about the same as with only 1 process.<br>
> Is there anything I have to tell the EPS solver to activate parallelization?<br>
> <br>
> <br>
&gt; So, all in all, I can't get anything to run faster on a large multi-core computer than on my old crappy laptop.
<br>
> <br>
> <br>
&gt; I have no idea how to start debugging and assessing the performance, and the documentation on this issue on the website is not very detailed.
<br>
> Can you give me a few hints?<br>
> <br>
> Regards,<br>
> Moritz<br>
> <br>
> <br>
> <br>
<br>
</div>
</span></font>
</body>
</html>