I tried to rebuild the optimized PETSc library by changing several
options and ran:

mpirun -np 2 ./ex19 -cuda_show_devices -dm_mat_type aijcusp -dm_vec_type cusp \
       -ksp_type fgmres -ksp_view -log_summary -pc_type none \
       -snes_monitor_short -snes_rtol 1.e-5

Options used:
--with-pthread=1 -O3                     -> crash
--with-pthread=0 -O2                     -> crash
--with-debugging=1 --with-pthread=1 -O2  -> OK
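
Spelled out as full configure invocations, the three builds would look
roughly like this (a sketch only; the --with-cuda/--with-cusp flags are
assumed here and their exact spelling depends on the PETSc version):

./configure --with-debugging=0 --with-pthread=1 COPTFLAGS=-O3 \
            --with-cuda=1 --with-cusp=1        # -> crash
./configure --with-debugging=0 --with-pthread=0 COPTFLAGS=-O2 \
            --with-cuda=1 --with-cusp=1        # -> crash
./configure --with-debugging=1 --with-pthread=1 COPTFLAGS=-O2 \
            --with-cuda=1 --with-cusp=1        # -> OK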

So --with-debugging=1 is the key to avoiding the crash. Not good for
performance, of course...

If it helps,

Pierre

Paul wrote:

> Previously, I had noticed strange behaviour when running the GPU code
> with the threadComm package. It might be worth trying to disable that
> code in the build to see if the problem persists?
>
> -Paul
>
> On Tue, Jan 14, 2014 at 9:19 AM, Karl Rupp <rupp@mcs.anl.gov> wrote:
>
>> Hi Pierre,
>>
>> I could reproduce the problem and also get some
uninitialized variable<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
warnings in Valgrind. The debug version detects these
errors, hence<br>
you only see the errors in the debug build. For the
optimized build,<br>
chances are good that the computed values are either
wrong or may<br>
become wrong in other environments. I'll see what I
can do when I'm<br>
again at GPU machine tomorrow (parallel GPU debugging
via SSH is not<br>
great...)<br>
</blockquote>
Sorry, I mean:<br>
<br>
Parallel calculation on CPU or GPU run well with PETSc
non optimized library<br>
Parallel calculation on GPU crashes with PETSc optimized
library (on CPU<br>
it is OK)<br>
</blockquote>
>>
>> The fact that it happens to run in one mode out of {debug, optimized}
>> but not in the other is at most a lucky coincidence, but it still
>> means that this is a bug we need to solve :-)
>>
>>> I could add that "mpirun -np 1 ex19" runs well for all builds, on
>>> CPU and GPU.
>>
>> I see valgrind warnings in the vector scatter routines, which is
>> likely the reason why it doesn't work with multiple MPI ranks.
>>
>> Best regards,
>> Karli
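
To make Karl's point about uninitialized variables concrete, here is a
minimal standalone sketch (illustrative C, not actual PETSc code) of the
kind of bug that can appear to work in one build and fail in another:

#include <stdio.h>

int main(void)
{
    double sum;      /* bug: never initialized */
    int    i;

    for (i = 1; i <= 3; i++)
        sum += i;    /* garbage propagates through every iteration */

    /* valgrind (memcheck) reports "use of uninitialised value" here,
     * where the value first affects observable behaviour. A debug
     * build may happen to read a zeroed stack slot and print 6; an
     * optimized build may keep 'sum' in a register holding garbage,
     * so the result silently changes between builds. */
    printf("sum = %g\n", sum);
    return 0;
}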
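
For reference, those per-rank valgrind warnings can be captured by
putting valgrind between mpirun and the executable; the log-file name
below is just a suggestion (%p expands to each rank's PID):

mpirun -np 2 valgrind --track-origins=yes --log-file=valgrind.%p.log \
       ./ex19 -dm_mat_type aijcusp -dm_vec_type cusp -ksp_type fgmres \
       -pc_type none -snes_monitor_short -snes_rtol 1.e-5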

--
Trio_U support team

Marthe ROUX (Saclay)
Pierre LEDAC (Grenoble)