<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Sorry for the late reply; this took
longer than I expected.<br>
A quick update on my situation: <br>
On CETUS I have to use a custom build of valgrind, which has to be
linked into my code with -Wl,-e,_start_valgrind at link time (I
also add the corresponding object file and libraries).<br>
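For reference, the link step looks roughly like this (a sketch only: the compiler name, object files, and valgrind paths below are placeholders; the one piece taken from my actual build is the -Wl,-e,_start_valgrind flag):

```shell
# Hypothetical link line -- everything except -Wl,-e,_start_valgrind is a
# placeholder. Overriding the ELF entry point with -e,_start_valgrind makes
# the linked-in valgrind take control before the program's own startup code.
mpif90 -o feap main.o feap_objs.o \
  -Wl,-e,_start_valgrind \
  /soft/perftools/valgrind/valgrind.o \
  -L$PETSC_DIR/$PETSC_ARCH/lib -lpetsc
```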
After that I can run my code with the following arguments:<br>
<blockquote>feap
--ignore-ranges=0x4000000000000-0x4063000000000,0x003fdc0000000-0x003fe00000000
--suppressions=/soft/perftools/valgrind/cnk-baseline.supp<br>
</blockquote>
but I can't pass the usual PETSc arguments (-ksp_type, -pc_type,
...) on the command line, since valgrind does not recognize
them.<br>
So I set PETSC_OPTIONS='-ksp_type preonly -pc_type
lu -pc_factor_mat_solver_package mumps -ksp_diagonal_scale' to
pass my options to PETSc instead.<br>
However, I am a little worried that PETSc still receives the
command-line arguments meant for valgrind, since I get the
following stderr from valgrind:<br>
<br>
stderr[0]: ==1== by 0x38BEA3F: handle_SCSS_change
(m_signals.c:963)<br>
stderr[0]: ==1== by 0x38C13A7: vgPlain_do_sys_sigaction
(m_signals.c:1114)<br>
stderr[0]: ==1== by 0x3962FBF:
vgSysWrap_linux_sys_rt_sigaction_before (syswrap-linux.c:3073)<br>
stderr[0]: ==1== by 0x3928BF7: vgPlain_client_syscall
(syswrap-main.c:1464)<br>
stderr[0]: ==1== by 0x3925F4B: vgPlain_scheduler
(scheduler.c:1061)<br>
stderr[0]: ==1== by 0x3965EB3: run_a_thread_NORETURN
(syswrap-linux.c:103)<br>
stderr[0]: <br>
stderr[0]: sched status:<br>
stderr[0]: running_tid=1<br>
stderr[0]: <br>
stderr[0]: Thread 1: status = VgTs_Runnable<br>
stderr[0]: ==1== at 0x34290E4: __libc_sigaction
(sigaction.c:80)<br>
stderr[0]: ==1== by 0x3BFF3A7: signal (signal.c:49)<br>
stderr[0]: ==1== by 0x1E710DF: PetscPushSignalHandler (in
/projects/shearbands/ShearBands/parfeap/feap)<br>
stderr[0]: ==1== by 0x18BEA87: PetscOptionsCheckInitial_Private
(in /projects/shearbands/ShearBands/parfeap/feap)<br>
stderr[0]: ==1== by 0x18E132F: petscinitialize (in
/projects/shearbands/ShearBands/parfeap/feap)<br>
stderr[0]: ==1== by 0x1027557: pstart (in
/projects/shearbands/ShearBands/parfeap/feap)<br>
stderr[0]: ==1== by 0x1000B1F: MAIN__ (feap83.f:213)<br>
stderr[0]: ==1== by 0x342ABD7: main (fmain.c:21)<br>
stderr[0]: <br>
stderr[0]: <br>
stderr[0]: Note: see also the FAQ in the source distribution.<br>
stderr[0]: It contains workarounds to several common problems.<br>
stderr[0]: In particular, if Valgrind aborted or crashed after<br>
stderr[0]: identifying problems in your program, there's a good
chance<br>
stderr[0]: that fixing those problems will prevent Valgrind
aborting or<br>
stderr[0]: crashing, especially if it happened in m_mallocfree.c.<br>
stderr[0]: <br>
stderr[0]: If that doesn't help, please report this bug to:
<a class="moz-txt-link-abbreviated" href="http://www.valgrind.org">www.valgrind.org</a><br>
stderr[0]: <br>
stderr[0]: In the bug report, send all the above text, the
valgrind<br>
stderr[0]: version, and what OS and version you are using. Thank<br>
stderr[0]: s.<br>
stderr[0]: <br>
<br>
I am only showing the output of rank 0, but all ranks seem to
produce roughly the same error message.<br>
Since the problem happens inside petscinitialize, I have few ways
to check what's going wrong...<br>
Any ideas?<br>
<pre class="moz-signature" cols="72">Best,
Luc</pre>
On 10/28/2014 02:53 PM, Barry Smith wrote:<br>
</div>
<blockquote
cite="mid:7955DA16-318F-4E9B-B563-B8FBAB623095@mcs.anl.gov"
type="cite">
<pre wrap="">
You don’t care about checking for leaks. I use
-q --tool=memcheck --num-callers=20 --track-origins=yes
</pre>
<blockquote type="cite">
<pre wrap="">On Oct 28, 2014, at 1:50 PM, Luc Berger-Vergiat <a class="moz-txt-link-rfc2396E" href="mailto:lb2653@columbia.edu"><lb2653@columbia.edu></a> wrote:
Yes, I am running with --leak-check=full
Reconfiguring and recompiling the whole library and my code in debug mode does take quite some time on CETUS/MIRA...
Hopefully the queue will go up fast and I can give you some details about the issue.
Best,
Luc
On 10/28/2014 02:25 PM, Barry Smith wrote:
</pre>
<blockquote type="cite">
<pre wrap=""> You need to pass some options to valgrind telling it to check for memory corruption issues
</pre>
<blockquote type="cite">
<pre wrap="">On Oct 28, 2014, at 12:30 PM, Luc Berger-Vergiat <a class="moz-txt-link-rfc2396E" href="mailto:lb2653@columbia.edu"><lb2653@columbia.edu></a> wrote:
Ok, I'm recompiling PETSc in debug mode then.
Do you know what the call sequence should be on CETUS to get valgrind attached to PETSc?
Would this work for example:
runjob --np 32 -p 8 --block $COBALT_PARTNAME --cwd /projects/shearbands/job1/200/4nodes_32cores/LU --verbose=INFO --envs FEAPHOME8_3=/projects/shearbands/ShearBands352 PETSC_DIR=/projects/shearbands/petsc-3.5.2 PETSC_ARCH=arch-linux2-c-opt : /usr/bin/valgrind --log-file=valgrind.log.%p /projects/shearbands/ShearBands352/parfeap/feap -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -ksp_diagonal_scale < /projects/shearbands/job1/yesfile
Best,
Luc
On 10/28/2014 12:33 PM, Barry Smith wrote:
</pre>
<blockquote type="cite">
<pre wrap=""> Hmm, this should never happen. In the code
  ierr = PetscTableCreate(aij->B->rmap->n,mat->cmap->N+1,&gid1_lid1);CHKERRQ(ierr);
  for (i=0; i<aij->B->rmap->n; i++) {
    for (j=0; j<B->ilen[i]; j++) {
      PetscInt data,gid1 = aj[B->i[i] + j] + 1;
      ierr = PetscTableFind(gid1_lid1,gid1,&data);CHKERRQ(ierr);
Now mat->cmap->N+1 is the total number of columns in the matrix and gid1 are column entries which must always be smaller. Most likely there has been memory corruption somewhere before this point. Can you run with valgrind?
<a class="moz-txt-link-freetext" href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a>
Barry
</pre>
<blockquote type="cite">
<pre wrap="">On Oct 28, 2014, at 10:04 AM, Luc Berger-Vergiat <a class="moz-txt-link-rfc2396E" href="mailto:lb2653@columbia.edu"><lb2653@columbia.edu></a>
wrote:
Hi,
I am running a code on CETUS and I use PETSc as a linear solver.
Here is my submission command:
qsub -A shearbands -t 60 -n 4 -O 4nodes_32cores_Mult --mode script 4nodes_32cores_LU
Here is "4nodes_32cores_LU":
#!/bin/sh
LOCARGS="--block $COBALT_PARTNAME ${COBALT_CORNER:+--corner} $COBALT_CORNER ${COBALT_SHAPE:+--shape} $COBALT_SHAPE"
echo "Cobalt location args: $LOCARGS" >&2
################################
# 32 cores on 4 nodes jobs #
################################
runjob --np 32 -p 8 --block $COBALT_PARTNAME --cwd /projects/shearbands/job1/200/4nodes_32cores/LU --verbose=INFO --envs FEAPHOME8_3=/projects/shearbands/ShearBands352 PETSC_DIR=/projects/shearbands/petsc-3.5.2 PETSC_ARCH=arch-linux2-c-opt : /projects/shearbands/ShearBands352/parfeap/feap -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -ksp_diagonal_scale -malloc_log mlog -log_summary time.log < /projects/shearbands/job1/yesfile
I get the following error message:
[7]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[7]PETSC ERROR: Argument out of range
[7]PETSC ERROR: Petsc Release Version 3.5.2, unknown
[7]PETSC ERROR: key 532150 is greater than largest key allowed 459888
[7]PETSC ERROR: Configure options --known-mpi-int64_t=1 --download-cmake=1 --download-hypre=1 --download-metis=1 --download-parmetis=1 --download-plapack=1 --download-superlu_dist=1 --download-mumps=1 --download-ml=1 --known-bits-per-byte=8 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --known-memcmp-ok=1 --known-mpi-c-double-complex=1 --known-mpi-long-double=1 --known-mpi-shared-libraries=0 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 --known-sizeof-float=4 --known-sizeof-int=4 --known-sizeof-long-long=8 --known-sizeof-long=8 --known-sizeof-short=2 --known-sizeof-size_t=8 --known-sizeof-void-p=8 --with-batch=1 --with-blacs-include=/soft/libraries/alcf/current/gcc/SCALAPACK/ --with-blacs-lib=/soft/libraries/alcf/current/gcc/SCALAPACK/lib/libscalapack.a --with-blas-lapack-lib="-L/soft/libraries/alcf/current/gcc/LAPACK/lib -llapack -L/soft/libraries/alcf/current/gcc/BLAS/lib
-lblas" --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-fortran-kernels=1 --with-is-color-value-type=short --with-scalapack-include=/soft/libraries/alcf/current/gcc/SCALAPACK/ --with-scalapack-lib=/soft/libraries/alcf/current/gcc/SCALAPACK/lib/libscalapack.a --with-shared-libraries=0 --with-x=0 -COPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction" -CXXOPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction" -FOPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction"
[7]PETSC ERROR: #1 PetscTableFind() line 126 in /gpfs/mira-fs1/projects/shearbands/petsc-3.5.2/include/petscctable.h
[7]PETSC ERROR: #2 MatSetUpMultiply_MPIAIJ() line 33 in /gpfs/mira-fs1/projects/shearbands/petsc-3.5.2/src/mat/impls/aij/mpi/mmaij.c
[7]PETSC ERROR: #3 MatAssemblyEnd_MPIAIJ() line 702 in /gpfs/mira-fs1/projects/shearbands/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c
[7]PETSC ERROR: #4 MatAssemblyEnd() line 4900 in /gpfs/mira-fs1/projects/shearbands/petsc-3.5.2/src/mat/interface/matrix.c
Well at least that is what I think comes out after I read all the jammed up messages from my MPI processes...
I would guess that I am trying to allocate more memory than I should, which seems strange since the same problem runs fine on 2 nodes with 16 cores/node.
Thanks for the help
Best,
Luc
</pre>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<pre wrap="">
</pre>
</blockquote>
<pre wrap="">
</pre>
</blockquote>
<br>
</body>
</html>