[petsc-users] MPI-FFTW example crashes

Smith, Barry F. bsmith at mcs.anl.gov
Sun Jun 2 23:31:24 CDT 2019


  I assume the example runs fine with --download-fftw on theta?

   Is the cray-fftw-3.3.8.1 compatible with the MPI you are using? 

   Perhaps cray-fftw-3.3.8.1 assumes extra padding in the array lengths compared to standard FFTW. You could add some extra length to the arrays allocated by PETSc and see if the problem goes away; a sketch of a padding-safe allocation is below.
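
   For MPI transforms FFTW may legitimately need more local storage than local_n0 * n1 elements (padding, uneven slab splits), so the safest approach is to ask the library itself via fftw_mpi_local_size_2d() and allocate that many elements. A minimal sketch, not the ex143 code; the global sizes n0 = n1 = 128 are placeholders:

    #include <mpi.h>
    #include <fftw3-mpi.h>

    int main(int argc, char **argv)
    {
      MPI_Init(&argc, &argv);
      fftw_mpi_init();

      /* Hypothetical global sizes; ex143 gets its own from options. */
      ptrdiff_t n0 = 128, n1 = 128;
      ptrdiff_t alloc_local, local_n0, local_0_start;

      /* Ask FFTW how many complex elements this rank must allocate.
         alloc_local is allowed to exceed local_n0 * n1, and a vendor
         build such as cray-fftw may pad differently than stock FFTW. */
      alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                           &local_n0, &local_0_start);

      /* Allocate alloc_local elements, NOT local_n0 * n1. */
      fftw_complex *data = fftw_alloc_complex(alloc_local);

      /* ... create plan with fftw_mpi_plan_dft_2d(), execute, ... */

      fftw_free(data);
      MPI_Finalize();
      return 0;
    }

   If the crash disappears once the buffers are sized from fftw_mpi_local_size_2d() (or simply over-allocated), that would point at a padding mismatch between the PETSc-allocated arrays and what the Cray FFTW build expects.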

   Barry


> On Jun 2, 2019, at 11:14 PM, Sajid Ali via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> Hi PETSc-developers, 
> 
> I'm trying to run ex143 on a cluster (alcf-theta). I compiled PETSc on the login node with cray-fftw-3.3.8.1, and there was no error in either configure or make. 
> 
> When I try running ex143 with 1 MPI rank on a compute node, everything works fine, but with 2 MPI ranks it crashes with an illegal instruction caused by memory corruption. I tried running it with valgrind, but the available valgrind module on theta gives the error `valgrind: failed to start tool 'memcheck' for platform 'amd64-linux': No such file or directory`. 
> 
> To get around this, I tried running it with gdb4hpc and have attached the backtrace, which shows the crash occurring inside the MPI FFTW call. I also attach the output obtained with the -start_in_debugger command line option.
> 
> What could possibly cause this error, and how do I fix it? 
> 
> Thank You, 
> Sajid Ali
> Applied Physics
> Northwestern University
> <error.txt><gdb4hpc.txt>
