[petsc-users] Error with VecDestroy_MPIFFTW+0x61
Matthew Knepley
knepley at gmail.com
Sun Apr 14 20:28:45 CDT 2019
On Sun, Apr 14, 2019 at 9:12 PM Sajid Ali <sajidsyed2021 at u.northwestern.edu>
wrote:
> Just to confirm, there's no error when running with one rank. The error
> occurs only with mpirun -np x (x>1).
>
This is completely broken. I attached a version that will work in parallel,
but it's ugly.
PETSc People:
MatCreateVecsFFT() calls fftw_malloc() in parallel!!! What possible
motivation could there be? This causes a failure because the custom destroy
calls fftw_free(). VecDuplicate() allocates the new array with PetscMalloc(),
but the duplicate inherits the custom destroy, so fftw_free() gets called on
that array and chokes on the header we put on all allocated blocks. It's not
easy to see who wrote the fftw_malloc() lines, but they seem to be at least
8 years old. I can convert them to PetscMalloc(), but do we have any tests
that would make us confident that this is not wrecking something? Is anyone
familiar with this part of the code?
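
To make the mismatch concrete, here is a minimal standalone sketch of the
two allocation/free pairings involved (illustrative only, not the PETSc
source; the buffer size 16 is arbitrary and error handling is abbreviated):

#include <petscsys.h>
#include <fftw3.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  PetscScalar   *a, *b;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* The FFTW path: buffer from fftw_malloc(), released with fftw_free(). */
  a = (PetscScalar *)fftw_malloc(16 * sizeof(PetscScalar));
  fftw_free(a);                       /* fine: matching allocator */

  /* The VecDuplicate path: buffer from PetscMalloc1(). PETSc places a
     header in front of the block it hands back, so passing this pointer
     to fftw_free() would free the wrong address and corrupt the heap. */
  ierr = PetscMalloc1(16, &b); CHKERRQ(ierr);
  /* fftw_free(b);   <-- the broken pairing described above */
  ierr = PetscFree(b); CHKERRQ(ierr); /* fine: matching allocator */

  ierr = PetscFinalize();
  return ierr;
}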
Matt
> Attaching the error log.
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
Attachment: ex_modify.c (application/octet-stream, 1407 bytes)
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20190414/d543ab49/attachment.obj>