[petsc-users] Memory corruption with two-dimensional array and PetscMemzero
Christophe Ortiz
christophe.ortiz at ciemat.es
Wed May 14 06:53:04 CDT 2014
Ok, I just found the answer regarding the memory corruption with the
two-dimensional array and PetscMemzero.
Instead of
ierr = PetscMemzero(diagbl,dof*dof*sizeof(PetscScalar));CHKERRQ(ierr);
One must do the following:
for (k = 0; k < dof; k++) {
  ierr = PetscMemzero(diagbl[k],dof*sizeof(PetscScalar));CHKERRQ(ierr);
}
Indeed, with the ** notation the two-dimensional array consists of dof
separately allocated rows of dof values each, so the storage is not
contiguous: diagbl itself is just an array of dof row pointers. You cannot
zero dof*dof values with a single call starting at diagbl; you must iterate
over the rows and zero dof values at a time.
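An alternative (just a sketch, untested, and not what I ended up doing) is to
allocate the dof*dof values as one contiguous block and point the row pointers
into it; then a single PetscMemzero() over the data block is valid:

PetscScalar **diagbl, *diagdata;
PetscInt    k;

diagdata = (PetscScalar*)malloc(sizeof(PetscScalar) * dof * dof);  /* contiguous storage for all dof*dof entries */
diagbl   = (PetscScalar**)malloc(sizeof(PetscScalar*) * dof);      /* array of dof row pointers                  */
for (k = 0; k < dof; k++) {
  diagbl[k] = diagdata + k*dof;                                    /* row k starts at offset k*dof               */
}

ierr = PetscMemzero(diagdata,dof*dof*sizeof(PetscScalar));CHKERRQ(ierr);  /* zeroes every row in one call */

This keeps the diagbl[i][j] indexing used with MatSetValuesBlocked() while
making the storage layout safe for a single call.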
Now it works fine.
Christophe
--
Please consider the environment before printing this email.
On Wed, May 14, 2014 at 1:29 PM, <petsc-users-request at mcs.anl.gov> wrote:
>
>
> Today's Topics:
>
> 1. Re: Problem with MatZeroRowsColumnsIS() (Barry Smith)
> 2. Re: Problem with MatZeroRowsColumnsIS() (Jed Brown)
> 3. Re: PetscLayoutCreate for Fortran (Hossein Talebi)
> 4. Re: Memory usage during matrix factorization
> (De Groof, Vincent Frans Maria)
> 5. petsc 3.4, mat_view and prefix problem (Klaij, Christiaan)
> 6. Memory corruption with two-dimensional array and
> PetscMemzero (Christophe Ortiz)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 13 May 2014 21:00:50 -0500
> From: Barry Smith <bsmith at mcs.anl.gov>
> To: "P?s?k, Adina-Erika" <puesoek at uni-mainz.de>
> Cc: "<petsc-users at mcs.anl.gov>" <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Problem with MatZeroRowsColumnsIS()
> Message-ID: <1C839F22-8ADF-4904-B136-D1461AE38187 at mcs.anl.gov>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
> Given that Jed wrote MatZeroRowsColumns_MPIAIJ() it is unlikely to be
> wrong.
>
> You wrote
>
> Calculating the norm2 of the residuals defined above in each case gives:
> MatZeroRowsIS() 1cpu: norm(res,2) = 0
> MatZeroRowsIS() 4cpu: norm(res,2) = 0
> MatZeroRowsColumnsIS() 1cpu: norm(res,2) = 1.6880e-10
> MatZeroRowsColumnsIS() 4cpu: norm(res,2) = 7.3786e+06
>
> Why do you conclude this is wrong? MatZeroRowsColumnsIS() IS supposed to
> change the right-hand side in a way different from MatZeroRowsIS().
>
> Explanation. For simplicity, reorder the matrix rows/columns so that the
> zeroed ones come last, and assume the matrix is symmetric. Then you have
>
> ( A B ) (x_A) = (b_A)
> ( B D ) (x_B) (b_B)
>
> with MatZeroRows the new system is
>
> ( A B ) (x_A) = (b_A)
> ( 0 I ) (x_B) (x_B)
>
> it has the same solution as the original problem with the given x_B
>
> with MatZeroRowsColumns the new system is
>
> ( A 0 ) (x_A) = (b_A) - B*x_B
> ( 0 I ) (x_B) (x_B)
>
> note the right hand side needs to be changed so that the new problem has
> the same solution.
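>
> For illustration only (a sketch, not from the original reply): with the
> unmodified VV_MAT, and assuming as in this thread that x_push is zero except
> at the Dirichlet dofs and that the diagonal value v_vv is 1.0, the same
> right-hand-side change could be written out by hand:
>
>    Vec corr;
>    ierr = VecDuplicate(rhs,&corr);CHKERRQ(ierr);
>    ierr = MatMult(VV_MAT,x_push,corr);CHKERRQ(ierr); /* corr = (B*x_B ; D*x_B) */
>    ierr = VecAXPY(rhs,-1.0,corr);CHKERRQ(ierr);      /* b := b - corr          */
>    ierr = VecDestroy(&corr);CHKERRQ(ierr);
>    /* then set the rhs entries at the zeroed rows to the prescribed x_push
>       values; MatZeroRowsColumnsIS() does all of this for you in one call */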
>
> Barry
>
>
> On May 9, 2014, at 9:50 AM, Püsök, Adina-Erika <puesoek at uni-mainz.de>
> wrote:
>
> > Yes, I tested the implementation with both MatZeroRowsIS() and
> > MatZeroRowsColumnsIS(). But first, I will be more explicit about the
> > problem I set out to solve:
> >
> > We have a Dirichlet block of size (L,W,H) centered at (xc,yc,zc), which
> > is much smaller than the model domain, and we set Vx = Vpush, Vy = 0 within
> > the block (Vz is left free for easier convergence).
> > As I said before, since the code does not have a monolithic matrix, but
> 4 submatrices (VV VP; PV PP), and the rhs has 2 sub vectors rhs=(f; g), my
> approach is to modify only (VV, VP, f) for the Dirichlet BC.
> >
> > The way I tested the implementation:
> > 1) Output (VV, VP, f, Dirichlet dofs) - unmodified (no Dirichlet BC)
> > 2) Output (VV, VP, f, Dirichlet dofs) - a) modified with MatZeroRowsIS(),
> > - b) modified with MatZeroRowsColumnsIS() -> S_PETSc
> > Again, the only difference between a) and b) is:
> > // ierr = MatZeroRowsColumnsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
> > // ierr = MatZeroRowsColumnsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
> >
> > ierr = MatZeroRowsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
> > ierr = MatZeroRowsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
> > 3) Read them in Matlab and perform the exact same operations on the
> unmodified matrices and f vector. -> S_Matlab
> > 4) Compare S_PETSc with S_Matlab. If the implementation is correct, they
> should be equal (VV, VP, f).
> > 5) Check for 1 cpu and 4 cpus.
> >
> > Now to answer your questions:
> >
> > a,b,d) Yes, matrix modification is done correctly (check the spy
> diagrams below) in all cases: MatZeroRowsIS() and MatZeroRowsColumnsIS()
> on 1 and 4 cpus.
> >
> > I should have said that in the piece of code above:
> > v_vv = 1.0;
> > v_vp = 0.0;
> > The vector x_push is a duplicate of rhs, with zero elements except the
> values for the Dirichlet dofs.
> >
> > c) The rhs is a different matter. With MatZeroRows() there is no
> > problem: the rhs is equivalent to the one in Matlab, both sequential and
> > parallel.
> > However, with MatZeroRowsColumns() the residual contains nonzero
> > elements, and in parallel there are even more of them (1 cpu: 63, 4
> > cpu: 554). But if you look carefully, the nonzero residuals
> > are very small, within +/- 1e-10.
> > So, I did a tolerance filter:
> >
> > tol = 1e-10;
> > res = f_petsc - f_mod_matlab;
> > for i = 1:length(res)
> >     if abs(res(i)) > 0 && abs(res(i)) < tol
> >         res(i) = 0;
> >     end
> > end
> >
> > and then the f_petsc and f_mod_matlab are equivalent on 1 and 4 cpus
> (figure 5). So it seems that MatZeroRowsColumnsIS() might give some nonzero
> residuals.
> >
> > Calculating the norm2 of the residuals defined above in each case gives:
> > MatZeroRowsIS() 1cpu: norm(res,2) = 0
> > MatZeroRowsIS() 4cpu: norm(res,2) = 0
> > MatZeroRowsColumnsIS() 1cpu: norm(res,2) = 1.6880e-10
> > MatZeroRowsColumnsIS() 4cpu: norm(res,2) = 7.3786e+06
> >
> > Since this is purely a problem of matrix and vector
> > assembly/manipulation, I think the nonzero residuals of the rhs with
> > MatZeroRowsColumnsIS() cause the parallel artefacts that I showed last time.
> > If you need the raw data and the matlab scripts that I used for testing
> for your consideration, please let me know.
> >
> > Thanks,
> > Adina
> >
> > When performing the manual operations on the unmodified matrices and rhs
> vector in Matlab, I took into account:
> > - matlab indexing = petsc indexing +1;
> > - the vectors written to file for matlab (PETSC_VIEWER_BINARY_MATLAB)
> have the natural ordering, rather than the petsc ordering. On 1 cpu, they
> are equivalent, but on 4 cpus, the Dirichlet BC indices had to be converted
> to natural indices in order to perform the correct operations on the rhs.
> >
> > <spy_zerorows_1cpu.png>
> > <spy_zerorows_4cpu.png>
> > <spy_zerorowscol_1cpu.png>
> > <spy_zerorowscol_4cpu.png>
> > <residual_tol.png>
> >
> > On May 6, 2014, at 4:22 PM, Matthew Knepley wrote:
> >
> >> On Tue, May 6, 2014 at 7:23 AM, Püsök, Adina-Erika <
> puesoek at uni-mainz.de> wrote:
> >> Hello!
> >>
> >> I was trying to implement some internal Dirichlet boundary conditions
> into an aij matrix of the form: A=( VV VP; PV PP ). The idea was to
> create an internal block (let's say Dirichlet block) that moves with
> constant velocity within the domain (i.e. check all the dofs within the
> block and set the values according to the desired motion).
> >>
> >> Ideally, this means to zero the rows and columns in VV, VP, PV
> corresponding to the Dirichlet dofs and modify the corresponding rhs
> values. However, since we have submatrices and not a monolithic matrix A,
> we can choose to modify only VV and PV matrices.
> >> The global indices of the velocity points within the Dirichlet block
> are contained in the arrays rowid_array.
> >>
> >> What I want to point out is that the function MatZeroRowsColumnsIS()
> seems to create parallel artefacts, compared to MatZeroRowsIS() when run on
> more than 1 processor. Moreover, the results on 1 cpu are identical.
> >> See below the results of the test (the Dirichlet block is outlined in
> white) and the piece of the code involved where the 1) - 2) parts are the
> only difference.
> >>
> >> I am assuming that you are showing the result of solving the equations.
> It would be more useful, and presumably just as easy
> >> to say:
> >>
> >> a) Are the correct rows zeroed out?
> >>
> >> b) Is the diagonal element correct?
> >>
> >> c) Is the rhs value correct?
> >>
> >> d) Are the columns zeroed correctly?
> >>
> >> If we know where the problem is, it's easier to fix. For example, if the
> rhs values are
> >> correct and the rows are zeroed, then something is wrong with the
> solution procedure.
> >> Since ZeroRows() works and ZeroRowsColumns() does not, this is a
> distinct possibility.
> >>
> >> Thanks,
> >>
> >> Matt
> >>
> >> Thanks,
> >> Adina Pusok
> >>
> >> // Create an IS required by MatZeroRows()
> >> ierr =
> ISCreateGeneral(PETSC_COMM_WORLD,numRowsx,rowidx_array,PETSC_COPY_VALUES,&isx);
> CHKERRQ(ierr);
> >> ierr =
> ISCreateGeneral(PETSC_COMM_WORLD,numRowsy,rowidy_array,PETSC_COPY_VALUES,&isy);
> CHKERRQ(ierr);
> >> ierr =
> ISCreateGeneral(PETSC_COMM_WORLD,numRowsz,rowidz_array,PETSC_COPY_VALUES,&isz);
> CHKERRQ(ierr);
> >>
> >> 1) /*
> >> ierr = MatZeroRowsColumnsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
> >> ierr = MatZeroRowsColumnsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
> >> ierr = MatZeroRowsColumnsIS(VV_MAT,isz,v_vv,x_push,rhs);
> CHKERRQ(ierr);*/
> >>
> >> 2) ierr = MatZeroRowsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
> >> ierr = MatZeroRowsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
> >> ierr = MatZeroRowsIS(VV_MAT,isz,v_vv,x_push,rhs); CHKERRQ(ierr);
> >>
> >> ierr = MatZeroRowsIS(VP_MAT,isx,v_vp,PETSC_NULL,PETSC_NULL);
> CHKERRQ(ierr);
> >> ierr = MatZeroRowsIS(VP_MAT,isy,v_vp,PETSC_NULL,PETSC_NULL);
> CHKERRQ(ierr);
> >> ierr = MatZeroRowsIS(VP_MAT,isz,v_vp,PETSC_NULL,PETSC_NULL);
> CHKERRQ(ierr);
> >>
> >> ierr = ISDestroy(&isx); CHKERRQ(ierr);
> >> ierr = ISDestroy(&isy); CHKERRQ(ierr);
> >> ierr = ISDestroy(&isz); CHKERRQ(ierr);
> >>
> >>
> >> Results (velocity) with MatZeroRowsColumnsIS().
> >> 1cpu<r01_1cpu_rows_columns.png> 4cpu<r01_rows_columns.png>
> >>
> >> Results (velocity) with MatZeroRowsIS():
> >> 1cpu<r01_1cpu_rows.png> 4cpu<r01_rows.png>
> >>
> >>
> >>
> >> --
> >> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> >> -- Norbert Wiener
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 13 May 2014 22:28:19 -0600
> From: Jed Brown <jed at jedbrown.org>
> To: Barry Smith <bsmith at mcs.anl.gov>, Püsök, Adina-Erika
> <puesoek at uni-mainz.de>
> Cc: "<petsc-users at mcs.anl.gov>" <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Problem with MatZeroRowsColumnsIS()
> Message-ID: <87k39p7zmk.fsf at jedbrown.org>
> Content-Type: text/plain; charset="us-ascii"
>
> Barry Smith <bsmith at mcs.anl.gov> writes:
>
> > Given that Jed wrote MatZeroRowsColumns_MPIAIJ() it is unlikely to
> be wrong.
>
> Haha. Though MatZeroRowsColumns_MPIAIJ uses PetscSF, the implementation
> was written by Matt. I think it's correct, however, at least as of
> Matt's January changes in 'master'.
>
> ------------------------------
>
> Message: 3
> Date: Wed, 14 May 2014 07:43:21 +0200
> From: Hossein Talebi <talebi.hossein at gmail.com>
> To: Barry Smith <bsmith at mcs.anl.gov>
> Cc: petsc-users <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] PetscLayoutCreate for Fortran
> Message-ID:
> <CAEjaKwBc7X_jYw1K+wV0wMXJOdWAiwo1W=-
> 4b3hoO7_b3DwESQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thank you.
>
> Well, only the first part. I move the elements around and identify the halo
> nodes etc. However, I do not renumber the vertices to be contiguous on the
> CPUs as you described.
>
> BUT, I just noticed: I partition the domain based on the computational
> weight of the elements, which is different from that of the Mat-Vec calculation.
> This means my partitioning may not be efficient for the solution process.
>
> I think I will then go with the copy-in, solve, copy-out option.
>
>
>
>
> On Wed, May 14, 2014 at 3:06 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> >
> > On May 13, 2014, at 11:42 AM, Hossein Talebi <talebi.hossein at gmail.com>
> > wrote:
> >
> > >
> > > I have already decomposed the Finite Element system using Metis. I just
> > need to have the global rows exactly like how I define and I like to have
> > the answer in the same layout so I don't have to move things around the
> > processes again.
> >
> > Metis tells you a good partitioning; IT DOES NOT MOVE the elements to
> > form a good partitioning. Do you move the elements around based on what
> > Metis told you, and similarly do you renumber the elements (and vertices) to
> > be contiguously numbered on each process, with the first process getting the
> > first set of numbers, the second process the second set of numbers, etc.?
> >
> > If you do all that, then when you create Vec and Mat you should simply
> > set the local size (based on the number of local vertices on each
> > process). You never need to use PetscLayoutCreate(), and in fact if your
> > code was in C you would never use PetscLayoutCreate().
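> >
> > A minimal sketch of setting the local sizes directly (in C for brevity;
> > the Fortran calls are analogous, and nlocal is a placeholder for the
> > number of locally owned vertices):
> >
> >    Vec x;
> >    Mat A;
> >    ierr = VecCreate(PETSC_COMM_WORLD,&x);CHKERRQ(ierr);
> >    ierr = VecSetSizes(x,nlocal,PETSC_DECIDE);CHKERRQ(ierr);   /* local size from your decomposition */
> >    ierr = VecSetFromOptions(x);CHKERRQ(ierr);
> >
> >    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
> >    ierr = MatSetSizes(A,nlocal,nlocal,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
> >    ierr = MatSetFromOptions(A);CHKERRQ(ierr);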
> >
> > If you do not do all that then you need to do that first before you
> > start calling PETSc.
> >
> > Barry
> >
> > >
> > > No, I don't need it for something else.
> > >
> > > Cheers
> > > Hossein
> > >
> > >
> > >
> > >
> > > On Tue, May 13, 2014 at 6:36 PM, Matthew Knepley <knepley at gmail.com>
> > wrote:
> > > On Tue, May 13, 2014 at 11:07 AM, Hossein Talebi <
> > talebi.hossein at gmail.com> wrote:
> > > Hi All,
> > >
> > >
> > > I am using PETSc from Fortran. I would like to define my own layout,
> > > i.e. which row belongs to which CPU, since I have already done the domain
> > > decomposition. It appears that "PetscLayoutCreate" and the other routines
> > > do this. But the manual says it is not provided in Fortran.
> > >
> > > Is there any way that I can do this using Fortran? Does anyone have an
> > > example?
> > >
> > > You can do this for Vec and Mat directly. Do you want it for something
> > else?
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > > Cheers
> > > Hossein
> > >
> > >
> > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> their
> > experiments lead.
> > > -- Norbert Wiener
> > >
> > >
> > >
> > > --
> > > www.permix.org
> >
> >
>
>
> --
> www.permix.org
>
> ------------------------------
>
> Message: 4
> Date: Wed, 14 May 2014 08:29:47 +0000
> From: "De Groof, Vincent Frans Maria" <Vincent.De-Groof at uibk.ac.at>
> To: Jed Brown <jed at jedbrown.org>, Barry Smith <bsmith at mcs.anl.gov>
> Cc: "petsc-users at mcs.anl.gov" <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Memory usage during matrix factorization
> Message-ID:
> <17A78B9D13564547AC894B88C159674720382707 at XMBX4.uibk.ac.at>
> Content-Type: text/plain; charset="us-ascii"
>
>
> Thanks. I made a new function based on PetscGetCurrentUsage() which does
> what I want. It seems I am lucky, as the numbers returned by the
> OS seem to be reasonable.
>
>
> thanks again,
> Vincent
> ________________________________________
> From: Jed Brown [jed at jedbrown.org]
> Sent: Wednesday, 14 May 2014 00:59
> To: Barry Smith; De Groof, Vincent Frans Maria
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] Memory usage during matrix factorization
>
> Barry Smith <bsmith at mcs.anl.gov> writes:
> > Here is the code: it is only as reliable as the OS is at reporting
> the values.
>
> HPC vendors have a habit of implementing these functions to return
> nonsense. Sometimes they provide non-standard functions to return
> useful information.
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 14 May 2014 09:02:39 +0000
> From: "Klaij, Christiaan" <C.Klaij at marin.nl>
> To: "petsc-users at mcs.anl.gov" <petsc-users at mcs.anl.gov>
> Subject: [petsc-users] petsc 3.4, mat_view and prefix problem
> Message-ID: <dd17e553056f42e8aa5795d825e3b4e0 at MAR190N1.marin.local>
> Content-Type: text/plain; charset="utf-8"
>
> I'm having problems using mat_view in petsc 3.4.3 in combination
> with a prefix. For example in ../snes/examples/tutorials/ex70:
>
> mpiexec -n 2 ./ex70 -nx 16 -ny 24 -ksp_type fgmres -pc_type
> fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower
> -user_ksp -a00_mat_view
>
> does not print the matrix a00 to screen. This used to work in 3.3
> versions before the single consistent -xxx_view scheme.
>
> Similarly, if I add this at line 105 of
> ../ksp/ksp/examples/tutorials/ex1f.F:
>
> call MatSetOptionsPrefix(A,"a_",ierr)
>
> then running with -mat_view still prints the matrix to screen but
> running with -a_mat_view doesn't. I expected the opposite.
>
> The problem only occurs with mat, not with ksp. For example, if I
> add this at line 184 of ../ksp/ksp/examples/tutorials/ex1f.F:
>
> call KSPSetOptionsPrefix(ksp,"a_",ierr)
>
> then running with -a_ksp_monitor does print the residuals to
> screen and -ksp_monitor doesn't, as expected.
>
>
> dr. ir. Christiaan Klaij
> CFD Researcher
> Research & Development
> E mailto:C.Klaij at marin.nl
> T +31 317 49 33 44
>
>
> MARIN
> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 14 May 2014 13:29:55 +0200
> From: Christophe Ortiz <christophe.ortiz at ciemat.es>
> To: petsc-users at mcs.anl.gov
> Subject: [petsc-users] Memory corruption with two-dimensional array
> and PetscMemzero
> Message-ID:
> <
> CANBrw+7qxvvZbX2oLmniiOo1zb3-waGW+04Q3wepONkz-4DbVg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> I am experiencing some problems of memory corruption with PetscMemzero().
>
> I set the values of the Jacobian by blocks using MatSetValuesBlocked(). To
> do so, I use some temporary two-dimensional arrays [dof][dof] that I must
> reset in each loop iteration.
>
> Inside FormIJacobian, for instance, I declare the following two-dimensional
> array:
>
> PetscScalar diag[dof][dof];
>
> and then, to zero the array diag[][] I do
>
> ierr = PetscMemzero(diag,dof*dof*sizeof(PetscScalar));
>
> So far no problem. It works fine.
>
> Now, what I want is to have diag[][] as a global array so all functions can
> have access to it. Therefore, I declare it outside main().
> Since outside main() I do not yet know dof, which is determined later
> inside main(), I declare the two-dimensional array diag as follows:
>
> PetscScalar **diag;
>
> Then, inside main(), once dof is determined, I allocate memory for diag as
> follows:
>
> diag = (PetscScalar**)malloc(sizeof(PetscScalar*) * dof);
>
> for (k = 0; k < dof; k++) {
>     diag[k] = (PetscScalar*)malloc(sizeof(PetscScalar) * dof);
> }
> That is, the classical way to allocate memory using the pointer notation.
>
> Then, when it comes to zeroing the two-dimensional array diag[][] inside
> FormIJacobian, I do as before:
>
> ierr = PetscMemzero(diag,dof*dof*sizeof(PetscScalar));
>
> Compilation goes well, but when I launch the executable, after a few timesteps
> I get the following memory corruption message:
>
> TSAdapt 'basic': step 0 accepted t=0 + 1.000e-16
> wlte=8.5e-05 family='arkimex' scheme=0:'1bee' dt=1.100e-16
> TSAdapt 'basic': step 1 accepted t=1e-16 + 1.100e-16
> wlte=4.07e-13 family='arkimex' scheme=0:'3' dt=1.210e-16
> TSAdapt 'basic': step 2 accepted t=2.1e-16 + 1.210e-16
> wlte=1.15e-13 family='arkimex' scheme=0:'3' dt=1.331e-16
> TSAdapt 'basic': step 3 accepted t=3.31e-16 + 1.331e-16
> wlte=1.14e-13 family='arkimex' scheme=0:'3' dt=1.464e-16
> [0]PETSC ERROR: PetscMallocValidate: error detected at TSComputeIJacobian()
> line 719 in src/ts/interface/ts.c
> [0]PETSC ERROR: Memory [id=0(0)] at address 0x243c260 is corrupted
> (probably write past end of array)
> [0]PETSC ERROR: Memory originally allocated in (null)() line 0 in
> src/mat/impls/aij/seq/(null)
> [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [0]PETSC ERROR: Memory corruption!
> [0]PETSC ERROR: !
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: ./diffusion on a icc-nompi-double-blas-debug named
> mazinger.ciemat.es by u5751 Wed May 14 13:23:26 2014
> [0]PETSC ERROR: Libraries linked from
> /home/u5751/petsc-3.4.1/icc-nompi-double-blas-debug/lib
> [0]PETSC ERROR: Configure run at Wed Apr 2 14:01:51 2014
> [0]PETSC ERROR: Configure options --with-mpi=0 --with-cc=icc --with-cxx=icc
> --with-clanguage=cxx --with-debugging=1 --with-scalar-type=real
> --with-precision=double --download-f-blas-lapack
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: PetscMallocValidate() line 149 in src/sys/memory/mtr.c
> [0]PETSC ERROR: TSComputeIJacobian() line 719 in src/ts/interface/ts.c
> [0]PETSC ERROR: SNESTSFormJacobian_ARKIMEX() line 995 in
> src/ts/impls/arkimex/arkimex.c
> [0]PETSC ERROR: SNESTSFormJacobian() line 3397 in src/ts/interface/ts.c
> [0]PETSC ERROR: SNESComputeJacobian() line 2152 in
> src/snes/interface/snes.c
> [0]PETSC ERROR: SNESSolve_NEWTONLS() line 218 in src/snes/impls/ls/ls.c
> [0]PETSC ERROR: SNESSolve() line 3636 in src/snes/interface/snes.c
> [0]PETSC ERROR: TSStep_ARKIMEX() line 765 in src/ts/impls/arkimex/arkimex.c
> [0]PETSC ERROR: TSStep() line 2458 in src/ts/interface/ts.c
> [0]PETSC ERROR: TSSolve() line 2583 in src/ts/interface/ts.c
> [0]PETSC ERROR: main() line 2690 in src/ts/examples/tutorials/diffusion.cxx
> ./compile_diffusion: line 25: 17061 Aborted ./diffusion
> -ts_adapt_monitor -ts_adapt_basic_clip 0.01,1.10 -draw_pause -2
> -ts_arkimex_type 3 -ts_max_snes_failures -1 -snes_type newtonls
> -snes_linesearch_type basic -ksp_type gmres -pc_type ilu
>
> Did I do something wrong? Or is it the pointer notation used to declare
> the two-dimensional array that conflicts with PetscMemzero?
>
> Many thanks in advance for your help.
>
> Christophe
>
> --
> Please consider the environment before printing this email.
>
> ------------------------------
>
> _______________________________________________
> petsc-users mailing list
> petsc-users at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
>
>
> End of petsc-users Digest, Vol 65, Issue 49
> *******************************************
>