Ok, I just found the answer regarding the memory corruption with the two-dimensional array and PetscMemzero.

Instead of

  ierr = PetscMemzero(diagbl,dof*dof*sizeof(PetscScalar));CHKERRQ(ierr);
one must do the following:

  for (k = 0; k < dof; k++) {
    ierr = PetscMemzero(diagbl[k],dof*sizeof(PetscScalar));CHKERRQ(ierr);
  }
Indeed, due to the ** notation, the two-dimensional array is made of dof rows of dof columns, each row allocated separately, so the rows are not contiguous in memory. You cannot zero dof*dof values starting from one row; you must iterate through the rows and zero dof values each time.
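
Note that if one prefers a single PetscMemzero() call, the dof x dof array can
instead be allocated as one contiguous block, with the row pointers pointing
into it. A minimal sketch of that alternative (the names diagbl and data are
just for illustration):

  PetscScalar **diagbl;
  PetscScalar  *data;
  PetscInt      k;

  /* one contiguous block of dof*dof entries, plus an array of row pointers */
  ierr = PetscMalloc(dof*dof*sizeof(PetscScalar),&data);CHKERRQ(ierr);
  ierr = PetscMalloc(dof*sizeof(PetscScalar*),&diagbl);CHKERRQ(ierr);
  for (k = 0; k < dof; k++) diagbl[k] = data + k*dof; /* row k at offset k*dof */

  /* valid, because all dof*dof entries are now contiguous: */
  ierr = PetscMemzero(data,dof*dof*sizeof(PetscScalar));CHKERRQ(ierr);

Indexing as diagbl[i][j] works as before; only the underlying layout changes.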

Now it works fine.

Christophe

--
Please consider the environment before printing this email.


On Wed, May 14, 2014 at 1:29 PM, <petsc-users-request@mcs.anl.gov> wrote:
Send petsc-users mailing list submissions to
        petsc-users@mcs.anl.gov

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
or, via email, send a message with subject or body 'help' to
        petsc-users-request@mcs.anl.gov

You can reach the person managing the list at
        petsc-users-owner@mcs.anl.gov

When replying, please edit your Subject line so it is more specific
than "Re: Contents of petsc-users digest..."


Today's Topics:

   1. Re: Problem with MatZeroRowsColumnsIS() (Barry Smith)
   2. Re: Problem with MatZeroRowsColumnsIS() (Jed Brown)
   3. Re: PetscLayoutCreate for Fortran (Hossein Talebi)
   4. Re: Memory usage during matrix factorization
      (De Groof, Vincent Frans Maria)
   5. petsc 3.4, mat_view and prefix problem (Klaij, Christiaan)
   6. Memory corruption with two-dimensional array and
      PetscMemzero (Christophe Ortiz)


----------------------------------------------------------------------

Message: 1
Date: Tue, 13 May 2014 21:00:50 -0500
From: Barry Smith <bsmith@mcs.anl.gov>
To: "Püsök, Adina-Erika" <puesoek@uni-mainz.de>
Cc: "<petsc-users@mcs.anl.gov>" <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Problem with MatZeroRowsColumnsIS()
Message-ID: <1C839F22-8ADF-4904-B136-D1461AE38187@mcs.anl.gov>
Content-Type: text/plain; charset="iso-8859-1"


   Given that Jed wrote MatZeroRowsColumns_MPIAIJ(), it is unlikely to be wrong.

You wrote

Calculating the norm2 of the residuals defined above in each case gives:
MatZeroRowsIS()         1cpu: norm(res,2) = 0
MatZeroRowsIS()         4cpu: norm(res,2) = 0
MatZeroRowsColumnsIS()  1cpu: norm(res,2) = 1.6880e-10
MatZeroRowsColumnsIS()  4cpu: norm(res,2) = 7.3786e+06

Why do you conclude this is wrong? MatZeroRowsColumnsIS() is supposed to change the right-hand side differently from MatZeroRowsIS().

Explanation. For simplicity, reorder the matrix rows/columns so that the zeroed ones come last, and assume the matrix is symmetric. Then you have

   ( A  B ) (x_A) = (b_A)
   ( B  D ) (x_B)   (b_B)

With MatZeroRows the new system is

   ( A  B ) (x_A) = (b_A)
   ( 0  I ) (x_B)   (x_B)

It has the same solution as the original problem with the given x_B.

With MatZeroRowsColumns the new system is

   ( A  0 ) (x_A) = (b_A) - B*x_B
   ( 0  I ) (x_B)   (x_B)

Note that the right-hand side needs to be changed so that the new problem has the same solution.
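
In code, that right-hand-side update amounts to the following sketch (A, b and
the Dirichlet values x are as above, with x zero except at the Dirichlet dofs;
the names rows/vals/nd for the zeroed rows and their prescribed values are
just for illustration):

   Vec      tmp;
   PetscInt i;

   ierr = VecDuplicate(b,&tmp);CHKERRQ(ierr);
   ierr = MatMult(A,x,tmp);CHKERRQ(ierr);     /* kept rows of tmp hold B*x_B   */
   ierr = VecAXPY(b,-1.0,tmp);CHKERRQ(ierr);  /* b_A <- b_A - B*x_B            */
   for (i = 0; i < nd; i++) {                 /* b_B <- x_B on the zeroed rows */
     ierr = VecSetValue(b,rows[i],vals[i],INSERT_VALUES);CHKERRQ(ierr);
   }
   ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
   ierr = VecAssemblyEnd(b);CHKERRQ(ierr);
   ierr = VecDestroy(&tmp);CHKERRQ(ierr);

This covers only the right-hand side; MatZeroRowsColumns() additionally zeroes
the matrix rows and columns and places the given value on the diagonal.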

  Barry


On May 9, 2014, at 9:50 AM, Püsök, Adina-Erika <puesoek@uni-mainz.de> wrote:

> Yes, I tested the implementation with both MatZeroRowsIS() and MatZeroRowsColumnsIS(). But first, I will be more explicit about the problem I set out to solve:
>
> We have a Dirichlet block of size (L,W,H), centered at (xc,yc,zc), which is much smaller than the model domain, and we set Vx = Vpush, Vy = 0 within the block (Vz is left free for easier convergence).
> As I said before, since the code does not have a monolithic matrix, but 4 submatrices (VV VP; PV PP), and the rhs has 2 sub-vectors rhs = (f; g), my approach is to modify only (VV, VP, f) for the Dirichlet BC.
>
> The way I tested the implementation:
> 1) Output (VV, VP, f, Dirichlet dofs) - unmodified (no Dirichlet BC).
> 2) Output (VV, VP, f, Dirichlet dofs) - a) modified with MatZeroRowsIS(),
>                                         b) modified with MatZeroRowsColumnsIS()  -> S_PETSc
> Again, the only difference between a) and b) is:
>
>    // ierr = MatZeroRowsColumnsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
>    // ierr = MatZeroRowsColumnsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
>
>    ierr = MatZeroRowsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
>    ierr = MatZeroRowsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
>
> 3) Read them in Matlab and perform the exact same operations on the unmodified matrices and f vector.  -> S_Matlab
> 4) Compare S_PETSc with S_Matlab. If the implementation is correct, they should be equal (VV, VP, f).
> 5) Check on 1 cpu and on 4 cpus.
>
> Now to answer your questions:
>
> a,b,d) Yes, the matrix modification is done correctly (check the spy diagrams below) in all cases: MatZeroRowsIS() and MatZeroRowsColumnsIS(), on 1 and 4 cpus.
>
> I should have said that in the piece of code above:
>    v_vv = 1.0;
>    v_vp = 0.0;
> The vector x_push is a duplicate of rhs, with zero elements except the values for the Dirichlet dofs.
>
> c) The rhs is a different matter. With MatZeroRows() there is no problem. The rhs is equivalent to the one in Matlab, sequential and parallel.
> However, with MatZeroRowsColumns(), the residual contains nonzero elements, and in parallel there are even more of them (1 cpu: 63, 4 cpus: 554). But if you look carefully, the values of the nonzero residuals are very small, < +/- 1e-10.
> So, I applied a tolerance filter:
>
>    tol = 1e-10;
>    res = f_petsc - f_mod_matlab;
>    for i = 1:length(res)
>      if abs(res(i)) > 0 & abs(res(i)) < tol
>        res(i) = 0;
>      end
>    end
>
> and then f_petsc and f_mod_matlab are equivalent on 1 and 4 cpus (figure 5). So it seems that MatZeroRowsColumnsIS() might give some nonzero residuals.
>
> Calculating the norm2 of the residuals defined above in each case gives:
> MatZeroRowsIS()         1cpu: norm(res,2) = 0
> MatZeroRowsIS()         4cpu: norm(res,2) = 0
> MatZeroRowsColumnsIS()  1cpu: norm(res,2) = 1.6880e-10
> MatZeroRowsColumnsIS()  4cpu: norm(res,2) = 7.3786e+06
>
> Since this is purely a problem of matrix and vector assembly/manipulation, I think the nonzero residuals of the rhs with MatZeroRowsColumnsIS() cause the parallel artefacts that I showed last time.
> If you need the raw data and the Matlab scripts that I used for testing, please let me know.
>
> Thanks,
> Adina
>
> When performing the manual operations on the unmodified matrices and rhs vector in Matlab, I took into account:
> - matlab indexing = petsc indexing + 1;
> - the vectors written to file for Matlab (PETSC_VIEWER_BINARY_MATLAB) have the natural ordering, rather than the petsc ordering. On 1 cpu they are equivalent, but on 4 cpus the Dirichlet BC indices had to be converted to natural indices in order to perform the correct operations on the rhs.
>
> <spy_zerorows_1cpu.png>
> <spy_zerorows_4cpu.png>
> <spy_zerorowscol_1cpu.png>
> <spy_zerorowscol_4cpu.png>
> <residual_tol.png>
>
> On May 6, 2014, at 4:22 PM, Matthew Knepley wrote:
>
>> On Tue, May 6, 2014 at 7:23 AM, Püsök, Adina-Erika <puesoek@uni-mainz.de> wrote:
>> Hello!
>>
>> I was trying to implement some internal Dirichlet boundary conditions into an aij matrix of the form: A = ( VV VP; PV PP ). The idea was to create an internal block (let's say Dirichlet block) that moves with constant velocity within the domain (i.e. check all the dofs within the block and set the values according to the desired motion).
>>
>> Ideally, this means zeroing the rows and columns in VV, VP, PV corresponding to the Dirichlet dofs and modifying the corresponding rhs values. However, since we have submatrices and not a monolithic matrix A, we can choose to modify only the VV and PV matrices.
>> The global indices of the velocity points within the Dirichlet block are contained in the arrays rowid_array.
>>
>> What I want to point out is that the function MatZeroRowsColumnsIS() seems to create parallel artefacts compared to MatZeroRowsIS() when run on more than 1 processor. Moreover, the results on 1 cpu are identical.
>> See below the results of the test (the Dirichlet block is outlined in white) and the piece of the code involved, where the 1) - 2) parts are the only difference.
>>
>> I am assuming that you are showing the result of solving the equations. It would be more useful, and presumably just as easy,
>> to say:
>>
>> a) Are the correct rows zeroed out?
>>
>> b) Is the diagonal element correct?
>>
>> c) Is the rhs value correct?
>>
>> d) Are the columns zeroed correctly?
>>
>> If we know where the problem is, it's easier to fix. For example, if the rhs values are
>> correct and the rows are zeroed, then something is wrong with the solution procedure.
>> Since ZeroRows() works and ZeroRowsColumns() does not, this is a distinct possibility.
>>
>>    Thanks,
>>
>>       Matt
>>
>> Thanks,
>> Adina Pusok
>>
>> // Create an IS required by MatZeroRows()
>> ierr = ISCreateGeneral(PETSC_COMM_WORLD,numRowsx,rowidx_array,PETSC_COPY_VALUES,&isx); CHKERRQ(ierr);
>> ierr = ISCreateGeneral(PETSC_COMM_WORLD,numRowsy,rowidy_array,PETSC_COPY_VALUES,&isy); CHKERRQ(ierr);
>> ierr = ISCreateGeneral(PETSC_COMM_WORLD,numRowsz,rowidz_array,PETSC_COPY_VALUES,&isz); CHKERRQ(ierr);
>>
>> 1) /*
>> ierr = MatZeroRowsColumnsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
>> ierr = MatZeroRowsColumnsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
>> ierr = MatZeroRowsColumnsIS(VV_MAT,isz,v_vv,x_push,rhs); CHKERRQ(ierr); */
>>
>> 2) ierr = MatZeroRowsIS(VV_MAT,isx,v_vv,x_push,rhs); CHKERRQ(ierr);
>> ierr = MatZeroRowsIS(VV_MAT,isy,v_vv,x_push,rhs); CHKERRQ(ierr);
>> ierr = MatZeroRowsIS(VV_MAT,isz,v_vv,x_push,rhs); CHKERRQ(ierr);
>>
>> ierr = MatZeroRowsIS(VP_MAT,isx,v_vp,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>> ierr = MatZeroRowsIS(VP_MAT,isy,v_vp,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>> ierr = MatZeroRowsIS(VP_MAT,isz,v_vp,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>>
>> ierr = ISDestroy(&isx); CHKERRQ(ierr);
>> ierr = ISDestroy(&isy); CHKERRQ(ierr);
>> ierr = ISDestroy(&isz); CHKERRQ(ierr);
>>
>>
>> Results (velocity) with MatZeroRowsColumnsIS():
>> 1cpu <r01_1cpu_rows_columns.png>  4cpu <r01_rows_columns.png>
>>
>> Results (velocity) with MatZeroRowsIS():
>> 1cpu <r01_1cpu_rows.png>  4cpu <r01_rows.png>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>


------------------------------

Message: 2
Date: Tue, 13 May 2014 22:28:19 -0600
From: Jed Brown <jed@jedbrown.org>
To: Barry Smith <bsmith@mcs.anl.gov>, "Püsök, Adina-Erika" <puesoek@uni-mainz.de>
Cc: "<petsc-users@mcs.anl.gov>" <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Problem with MatZeroRowsColumnsIS()
Message-ID: <87k39p7zmk.fsf@jedbrown.org>
Content-Type: text/plain; charset="us-ascii"

Barry Smith <bsmith@mcs.anl.gov> writes:

>   Given that Jed wrote MatZeroRowsColumns_MPIAIJ() it is unlikely to be wrong.

Haha. Though MatZeroRowsColumns_MPIAIJ uses PetscSF, the implementation
was written by Matt. I think it's correct, however, at least as of
Matt's January changes in 'master'.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140513/4f5868b4/attachment-0001.pgp>

------------------------------

Message: 3
Date: Wed, 14 May 2014 07:43:21 +0200
From: Hossein Talebi <talebi.hossein@gmail.com>
To: Barry Smith <bsmith@mcs.anl.gov>
Cc: petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] PetscLayoutCreate for Fortran
Message-ID: <CAEjaKwBc7X_jYw1K+wV0wMXJOdWAiwo1W=-4b3hoO7_b3DwESQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Thank you.

Well, only the first part. I move around the elements and identify the halo
nodes etc. However, I do not renumber the vertices to be contiguous on the
CPUs as you described.

BUT, I just noticed: I partition the domain based on the computational
weight of the elements, which is different from that of the Mat-Vec calculation.
This means my partitioning may not be efficient for the solution process.

I think I will then go with the copy-in, solve, copy-out option.


On Wed, May 14, 2014 at 3:06 AM, Barry Smith <bsmith@mcs.anl.gov> wrote:

>
> On May 13, 2014, at 11:42 AM, Hossein Talebi <talebi.hossein@gmail.com> wrote:
>
> >
> > I have already decomposed the Finite Element system using Metis. I just
> > need to have the global rows exactly how I define them, and I would like
> > to have the answer in the same layout so I don't have to move things
> > around the processes again.
>
>    Metis tells you a good partitioning; IT DOES NOT MOVE the elements to
> form a good partitioning. Do you move the elements around based on what
> Metis told you, and similarly do you renumber the elements (and vertices) to
> be contiguously numbered on each process, with the first process getting the
> first set of numbers, the second process the second set of numbers, etc.?
>
>    If you do all that, then when you create the Vec and Mat you should simply
> set the local size (based on the number of local vertices on each process).
> You never need to use PetscLayoutCreate, and in fact if your code was in C
> you would never use PetscLayoutCreate().
>
>    If you do not do all that, then you need to do that first before you
> start calling PETSc.
>
>   Barry
>
> >
> > No, I don't need it for something else.
> >
> > Cheers
> > Hossein
> >
> >
> >
> > On Tue, May 13, 2014 at 6:36 PM, Matthew Knepley <knepley@gmail.com> wrote:
> > On Tue, May 13, 2014 at 11:07 AM, Hossein Talebi <talebi.hossein@gmail.com> wrote:
> > Hi All,
> >
> >
> > I am using PETSc from Fortran. I would like to define my own layout, i.e.
> > which row belongs to which CPU, since I have already done the domain
> > decomposition. It appears that "PetscLayoutCreate" and the other routines
> > do this. But the manual says it is not provided in Fortran.
> >
> > Is there any way that I can do this using Fortran? Does anyone have an example?
> >
> > You can do this for Vec and Mat directly. Do you want it for something
> > else?
> >
> >   Thanks,
> >
> >      Matt
> >
> > Cheers
> > Hossein
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > -- Norbert Wiener
> >
> >
> >
> > --
> > www.permix.org
>
>


--
www.permix.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140514/fcae21ab/attachment-0001.html>

------------------------------

Message: 4
Date: Wed, 14 May 2014 08:29:47 +0000
From: "De Groof, Vincent Frans Maria" <Vincent.De-Groof@uibk.ac.at>
To: Jed Brown <jed@jedbrown.org>, Barry Smith <bsmith@mcs.anl.gov>
Cc: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Memory usage during matrix factorization
Message-ID: <17A78B9D13564547AC894B88C159674720382707@XMBX4.uibk.ac.at>
Content-Type: text/plain; charset="us-ascii"


Thanks. I made a new function based on PetscGetCurrentUsage which does what I want. It seems I am lucky, as the numbers returned by the OS appear to be reasonable.


thanks again,
Vincent
________________________________________
From: Jed Brown [jed@jedbrown.org]
Sent: Wednesday, May 14, 2014 00:59
To: Barry Smith; De Groof, Vincent Frans Maria
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Memory usage during matrix factorization

Barry Smith <bsmith@mcs.anl.gov> writes:
>   Here is the code: it is only as reliable as the OS is at reporting the values.

HPC vendors have a habit of implementing these functions to return
nonsense. Sometimes they provide non-standard functions to return
useful information.


------------------------------

Message: 5
Date: Wed, 14 May 2014 09:02:39 +0000
From: "Klaij, Christiaan" <C.Klaij@marin.nl>
To: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
Subject: [petsc-users] petsc 3.4, mat_view and prefix problem
Message-ID: <dd17e553056f42e8aa5795d825e3b4e0@MAR190N1.marin.local>
Content-Type: text/plain; charset="utf-8"

I'm having problems using mat_view in petsc 3.4.3 in combination
with a prefix. For example, in ../snes/examples/tutorials/ex70:

mpiexec -n 2 ./ex70 -nx 16 -ny 24 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -user_ksp -a00_mat_view

does not print the matrix a00 to screen. This used to work in 3.3
versions, before the single consistent -xxx_view scheme.

Similarly, if I add this at line 105 of
../ksp/ksp/examples/tutorials/ex1f.F:

  call MatSetOptionsPrefix(A,"a_",ierr)

then running with -mat_view still prints the matrix to screen, but
running with -a_mat_view doesn't. I expected the opposite.

The problem only occurs with mat, not with ksp. For example, if I
add this at line 184 of ../ksp/ksp/examples/tutorials/ex1f.F:

  call KSPSetOptionsPrefix(ksp,"a_",ierr)

then running with -a_ksp_monitor does print the residuals to
screen and -ksp_monitor doesn't, as expected.


dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:C.Klaij@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl


------------------------------

Message: 6
Date: Wed, 14 May 2014 13:29:55 +0200
From: Christophe Ortiz <christophe.ortiz@ciemat.es>
To: petsc-users@mcs.anl.gov
Subject: [petsc-users] Memory corruption with two-dimensional array and PetscMemzero
Message-ID: <CANBrw+7qxvvZbX2oLmniiOo1zb3-waGW+04Q3wepONkz-4DbVg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi all,

I am experiencing some memory corruption problems with PetscMemzero().

I set the values of the Jacobian by blocks using MatSetValuesBlocked(). To
do so, I use some temporary two-dimensional arrays [dof][dof] that I must
reset at each loop iteration.

Inside FormIJacobian, for instance, I declare the following two-dimensional
array:

  PetscScalar diag[dof][dof];

and then, to zero the array diag[][], I do

  ierr = PetscMemzero(diag,dof*dof*sizeof(PetscScalar));

So far no problem. It works fine.

Now, what I want is to have diag[][] as a global array so all functions can
have access to it. Therefore, I declare it outside main().
Since outside main() I do not yet know dof, which is determined later
inside main(), I declare the two-dimensional array diag as follows:

  PetscScalar **diag;

Then, inside main(), once dof is determined, I allocate memory for diag as
follows:

  diag = (PetscScalar**)malloc(sizeof(PetscScalar*) * dof);
  for (k = 0; k < dof; k++){
    diag[k] = (PetscScalar*)malloc(sizeof(PetscScalar) * dof);
  }

That is, the classical way to allocate memory using the pointer notation.

Then, when it comes to zeroing the two-dimensional array diag[][] inside
FormIJacobian, I do as before:

  ierr = PetscMemzero(diag,dof*dof*sizeof(PetscScalar));

Compilation goes well, but when I launch the executable, after a few timesteps
I get the following memory corruption message:

    TSAdapt 'basic': step   0 accepted t=0        + 1.000e-16 wlte=8.5e-05  family='arkimex' scheme=0:'1bee' dt=1.100e-16
    TSAdapt 'basic': step   1 accepted t=1e-16    + 1.100e-16 wlte=4.07e-13 family='arkimex' scheme=0:'3'    dt=1.210e-16
    TSAdapt 'basic': step   2 accepted t=2.1e-16  + 1.210e-16 wlte=1.15e-13 family='arkimex' scheme=0:'3'    dt=1.331e-16
    TSAdapt 'basic': step   3 accepted t=3.31e-16 + 1.331e-16 wlte=1.14e-13 family='arkimex' scheme=0:'3'    dt=1.464e-16
[0]PETSC ERROR: PetscMallocValidate: error detected at TSComputeIJacobian() line 719 in src/ts/interface/ts.c
[0]PETSC ERROR: Memory [id=0(0)] at address 0x243c260 is corrupted (probably write past end of array)
[0]PETSC ERROR: Memory originally allocated in (null)() line 0 in src/mat/impls/aij/seq/(null)
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Memory corruption!
[0]PETSC ERROR: !
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./diffusion on a icc-nompi-double-blas-debug named mazinger.ciemat.es by u5751 Wed May 14 13:23:26 2014
[0]PETSC ERROR: Libraries linked from /home/u5751/petsc-3.4.1/icc-nompi-double-blas-debug/lib
[0]PETSC ERROR: Configure run at Wed Apr  2 14:01:51 2014
[0]PETSC ERROR: Configure options --with-mpi=0 --with-cc=icc --with-cxx=icc --with-clanguage=cxx --with-debugging=1 --with-scalar-type=real --with-precision=double --download-f-blas-lapack
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscMallocValidate() line 149 in src/sys/memory/mtr.c
[0]PETSC ERROR: TSComputeIJacobian() line 719 in src/ts/interface/ts.c
[0]PETSC ERROR: SNESTSFormJacobian_ARKIMEX() line 995 in src/ts/impls/arkimex/arkimex.c
[0]PETSC ERROR: SNESTSFormJacobian() line 3397 in src/ts/interface/ts.c
[0]PETSC ERROR: SNESComputeJacobian() line 2152 in src/snes/interface/snes.c
[0]PETSC ERROR: SNESSolve_NEWTONLS() line 218 in src/snes/impls/ls/ls.c
[0]PETSC ERROR: SNESSolve() line 3636 in src/snes/interface/snes.c
[0]PETSC ERROR: TSStep_ARKIMEX() line 765 in src/ts/impls/arkimex/arkimex.c
[0]PETSC ERROR: TSStep() line 2458 in src/ts/interface/ts.c
[0]PETSC ERROR: TSSolve() line 2583 in src/ts/interface/ts.c
[0]PETSC ERROR: main() line 2690 in src/ts/examples/tutorials/diffusion.cxx
./compile_diffusion: line 25: 17061 Aborted ./diffusion -ts_adapt_monitor -ts_adapt_basic_clip 0.01,1.10 -draw_pause -2 -ts_arkimex_type 3 -ts_max_snes_failures -1 -snes_type newtonls -snes_linesearch_type basic -ksp_type gmres -pc_type ilu

Did I do something wrong? Or is it the pointer notation used to declare
the two-dimensional array that conflicts with PetscMemzero?

Many thanks in advance for your help.

Christophe

--
Please consider the environment before printing this email.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140514/2f259965/attachment.html>

------------------------------

_______________________________________________
petsc-users mailing list
petsc-users@mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/petsc-users


End of petsc-users Digest, Vol 65, Issue 49
*******************************************