Hi Satish,<br><br>Thanks for your suggestions.<br><br>I just tried both methods, but it seems that neither of them worked. After adding -matstash_initial_size and -vecstash_initial_size, the stash uses 0 mallocs in the matrix assembly stage. I also cannot see much difference when I call MatAssemblyBegin/End with MAT_FLUSH_ASSEMBLY after every element stiffness matrix is added. Below are the last few lines of -info output before my code gets stuck.<br>
<br>[5] MatAssemblyBegin_MPIAIJ(): Stash has 4806656 entries, uses 0 mallocs.<br>[4] MatAssemblyBegin_MPIAIJ(): Stash has 5964288 entries, uses 0 mallocs.<br>[6] MatAssemblyBegin_MPIAIJ(): Stash has 5727744 entries, uses 0 mallocs.<br>
[3] MatAssemblyBegin_MPIAIJ(): Stash has 8123904 entries, uses 0 mallocs.<br>[7] MatAssemblyBegin_MPIAIJ(): Stash has 7408128 entries, uses 0 mallocs.<br>[2] MatAssemblyBegin_MPIAIJ(): Stash has 11544576 entries, uses 0 mallocs.<br>
[0] MatStashScatterBegin_Private(): No of messages: 1 <br>[0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 107888648 <br>[0] MatAssemblyBegin_MPIAIJ(): Stash has 13486080 entries, uses 1 mallocs.<br>[1] MatAssemblyBegin_MPIAIJ(): Stash has 16386048 entries, uses 1 mallocs.<br>
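For reference, this is roughly how I applied the two suggestions (a minimal sketch only; A, idx, Ke, nrows, num_local_elements and the element routine are placeholders rather than my actual code, and the stash sizes were passed on the command line):<br>
<br>
/* runtime options: -matstash_initial_size N -vecstash_initial_size N -info */<br>
for (e = 0; e < num_local_elements; e++) {<br>
  ComputeElementStiffness(e, &nrows, idx, Ke);   /* placeholder for my element routine */<br>
  MatSetValues(A, nrows, idx, nrows, idx, Ke, ADD_VALUES);<br>
  MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);       /* flush after every element, as described above */<br>
  MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);<br>
}<br>
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);<br>
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);<br>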
<br><br><br><br><div class="gmail_quote">On Wed, Jan 18, 2012 at 1:00 PM, <span dir="ltr"><<a href="mailto:petsc-users-request@mcs.anl.gov">petsc-users-request@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
Today's Topics:<br>
<br>
1. Re: generate entries on 'wrong' process (Satish Balay)<br>
2. Re: Multiple output using one viewer (Jed Brown)<br>
3. Re: DMGetMatrix segfault (Jed Brown)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 18 Jan 2012 11:07:21 -0600 (CST)<br>
From: Satish Balay <<a href="mailto:balay@mcs.anl.gov">balay@mcs.anl.gov</a>><br>
Subject: Re: [petsc-users] generate entries on 'wrong' process<br>
To: PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
Message-ID: <alpine.LFD.2.02.1201181103120.2351@asterix><br>
Content-Type: TEXT/PLAIN; charset=US-ASCII<br>
<br>
You can do 2 things.<br>
<br>
1. allocate sufficient stash space to avoid mallocs.<br>
You can do this with the following runtime command line options<br>
-vecstash_initial_size<br>
-matstash_initial_size<br>
<br>
2. flush stashed values in stages instead of doing a single<br>
large communication at the end.<br>
<br>
<add values to matrix><br>
MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY)<br>
<add values to matrix><br>
MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY)<br>
...<br>
...<br>
<br>
<add values to matrix><br>
MatAssemblyBegin/End(MAT_FINAL_ASSEMBLY)<br>
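<br>
The same stash sizes can also be set from the code rather than the command line; a minimal sketch (the Mat A, Vec b, and the chosen size are placeholders):<br>
<br>
PetscInt stash_size = 20000000;                        /* placeholder; size it from the "Stash has N entries" -info lines */<br>
MatStashSetInitialSize(A, stash_size, PETSC_DEFAULT);  /* programmatic counterpart of -matstash_initial_size */<br>
VecStashSetInitialSize(b, stash_size, PETSC_DEFAULT);  /* programmatic counterpart of -vecstash_initial_size */<br>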
<br>
Satish<br>
<br>
<br>
On Wed, 18 Jan 2012, Wen Jiang wrote:<br>
<br>
> Hi,<br>
><br>
> I am working on an FEM code with a spline-based element type. In the 3D case, one<br>
> element has 64 nodes and every two neighboring elements share 48 nodes.<br>
> Thus, no matter how I partition the mesh, a very large number of entries<br>
> still has to be written on the 'wrong' processor. The code runs on a cluster,<br>
> and the processes are sending between 550 and 620 million packets per second<br>
> across the network. My code seems IO-bound at the moment and simply gets stuck<br>
> at the matrix assembly stage. The -info output is attached. Do I have other<br>
> options to make my code less IO-intensive?<br>
><br>
> Thanks in advance.<br>
><br>
> [0] VecAssemblyBegin_MPI(): Stash has 210720 entries, uses 12 mallocs.<br>
> [0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.<br>
> [5] MatAssemblyBegin_MPIAIJ(): Stash has 4806656 entries, uses 8 mallocs.<br>
> [6] MatAssemblyBegin_MPIAIJ(): Stash has 5727744 entries, uses 9 mallocs.<br>
> [4] MatAssemblyBegin_MPIAIJ(): Stash has 5964288 entries, uses 9 mallocs.<br>
> [7] MatAssemblyBegin_MPIAIJ(): Stash has 7408128 entries, uses 9 mallocs.<br>
> [3] MatAssemblyBegin_MPIAIJ(): Stash has 8123904 entries, uses 9 mallocs.<br>
> [2] MatAssemblyBegin_MPIAIJ(): Stash has 11544576 entries, uses 10 mallocs.<br>
> [0] MatStashScatterBegin_Private(): No of messages: 1<br>
> [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 107888648<br>
> [0] MatAssemblyBegin_MPIAIJ(): Stash has 13486080 entries, uses 10 mallocs.<br>
> [1] MatAssemblyBegin_MPIAIJ(): Stash has 16386048 entries, uses 10 mallocs.<br>
> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0<br>
> unneeded,2514537 used<br>
> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [0] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode<br>
> routines<br>
> [5] MatAssemblyEnd_SeqAIJ(): Matrix size: 11390 X 11390; storage space: 0<br>
> unneeded,2525390 used<br>
> [5] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [5] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [5] Mat_CheckInode(): Found 11390 nodes out of 11390 rows. Not using Inode<br>
> routines<br>
> [3] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0<br>
> unneeded,2500281 used<br>
> [3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [3] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode<br>
> routines<br>
> [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0<br>
> unneeded,2500281 used<br>
> [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [1] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode<br>
> routines<br>
> [4] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0<br>
> unneeded,2500281 used<br>
> [4] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [4] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [4] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode<br>
> routines<br>
> [2] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0<br>
> unneeded,2525733 used<br>
> [2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [2] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode<br>
> routines<br>
><br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 18 Jan 2012 11:39:49 -0600<br>
From: Jed Brown <<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>><br>
Subject: Re: [petsc-users] Multiple output using one viewer<br>
To: PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
Message-ID:<br>
<CAM9tzSnwWoKa+GJP=<a href="mailto:po17coXG4VYmSry5H0%2BPPXyYTRfogW6Gw@mail.gmail.com">po17coXG4VYmSry5H0+PPXyYTRfogW6Gw@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Thu, Jan 5, 2012 at 18:17, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
<br>
> On Jan 5, 2012, at 9:40 AM, Jed Brown wrote:<br>
><br>
> > On Thu, Jan 5, 2012 at 09:36, Alexander Grayver <<a href="mailto:agrayver@gfz-potsdam.de">agrayver@gfz-potsdam.de</a>><br>
> wrote:<br>
> > Maybe this should be noted in the documentation?<br>
> ><br>
> > Yes, I think the old file should be closed (if it exists), but I'll wait<br>
> for comment.<br>
><br>
> I never thought about the case where someone called<br>
> PetscViewerFileSetName() twice. I'm surprised that it works at all.<br>
><br>
> Yes, it should (IMHO) be changed to close the old file if used twice.<br>
<br>
<br>
It works this way now.<br>
<br>
<a href="http://petsc.cs.iit.edu/petsc/petsc-dev/rev/3a98e6a0994d" target="_blank">http://petsc.cs.iit.edu/petsc/petsc-dev/rev/3a98e6a0994d</a><br>
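<br>
For context, the double-use pattern under discussion looks roughly like this (a minimal sketch; the file names and the Vec are illustrative only):<br>
<br>
#include <petscvec.h><br>
<br>
int main(int argc, char **argv)<br>
{<br>
  Vec         x;<br>
  PetscViewer viewer;<br>
<br>
  PetscInitialize(&argc, &argv, NULL, NULL);<br>
  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);<br>
  VecSet(x, 1.0);<br>
  PetscViewerCreate(PETSC_COMM_WORLD, &viewer);<br>
  PetscViewerSetType(viewer, PETSCVIEWERASCII);<br>
  PetscViewerFileSetName(viewer, "first.txt");<br>
  VecView(x, viewer);<br>
  PetscViewerFileSetName(viewer, "second.txt");  /* second name on the same viewer; per the changeset above, the old file is now closed here */<br>
  VecView(x, viewer);<br>
  PetscViewerDestroy(&viewer);<br>
  VecDestroy(&x);<br>
  PetscFinalize();<br>
  return 0;<br>
}<br>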
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Wed, 18 Jan 2012 11:40:28 -0600<br>
From: Jed Brown <<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>><br>
Subject: Re: [petsc-users] DMGetMatrix segfault<br>
To: PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
Message-ID:<br>
<<a href="mailto:CAM9tzSnQgb0_ypNTZzTsnXiUseA1k98h3hUPPTap1YNNqNUHXw@mail.gmail.com">CAM9tzSnQgb0_ypNTZzTsnXiUseA1k98h3hUPPTap1YNNqNUHXw@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Tue, Jan 17, 2012 at 06:32, Jed Brown <<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>> wrote:<br>
<br>
> I'll update petsc-dev to call DMSetUp() automatically when it is needed.<br>
><br>
<br>
<a href="http://petsc.cs.iit.edu/petsc/petsc-dev/rev/56deb0e7db8b" target="_blank">http://petsc.cs.iit.edu/petsc/petsc-dev/rev/56deb0e7db8b</a><br>
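<br>
For context, a minimal sketch of the calling sequence in question, written against a more recent PETSc (DMGetMatrix has since been renamed DMCreateMatrix, and some DMDA arguments differ from the 2012 API); the grid size is arbitrary:<br>
<br>
#include <petscdmda.h><br>
<br>
int main(int argc, char **argv)<br>
{<br>
  DM  da;<br>
  Mat A;<br>
<br>
  PetscInitialize(&argc, &argv, NULL, NULL);<br>
  DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 64, 1, 1, NULL, &da);<br>
  DMSetFromOptions(da);<br>
  DMSetUp(da);             /* the step that was easy to forget; the changeset above makes it happen on demand */<br>
  DMCreateMatrix(da, &A);  /* called DMGetMatrix at the time of this thread */<br>
  MatDestroy(&A);<br>
  DMDestroy(&da);<br>
  PetscFinalize();<br>
  return 0;<br>
}<br>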
<br>
------------------------------<br>
<br>
<br>
<br>
End of petsc-users Digest, Vol 37, Issue 40<br>
*******************************************<br>
</blockquote></div><br>