[petsc-users] generate entries on 'wrong' process (Satish Balay)
Barry Smith
bsmith at mcs.anl.gov
Wed Jan 18 15:08:10 CST 2012
On Jan 18, 2012, at 2:55 PM, Wen Jiang wrote:
> Hi Satish,
>
> Thanks for your suggestion.
>
> I just tried both of these methods, but neither seems to work. After setting -matstash_initial_size and -vecstash_initial_size, the stash uses 0 mallocs in the matrix assembly stage. I also cannot see much difference when I call MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY) after each element stiffness matrix is added. Below are the last few lines of -info output from where my code gets stuck.
>
> [5] MatAssemblyBegin_MPIAIJ(): Stash has 4806656 entries, uses 0 mallocs.
> [4] MatAssemblyBegin_MPIAIJ(): Stash has 5964288 entries, uses 0 mallocs.
> [6] MatAssemblyBegin_MPIAIJ(): Stash has 5727744 entries, uses 0 mallocs.
> [3] MatAssemblyBegin_MPIAIJ(): Stash has 8123904 entries, uses 0 mallocs.
> [7] MatAssemblyBegin_MPIAIJ(): Stash has 7408128 entries, uses 0 mallocs.
> [2] MatAssemblyBegin_MPIAIJ(): Stash has 11544576 entries, uses 0 mallocs.
> [0] MatStashScatterBegin_Private(): No of messages: 1
> [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 107888648
> [0] MatAssemblyBegin_MPIAIJ(): Stash has 13486080 entries, uses 1 mallocs.
> [1] MatAssemblyBegin_MPIAIJ(): Stash has 16386048 entries, uses 1 mallocs.
I am now 99.9% sure your problem is incorrect preallocation. See my previous email for how to track down the problem.
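
For concreteness, a minimal sketch of full MPIAIJ preallocation (the per-row
counts d_nnz/o_nnz have to be computed from your element connectivity, and the
function and argument names here are only illustrative):

  #include <petscmat.h>

  /* Create an N x N MPIAIJ matrix with nlocal rows on this process and exact
     per-row preallocation.  d_nnz[i] counts the nonzeros of local row i whose
     columns lie in this process's diagonal block; o_nnz[i] counts the rest. */
  PetscErrorCode CreatePreallocatedMat(MPI_Comm comm, PetscInt nlocal, PetscInt N,
                                       const PetscInt d_nnz[], const PetscInt o_nnz[],
                                       Mat *A)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatCreate(comm, A);CHKERRQ(ierr);
    ierr = MatSetSizes(*A, nlocal, nlocal, N, N);CHKERRQ(ierr);
    ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
    /* Turn any preallocation miss into an immediate error instead of a
       silent, very slow malloc inside MatSetValues(). */
    ierr = MatSetOption(*A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

With MAT_NEW_NONZERO_ALLOCATION_ERR set, an under-preallocated entry aborts
with an error that names the offending row and column, which is usually the
fastest way to locate the problem.
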
Barry
>
>
>
>
> On Wed, Jan 18, 2012 at 1:00 PM, <petsc-users-request at mcs.anl.gov> wrote:
>
> Today's Topics:
>
> 1. Re: generate entries on 'wrong' process (Satish Balay)
> 2. Re: Multiple output using one viewer (Jed Brown)
> 3. Re: DMGetMatrix segfault (Jed Brown)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 18 Jan 2012 11:07:21 -0600 (CST)
> From: Satish Balay <balay at mcs.anl.gov>
> Subject: Re: [petsc-users] generate entries on 'wrong' process
> To: PETSc users list <petsc-users at mcs.anl.gov>
> Message-ID: <alpine.LFD.2.02.1201181103120.2351 at asterix>
> Content-Type: TEXT/PLAIN; charset=US-ASCII
>
> You can do 2 things.
>
> 1. allocate sufficient stash space to avoid mallocs.
> You can do this with the following runtime command line options
> -vecstash_initial_size
> -matstash_initial_size
>
> 2. flush stashed values in stages instead of doing a single
> large communication at the end.
>
> <add values to matrix>
> MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY)
> <add values to matrix>
> MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY)
> ...
> ...
>
> <add values to matrix>
> MatAssemblyBegin/End(MAT_FINAL_ASSEMBLY)
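> 
> In code, a minimal sketch of this staged flushing (nelem, nen, idx, and Ke
> below are placeholders for your own element data, and the flush interval is
> something to tune):
> 
>   #include <petscmat.h>
> 
>   /* Add element stiffness blocks and flush the stash every flush_interval
>      elements, so off-process entries are communicated in stages instead of
>      in one huge message at MAT_FINAL_ASSEMBLY. */
>   PetscErrorCode AssembleInStages(Mat A, PetscInt nelem, PetscInt nen,
>                                   const PetscInt **idx, const PetscScalar **Ke,
>                                   PetscInt flush_interval)
>   {
>     PetscErrorCode ierr;
>     PetscInt       e;
> 
>     PetscFunctionBegin;
>     for (e = 0; e < nelem; e++) {
>       /* idx[e]: the nen global indices of element e; Ke[e]: its dense
>          nen x nen stiffness block in row-major order */
>       ierr = MatSetValues(A, nen, idx[e], nen, idx[e], Ke[e], ADD_VALUES);CHKERRQ(ierr);
>       if ((e + 1) % flush_interval == 0) {
>         ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
>         ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
>       }
>     }
>     ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>     ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }
> 
> For option 1, you would pass something like
>   -matstash_initial_size 20000000 -vecstash_initial_size 300000
> at runtime; the numbers are only illustrative and should be sized from the
> stash entry counts that -info reports.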
>
> Satish
>
>
> On Wed, 18 Jan 2012, Wen Jiang wrote:
>
> > Hi,
> >
> > I am working on an FEM code with a spline-based element type. In the 3D
> > case, one element has 64 nodes and every two neighboring elements share 48
> > nodes. Thus, regardless of how I partition the mesh, a very large number of
> > entries still has to be written on the 'wrong' processor. My code runs on a
> > cluster, and the processes are sending between 550 and 620 million packets
> > per second across the network. The code seems I/O-bound at the moment and
> > gets stuck at the matrix assembly stage. The -info output is attached. Do I
> > have other options to make my code less I/O-intensive?
> >
> > Thanks in advance.
> >
> > [0] VecAssemblyBegin_MPI(): Stash has 210720 entries, uses 12 mallocs.
> > [0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
> > [5] MatAssemblyBegin_MPIAIJ(): Stash has 4806656 entries, uses 8 mallocs.
> > [6] MatAssemblyBegin_MPIAIJ(): Stash has 5727744 entries, uses 9 mallocs.
> > [4] MatAssemblyBegin_MPIAIJ(): Stash has 5964288 entries, uses 9 mallocs.
> > [7] MatAssemblyBegin_MPIAIJ(): Stash has 7408128 entries, uses 9 mallocs.
> > [3] MatAssemblyBegin_MPIAIJ(): Stash has 8123904 entries, uses 9 mallocs.
> > [2] MatAssemblyBegin_MPIAIJ(): Stash has 11544576 entries, uses 10 mallocs.
> > [0] MatStashScatterBegin_Private(): No of messages: 1
> > [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 107888648
> > [0] MatAssemblyBegin_MPIAIJ(): Stash has 13486080 entries, uses 10 mallocs.
> > [1] MatAssemblyBegin_MPIAIJ(): Stash has 16386048 entries, uses 10 mallocs.
> > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0
> > unneeded,2514537 used
> > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [0] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode
> > routines
> > [5] MatAssemblyEnd_SeqAIJ(): Matrix size: 11390 X 11390; storage space: 0
> > unneeded,2525390 used
> > [5] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [5] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [5] Mat_CheckInode(): Found 11390 nodes out of 11390 rows. Not using Inode
> > routines
> > [3] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0
> > unneeded,2500281 used
> > [3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [3] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode
> > routines
> > [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0
> > unneeded,2500281 used
> > [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [1] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode
> > routines
> > [4] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0
> > unneeded,2500281 used
> > [4] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [4] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [4] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode
> > routines
> > [2] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0
> > unneeded,2525733 used
> > [2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> > [2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294
> > [2] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode
> > routines
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 18 Jan 2012 11:39:49 -0600
> From: Jed Brown <jedbrown at mcs.anl.gov>
> Subject: Re: [petsc-users] Multiple output using one viewer
> To: PETSc users list <petsc-users at mcs.anl.gov>
> Message-ID:
> <CAM9tzSnwWoKa+GJP=po17coXG4VYmSry5H0+PPXyYTRfogW6Gw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Jan 5, 2012 at 18:17, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Jan 5, 2012, at 9:40 AM, Jed Brown wrote:
> >
> > > On Thu, Jan 5, 2012 at 09:36, Alexander Grayver <agrayver at gfz-potsdam.de>
> > wrote:
> > > Maybe this should be noted in the documentation?
> > >
> > > Yes, I think the old file should be closed (if it exists), but I'll wait
> > for comment.
> >
> > I never thought about the case where someone called
> > PetscViewerFileSetName() twice. I'm surprised that it works at all.
> >
> > Yes, it should (IMHO) be changed to close the old file if used twice.
>
>
> It works this way now.
>
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/3a98e6a0994d
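> 
> A small sketch of the usage being discussed (reusing one ASCII viewer for two
> output files; the vectors and file names are arbitrary examples):
> 
>   #include <petscvec.h>
>   #include <petscviewer.h>
> 
>   /* Write two vectors to two different files through a single viewer; with
>      the change above, the second PetscViewerFileSetName() closes the first
>      file before opening the second. */
>   PetscErrorCode WriteTwoFiles(Vec x, Vec y)
>   {
>     PetscViewer    viewer;
>     PetscErrorCode ierr;
> 
>     PetscFunctionBegin;
>     ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer);CHKERRQ(ierr);
>     ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII);CHKERRQ(ierr);
>     ierr = PetscViewerFileSetName(viewer, "first.txt");CHKERRQ(ierr);
>     ierr = VecView(x, viewer);CHKERRQ(ierr);
>     ierr = PetscViewerFileSetName(viewer, "second.txt");CHKERRQ(ierr);
>     ierr = VecView(y, viewer);CHKERRQ(ierr);
>     ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }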
>
> ------------------------------
>
> Message: 3
> Date: Wed, 18 Jan 2012 11:40:28 -0600
> From: Jed Brown <jedbrown at mcs.anl.gov>
> Subject: Re: [petsc-users] DMGetMatrix segfault
> To: PETSc users list <petsc-users at mcs.anl.gov>
> Message-ID:
> <CAM9tzSnQgb0_ypNTZzTsnXiUseA1k98h3hUPPTap1YNNqNUHXw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Tue, Jan 17, 2012 at 06:32, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
> > I'll update petsc-dev to call DMSetUp() automatically when it is needed.
> >
>
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/56deb0e7db8b
>
> ------------------------------
>
>
> End of petsc-users Digest, Vol 37, Issue 40
> *******************************************
>