sparsity pattern setup
Fredrik Bengzon
fredrik.bengzon at math.umu.se
Tue Sep 22 07:51:59 CDT 2009
Hi,
This is the problem with the allocation algorithm in 3D that I was talking
about. I guess I should have read the whole thread before asking :)
https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2008-May/003022.html
/Fredrik
Matthew Knepley wrote:
> On Mon, Sep 21, 2009 at 4:53 PM, Fredrik Bengzon
> <fredrik.bengzon at math.umu.se> wrote:
>
> Hi,
> Ryan, I'm aware of Barry's post
>
> https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2008-May/003020.html
>
> and it works fine for triangle meshes. However, I do not see how
> this can be used for tetrahedral meshes.
>
>
> It is the same for tetrahedra. In fact, this algorithm can be
> generalized to work
> for any topology:
>
> http://arxiv.org/abs/0908.4427
>
> Matt
>
>
> /Fredrik
>
>
> Ryan Yan wrote:
>
> Hi Fredrik,
> If I understand correctly, I have the same issue as what you
> have here.
>
> I do not have the code yet (it also depends on how you store your
> matrix data), but I can forward Barry's idea to you. Hope this is
> helpful to you.
>
> Yan,
>
> Simply read through the ASCII file(s) twice. The first time, count
> the number of blocks per row and preallocate; then read through the
> ASCII file again, reading and setting the values. This will be very
> fast.
>
> Barry
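>
> A minimal sketch of this two-pass approach for one process's file, in
> the 2009-era PETSc API used in this thread. The per-rank file name and
> the ASCII layout assumed here (a header with the local block-row and
> block counts, then "blockrow blockcol" followed by the 25x25 block
> values) are hypothetical; adapt the fscanf calls to the real BCSR
> format. A real, double-precision, 32-bit-PetscInt build is assumed,
> and error checking of the file I/O is omitted.
>
> #include <petscmat.h>
>
> int main(int argc,char **argv)
> {
>   Mat            A;
>   PetscViewer    viewer;
>   FILE          *fp;
>   char           fname[64];
>   PetscMPIInt    rank;
>   PetscInt       bs = 25,mlocal,rstart,nblocks,i,j,br,bc;
>   PetscInt      *d_nnz,*o_nnz;
>   PetscScalar   *vals;
>   PetscErrorCode ierr;
>
>   ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
>   ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
>   sprintf(fname,"bcsr.%d",rank);   /* hypothetical per-rank file name */
>   fp = fopen(fname,"r");
>
>   /* assumed header: number of local block rows, number of local blocks */
>   fscanf(fp,"%d %d",&mlocal,&nblocks);
>
>   /* global index of this process's first block row */
>   ierr = MPI_Scan(&mlocal,&rstart,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
>   rstart -= mlocal;
>
>   /* pass 1: count blocks per block row, split diagonal/off-diagonal */
>   ierr = PetscMalloc(mlocal*sizeof(PetscInt),&d_nnz);CHKERRQ(ierr);
>   ierr = PetscMalloc(mlocal*sizeof(PetscInt),&o_nnz);CHKERRQ(ierr);
>   for (i=0; i<mlocal; i++) d_nnz[i] = o_nnz[i] = 0;
>   for (i=0; i<nblocks; i++) {
>     fscanf(fp,"%d %d",&br,&bc);
>     for (j=0; j<bs*bs; j++) fscanf(fp,"%*lf");  /* skip the block values */
>     if (bc >= rstart && bc < rstart+mlocal) d_nnz[br-rstart]++;
>     else                                    o_nnz[br-rstart]++;
>   }
>
>   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
>   ierr = MatSetSizes(A,mlocal*bs,mlocal*bs,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
>   ierr = MatSetType(A,MATMPIBAIJ);CHKERRQ(ierr);
>   ierr = MatMPIBAIJSetPreallocation(A,bs,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
>
>   /* pass 2: reread the file, this time setting the values */
>   rewind(fp);
>   fscanf(fp,"%d %d",&mlocal,&nblocks);
>   ierr = PetscMalloc(bs*bs*sizeof(PetscScalar),&vals);CHKERRQ(ierr);
>   for (i=0; i<nblocks; i++) {
>     fscanf(fp,"%d %d",&br,&bc);
>     for (j=0; j<bs*bs; j++) fscanf(fp,"%lf",&vals[j]);
>     ierr = MatSetValuesBlocked(A,1,&br,1,&bc,vals,INSERT_VALUES);CHKERRQ(ierr);
>   }
>   fclose(fp);
>   ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>
>   /* dump the assembled matrix to a PETSc binary file */
>   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
>   ierr = MatView(A,viewer);CHKERRQ(ierr);
>   ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);
>
>   ierr = PetscFree(d_nnz);CHKERRQ(ierr);
>   ierr = PetscFree(o_nnz);CHKERRQ(ierr);
>   ierr = PetscFree(vals);CHKERRQ(ierr);
>   ierr = MatDestroy(A);CHKERRQ(ierr);
>   ierr = PetscFinalize();CHKERRQ(ierr);
>   return 0;
> }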
>
>
> On Sep 20, 2009, at 10:20 AM, Ryan Yan wrote:
>
> Hi All,
> I have a large application: a 3D unstructured mesh (both tet and hex
> elements) with 30000 nodes and 25 dofs on each vertex. In the
> following I will denote blksize = 25.
>
> I am testing how to build up a PETSc matrix object quickly.
>
> The data I have is in Block Compressed Sparse Row (BCSR) files, and
> my objective is to read these files and generate PETSc binaries.
>
> First, I chose MATMPIAIJ: I opened the BCSR data files on each
> processor and set up the preallocation using
>
> MatMPIAIJSetPreallocation(A,blksize,PETSC_NULL,blksize,PETSC_NULL);
>
> The reason I chose 25 as the number for d_nz and o_nz is that I do
> not have access to the ordering of the vertices, so it is a
> worst-case setup. It takes about 7 minutes on 30 MIPS nodes (180
> processors) to write the output into PETSc binaries.
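>
> For reference, a sketch of the surrounding calls for this setup; the
> communicator and the function arguments shown (nlocal, the number of
> locally owned rows) are assumptions, not from the original post:
>
> #include <petscmat.h>
>
> /* Sketch: create an MPIAIJ matrix with the flat worst-case estimate
>    of blksize nonzeros per scalar row described above. */
> PetscErrorCode CreateWorstCaseAIJ(MPI_Comm comm,PetscInt nlocal,
>                                   PetscInt blksize,Mat *A)
> {
>   PetscErrorCode ierr;
>
>   ierr = MatCreate(comm,A);CHKERRQ(ierr);
>   ierr = MatSetSizes(*A,nlocal,nlocal,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
>   ierr = MatSetType(*A,MATMPIAIJ);CHKERRQ(ierr);
>   /* blksize in both the diagonal and off-diagonal parts, no per-row arrays */
>   ierr = MatMPIAIJSetPreallocation(*A,blksize,PETSC_NULL,blksize,PETSC_NULL);CHKERRQ(ierr);
>   return 0;
> }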
>
> Second, I chose MATMPIBAIJ with the same procedure as above, but set
> up
>
> MatMPIBAIJSetPreallocation(A,blksize,blksize,PETSC_NULL,blksize,PETSC_NULL);
>
> where blksize = 25, which is again the worst case. This experiment
> takes forever and could not generate the PETSc binaries.
>
> I guess the reason it takes so long in the MATMPIBAIJ case is that I
> did not set up the preallocation accurately. Although I think the
> preallocation is not accurate in the MATMPIAIJ case either, its
> effect does not seem as serious as for MPIBAIJ. Please correct me if
> there are other reasons.
>
> Can anyone please give a hint on how to set up the preallocation
> correctly for an unstructured mesh without knowing the mesh ordering?
>
> Thank you very much in advance,
>
> Yan
>
>
>
>
>
> On Mon, Sep 21, 2009 at 4:24 PM, Fredrik Bengzon
> <fredrik.bengzon at math.umu.se> wrote:
>
> Hi,
> This is probably the wrong forum to ask, but does anyone have a
> piece of code for computing the correct d_nnz and o_nnz vectors
> needed for assembly of the stiffness (MPIAIJ) matrix on an
> unstructured tetrahedral mesh, given the node-to-element adjacency?
> Thanks,
> Fredrik
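>
> Following up on Barry's counting idea above, a minimal sketch of such
> a routine for one scalar dof per node. The mesh array names and
> layouts (elems, n2e_ptr, n2e_adj) and the bound MAX_NBRS are
> assumptions about the mesh data structure, not from the original
> posts:
>
> #include <petscmat.h>
>
> #define MAX_NBRS 512   /* assumed bound on the neighbours of one node */
>
> /* Count d_nnz/o_nnz for the scalar stiffness matrix on a tet mesh.
>    elems[4*e..4*e+3] : global vertex numbers of tetrahedron e
>    n2e_ptr, n2e_adj  : CSR node-to-element adjacency for owned nodes
>    rstart..rend-1    : global node numbers owned by this process      */
> PetscErrorCode CountNNZ(PetscInt rstart,PetscInt rend,
>                         const PetscInt *elems,const PetscInt *n2e_ptr,
>                         const PetscInt *n2e_adj,
>                         PetscInt *d_nnz,PetscInt *o_nnz)
> {
>   PetscInt i,k,v,m,e,nnbrs,nbrs[MAX_NBRS];
>
>   for (i=rstart; i<rend; i++) {
>     /* collect the distinct vertices sharing an element with node i
>        (including i itself); these are the nonzero columns of row i */
>     nnbrs = 0;
>     for (k=n2e_ptr[i-rstart]; k<n2e_ptr[i-rstart+1]; k++) {
>       e = n2e_adj[k];
>       for (v=0; v<4; v++) {
>         PetscInt j = elems[4*e+v];
>         for (m=0; m<nnbrs; m++) if (nbrs[m] == j) break;
>         if (m == nnbrs) nbrs[nnbrs++] = j;  /* not seen yet: record it */
>       }
>     }
>     /* classify each neighbour as diagonal or off-diagonal block */
>     d_nnz[i-rstart] = o_nnz[i-rstart] = 0;
>     for (m=0; m<nnbrs; m++) {
>       if (nbrs[m] >= rstart && nbrs[m] < rend) d_nnz[i-rstart]++;
>       else                                     o_nnz[i-rstart]++;
>     }
>   }
>   return 0;
> }
>
> The resulting vectors go straight into
> MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz). For several dofs per
> node, multiply the counts by the number of dofs and repeat each count
> for every row of the node, or switch to BAIJ and use the block counts
> directly.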
>
>
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener