[petsc-users] MatAssembly Cost Jump
Jed Brown
jed at jedbrown.org
Sat Mar 24 13:47:32 CDT 2018
Have you preallocated? Everything you are doing should take less than 1
second. I know this is about MatSetValues, but unpacking from the stash
also takes time.
https://www.mcs.anl.gov/petsc/documentation/faq.html#efficient-assembly
You can also try running with -matstash_bts, which will use a different
algorithm to communicate the stash.
Let us know how this works for you.
Ali Berk Kahraman <aliberkkahraman at yahoo.com> writes:
> Dear All,
>
> I have a sequential algorithm to determine the nonzero structure of my
> Jacobian on my unstructured grid. The overall code goes like the following:
>
> 1. Do other MPI stuff regarding the problem, irrelevant to this mail
> 2. If you are rank 0, set all of the nonzeros of the MPI Jacobian matrix
> 3. MatAssemblyBegin/End, sending the nonzero locations from rank 0 to
> the other processes
>
> I have been testing the algorithm on a structured grid, for which I know
> the Jacobian nonzero structure, to see how long it takes. Here is where
> I have a question. For a square grid of 129x129 nodes, making a
> 16641x16641 Jacobian matrix with 16641x13 nonzeros, step 2 takes 1
> second and step 3 takes 2 seconds. For a square grid of 257x257 nodes,
> making a 66049x66049 Jacobian matrix with 66049x13 nonzeros, step 2
> takes 9 seconds and step 3 takes around 50 seconds.
>
> My question is, why is the time for step 3 multiplied by a factor of 25
> when the data that should be transferred only quadruples? Is this
> expected behavior?
>
> Best Regards,
>
> Ali Berk Kahraman
> M.Sc. Student, Mechanical Engineering
> Boğaziçi Uni, Istanbul, Turkey