assembly

Thomas Geenen geenen at gmail.com
Sat Feb 2 12:30:49 CST 2008


On Saturday 02 February 2008 18:33, Hong Zhang wrote:
> On Sat, 2 Feb 2008, Thomas Geenen wrote:
> > Dear Petsc users,
> >
> > I would like to understand what is slowing down the assembly phase of my
> > matrix. I create a matrix with MatCreateMPIAIJ. I make a rough guess of
> > the number of off-diagonal entries and then use a conservative value to
> > make sure I do not need extra mallocs (the number of diagonal entries is
> > exact).
> > Next I call MatSetValues and then MatAssemblyBegin, MatAssemblyEnd.
> > The first time I call MatSetValues and MatAssemblyBegin,
> > MatAssemblyEnd it takes about 170 seconds;
> > the second time it takes 0.3 seconds.
> > I run it on 6 CPUs and I do fill quite a number of row entries on the
> > "wrong" CPU. However, that is also the case in the second run. I checked
> > that there are no additional mallocs:
> > MatGetInfo reports info.mallocs=0 both after MatSetValues and after
> > MatAssemblyBegin, MatAssemblyEnd.
>
> Run your code with the option '-log_summary' and check which function
> call dominates the execution time.
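
(For reference, the option is simply appended to the run command, e.g.

  mpiexec -n 6 ./myapp -log_summary

where the launcher and executable name are placeholders for whatever is used
locally.)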

The time is spent in MatStashScatterGetMesg_Private.

>
> > I run it on 6 CPUs and I do fill quite a number of row entries on the
> > "wrong" CPU.
>
> Likely, the communication involved in sending the entries to the
> correct CPU consumes the time. Can you fill the entries on the
> correct CPU?

The second time, the entries are filled on the wrong CPU as well.
I am curious about the difference in time between run 1 and run 2.
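
For reference, the assembly pattern described in the quoted message is roughly
the following minimal sketch. All sizes, per-row nonzero estimates, and the
inserted values are placeholders rather than the figures from the actual run,
and the code targets the PETSc 2.x-era interface (MatCreateMPIAIJ,
MatDestroy(Mat)) that was current at the time of this thread:

/* assembly_sketch.c - sketch of MPIAIJ preallocation and assembly.
 * Placeholder sizes and values; not the actual application code. */
static char help[] = "Sketch of MPIAIJ preallocation and assembly.\n";

#include "petscmat.h"

int main(int argc,char **argv)
{
  Mat            A;
  MatInfo        info;
  PetscErrorCode ierr;
  PetscInt       nlocal = 1000;   /* local (diagonal block) rows, exact       */
  PetscInt       d_nz   = 7;      /* exact per-row count, diagonal block      */
  PetscInt       o_nz   = 12;     /* conservative guess, off-diagonal block   */
  PetscInt       rstart,rend,M,N,row,col;
  PetscScalar    one = 1.0;

  ierr = PetscInitialize(&argc,&argv,(char *)0,help);CHKERRQ(ierr);

  /* Preallocate so that no mallocs are needed during MatSetValues. */
  ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,nlocal,nlocal,
                         PETSC_DETERMINE,PETSC_DETERMINE,
                         d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  ierr = MatGetSize(A,&M,&N);CHKERRQ(ierr);

  /* Entries for locally owned rows go straight into the preallocated storage. */
  for (row = rstart; row < rend; row++) {
    col  = row;
    ierr = MatSetValues(A,1,&row,1,&col,&one,INSERT_VALUES);CHKERRQ(ierr);
  }

  /* One entry set on the "wrong" CPU: this row belongs to another process, so
     it is kept in the stash and communicated during MatAssemblyBegin/End.    */
  row  = rend % M;
  col  = row;
  ierr = MatSetValues(A,1,&row,1,&col,&one,INSERT_VALUES);CHKERRQ(ierr);

  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Verify that the preallocation was sufficient (info.mallocs should be 0). */
  ierr = MatGetInfo(A,MAT_LOCAL,&info);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"mallocs during assembly: %g\n",
                     (double)info.mallocs);CHKERRQ(ierr);

  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}

Running it with -log_summary (as suggested above) shows how the assembly time
splits between local insertion and the stash communication.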

>
> Hong
>
> > cheers
> > Thomas



