assembly

Barry Smith bsmith at mcs.anl.gov
Sun Feb 3 13:51:51 CST 2008


    Hmmm, are you saying the first round of setting values still
takes much longer than the second round? Or is it that the time
in MatAssemblyBegin() is much longer the first time?

   The MatAssembly process has one piece of code whose
work is order n*size, where n is the stash size and size is the
number of processes; all other work is only order n.

    Could you send the -log_summary output?
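
    (For reference, the summary is printed at PetscFinalize() when the
program is run with that option; the invocation would look something
like

        mpiexec -n 6 ./yourapp -log_summary

where the executable name and process count are placeholders.)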

    Barry


On Feb 3, 2008, at 6:44 AM, Thomas Geenen wrote:

> I call
> ierr = MatStashSetInitialSize(A[*seqsolve],stash_size,
> stash_size);CHKERRQ(ierr);
> with 100 000 000 for the stash size, to make sure that's not the
> bottleneck.
>
> The assembly time remains unchanged, however.
>
> nstash in MatAssemblyBegin_MPIAIJ (CPU=0) = 109485
> reallocs in MatAssemblyBegin_MPIAIJ =  0
>
> cheers
> Thomas
>
> On Saturday 02 February 2008 23:19, Barry Smith wrote:
>>    The matstash also has a concept of preallocation. During the
>> first setvalues it is allocating more and more memory for the stash.
>> In the second setvalues the stash is large enough, so it does not
>> require any additional allocation.
>>
>>    You can use the option -matstash_initial_size <size> to allocate
>> enough space initially so that the first setvalues is also fast. It
>> does not look like there is a way coded to get the <size> that you
>> should use; it should be set to the maximum number of nonzeros any
>> process has that belong to other processes (see the sketch at the
>> end of this page). The stash handling code is in
>> src/mat/utils/matstash.c; perhaps you can figure out how to print
>> out the sizes needed with PetscInfo()?
>>
>>
>>    Barry
>>
>> On Feb 2, 2008, at 12:30 PM, Thomas Geenen wrote:
>>> On Saturday 02 February 2008 18:33, Hong Zhang wrote:
>>>> On Sat, 2 Feb 2008, Thomas Geenen wrote:
>>>>> Dear Petsc users,
>>>>>
>>>>> I would like to understand what is slowing down the assembly
>>>>> phase of my matrix. I create a matrix with MatCreateMPIAIJ; I
>>>>> make a rough guess of the number of off-diagonal entries and then
>>>>> use a conservative value to make sure I do not need extra
>>>>> mallocs. (The number of diagonal entries is exact.)
>>>>> Next I call MatSetValues and then MatAssemblyBegin,
>>>>> MatAssemblyEnd. The first time I call MatSetValues and
>>>>> MatAssemblyBegin, MatAssemblyEnd it takes about 170 seconds; the
>>>>> second time, 0.3 seconds.
>>>>> I run it on 6 cpus and I do fill quite a number of row entries on
>>>>> the "wrong" cpu. However, that is also the case in the second
>>>>> run. I checked with MatGetInfo that there are no additional
>>>>> mallocs (info.mallocs=0), both after MatSetValues and after
>>>>> MatAssemblyBegin, MatAssemblyEnd.
>>>>
>>>> Run your code with the option '-log_summary' and check which  
>>>> function
>>>> call dominates the execution time.
>>>
>>> The time is spent in MatStashScatterGetMesg_Private.
>>>
>>>>> I run it on 6 cpus and I do fill quite a number of row entries on
>>>>> the "wrong" cpu.
>>>>
>>>> Likely, the communication that sends the entries to the correct
>>>> cpu consumes the time. Can you fill the entries on the correct
>>>> cpu?
>>>
>>> The second time the entries are filled on the wrong CPU as well;
>>> I am curious about the difference in time between runs 1 and 2.
>>>
>>>> Hong
>>>>
>>>>> cheers
>>>>> Thomas
>
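
For readers following the thread, here is a minimal sketch of the
assembly pattern Thomas describes above: conservative preallocation,
MatSetValues, MatAssemblyBegin/End, and the MatGetInfo malloc check.
The local size, per-row counts, and values are placeholder assumptions,
and the calls follow the PETSc 2.3-era API in use when this thread was
written; treat it as an illustration, not a drop-in program.

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat            A;
      MatInfo        info;
      PetscInt       mlocal = 1000;          /* placeholder local size */
      PetscInt       row, cols[1];
      PetscScalar    vals[1];
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);

      /* exact diagonal counts would normally go in a d_nnz[] array;
         constant per-row guesses keep the sketch short */
      ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD, mlocal, mlocal,
                             PETSC_DETERMINE, PETSC_DETERMINE,
                             27, PETSC_NULL,  /* diagonal-block guess */
                             10, PETSC_NULL,  /* off-diagonal guess   */
                             &A);CHKERRQ(ierr);

      /* ... the application's MatSetValues() loop goes here; entries
         may land in rows owned by other processes (the "wrong" cpu) */
      row = 0; cols[0] = 0; vals[0] = 1.0;
      ierr = MatSetValues(A, 1, &row, 1, cols, vals, ADD_VALUES);CHKERRQ(ierr);

      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      /* confirm the preallocation was sufficient: mallocs should be 0 */
      ierr = MatGetInfo(A, MAT_LOCAL, &info);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "mallocs during assembly: %g\n",
                         info.mallocs);CHKERRQ(ierr);

      ierr = MatDestroy(A);CHKERRQ(ierr);
      ierr = PetscFinalize();CHKERRQ(ierr);
      return 0;
    }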

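And here is a sketch of Barry's sizing suggestion: count how many of
the values this process will set that belong to rows owned by other
processes, and give that count to MatStashSetInitialSize() before the
first MatSetValues(). The helper name and the rows[]/ncols[] inputs
are hypothetical stand-ins for whatever the application's assembly
loop produces.

    #include <petscmat.h>

    /* Preallocate the matrix stash so the first assembly round does
       no reallocation.  rows[i] is the global row index of the i-th
       block of values the caller will set; ncols[i] is how many
       values that block contains. */
    PetscErrorCode PreallocateStash(Mat A, PetscInt nrows,
                                    const PetscInt rows[],
                                    const PetscInt ncols[])
    {
      PetscInt       rstart, rend, i, nstash = 0;
      PetscErrorCode ierr;

      /* rows [rstart, rend) are owned by this process */
      ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);

      /* count values destined for rows owned by other processes */
      for (i = 0; i < nrows; i++) {
        if (rows[i] < rstart || rows[i] >= rend) nstash += ncols[i];
      }

      ierr = MatStashSetInitialSize(A, nstash, nstash);CHKERRQ(ierr);
      return 0;
    }

Note that this sizes the stash from purely local knowledge; the
"maximum over all processes" figure Barry mentions is only needed if
you want a single number to pass on the command line via
-matstash_initial_size.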


