VecView behaviour

Barry Smith bsmith at mcs.anl.gov
Wed Jun 3 17:38:32 CDT 2009


On Jun 3, 2009, at 5:29 PM, Andreas Grassl wrote:

> Barry Smith schrieb:
>>
>>  When properly run, nn-cg (are you sure everything is symmetric?)
>> should require 10-30 iterations (certainly for model problems).
>
> ok, this was the number I expected.
>
>>
>>> nn-cg on 2 nodes 229 iterations, condition 6285
>>> nn-cg on 4 nodes 331 iterations, condition 13312
>>
>>  Are you sure that your operator has the null space of only  
>> constants?
>
> no, I didn't touch anything regarding the null space, since I thought
> it would be done inside the NN preconditioner. Does this mean I have
> to set up a null space of the size of the Schur complement system,
> i.e. the number of interface DOFs?

   No, I don't think you need to do anything about the null space. The
code in PETSc for NN is for (and only for) a null space of constants.
BTW: with 2 or 4 subdomains they all touch the boundary and likely
don't have a null space anyway.
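
For reference, if one did need to declare a constant null space explicitly (not necessary here, as noted above), the usual PETSc idiom is along these lines. This is a sketch using the modern API names; 2009-era PETSc attached the null space to the solver via KSPSetNullSpace instead, and `A` stands for the user's assembled matrix:

```c
/* Sketch: declare that the operator A has the constant vector in its
 * null space.  In current PETSc the null space is attached to the
 * matrix itself with MatSetNullSpace().                              */
MatNullSpace nullsp;
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE /* contains constants */,
                   0, NULL, &nullsp);
MatSetNullSpace(A, nullsp);
MatNullSpaceDestroy(&nullsp);
```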

Run with -ksp_view and make sure the local solves are being done with LU.
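
A minimal command line for such a check might look as follows (-ksp_type cg, -pc_type nn, and -ksp_view are standard PETSc options; the executable name and process count are placeholders):

```shell
# Run CG with the NN preconditioner and print the full solver
# configuration, including the nested KSP/PC objects used for the
# subdomain solves.  "./solver" stands in for the user's application.
mpiexec -n 4 ./solver -ksp_type cg -pc_type nn -ksp_view
```

The -ksp_view output lists each nested solver, so one can check there whether the local factorizations are LU rather than a default ILU.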

    Barry

>
>
> cheers,
>
> ando
>
>>
>>    Barry
>>
>>
>> On Jun 3, 2009, at 5:29 AM, Andreas Grassl wrote:
>>
>>> Barry Smith schrieb:
>>>>  Hmm, it sounds like the difference between local "ghosted" vectors
>>>> and the global parallel vectors. But I do not understand why any  
>>>> of the
>>>> local vector entries would be zero.
>>>> Doesn't the vector X that is passed into KSP (or SNES) have the  
>>>> global
>>>> entries and uniquely define the solution? Why is viewing that not  
>>>> right?
>>>>
>>>
>>> I still don't fully understand the underlying processes of the
>>> whole PCNN
>>> solution procedure, but experimenting a bit, I substituted
>>>
>>> MatCreateIS(commw, ind_length, ind_length, PETSC_DECIDE,  
>>> PETSC_DECIDE,
>>> gridmapping, &A);
>>>
>>> by
>>>
>>> MatCreateIS(commw, PETSC_DECIDE, PETSC_DECIDE, actdof, actdof,
>>> gridmapping, &A);
>>>
>>> and received the needed results.
>>>
>>> Furthermore, it seems that the load balance is now better,
>>> although I
>>> still
>>> don't reach the expected values, e.g.
>>> ilu-cg 320 iterations, condition 4601
>>> cg only 1662 iterations, condition 84919
>>>
>>> nn-cg on 2 nodes 229 iterations, condition 6285
>>> nn-cg on 4 nodes 331 iterations, condition 13312
>>>
>>> or should one not expect nn-cg to be faster than ilu-cg?
>>>
>>> cheers,
>>>
>>> ando
>>>
>>> -- 
>>> /"\                               Grassl Andreas
>>> \ /    ASCII Ribbon Campaign      Uni Innsbruck Institut f.  
>>> Mathematik
>>> X      against HTML email        Technikerstr. 13 Zi 709
>>> / \                               +43 (0)512 507 6091
>>
>
> -- 
> /"\
> \ /      ASCII Ribbon
>  X    against HTML email
> / \
>
>


