[petsc-users] Question on local vec to global vec for dof > 1

Barry Smith bsmith at mcs.anl.gov
Thu May 22 19:34:52 CDT 2014


   DMDA does not work that way. Local and global vectors associated with DA’s are always “interlaced”.

  Barry
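
For concreteness, a minimal sketch of what that interlacing means for the indexing, assuming a 1d DMDA da with dof = 2 and a global vector x_vec_gbl from DMCreateGlobalVector; xs and xm are the owned range returned by DMDAGetCorners:

    PetscScalar, pointer :: v(:,:)
    PetscInt       :: xs, xm, i
    PetscErrorCode :: ierr

    call DMDAGetCorners(da, xs, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, &
                        xm, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, ierr)
    call DMDAVecGetArrayF90(da, x_vec_gbl, v, ierr)
    do i = xs, xs+xm-1
      v(0,i) = 0.0   ! dof 1 at global node i (0-based node index)
      v(1,i) = 0.0   ! dof 2 at the same node; the two dofs of a node are adjacent in the vector
    end do
    call DMDAVecRestoreArrayF90(da, x_vec_gbl, v, ierr)

The first index of the F90 pointer selects the field at a node, so a layout with all dof-1 values followed by all dof-2 values cannot be obtained directly from the DMDA vectors.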

On May 22, 2014, at 6:33 PM, Danyang Su <danyang.su at gmail.com> wrote:

> Hi Barry,
> 
> I use the following code to reorder the local rhs into the global rhs.
> 
> PetscScalar, pointer :: vecpointer1d(:,:)
> 
> call DMDAVecGetArrayF90(da,x_vec_gbl,vecpointer1d,ierr)   ! x_vec_gbl is a global vector created by DMCreateGlobalVector
> do i = nvzls,nvzle                                        ! local node numbers without ghost nodes
>   vecpointer1d(0,i-1) = x_array_loc(i-nvzgls+1)           ! x_array_loc is the local rhs
>   vecpointer1d(1,i-1) = x_array_loc(i-nvzgls+1+nvz)       ! nvz = 6 for the present 1d example
> end do
> call DMDAVecRestoreArrayF90(da,x_vec_gbl,vecpointer1d,ierr)
> 
> Now VecView gives the same rhs for 1 processor and 2 processors, but the rhs order is not what I expected. I want the global rhs vector to hold all the dof=1 values first and then the dof=2 values, since the local matrix and rhs hold values in that order (see the table below and the sketch that follows it).
> 
>            x_vec_gbl                              x_vec_gbl
> dof  node  VecView (current)          dof  node  VecView (expected)
> 1    1      1.39598e-021              1    1      1.39598e-021
> 2    1      0                         1    2      0
> 1    2     -0                         1    3      0
> 2    2     -0                         1    4      5.64237e-037
> 1    3     -0                         1    5      0
> 2    3     -0                         1    6     -7.52316e-037
> 1    4      5.64237e-037              1    7      7.52316e-037
> 2    4      4.81482e-035              1    8      0
> 1    5     -0                         1    9      1.68459e-016
> 2    5     -0                         1    10     0.1296
> 1    6     -7.52316e-037              2    1      0
> 2    6     -7.22224e-035              2    2      0
> 1    7      7.52316e-037              2    3      0
> 2    7      7.22224e-035              2    4      4.81482e-035
> 1    8     -0                         2    5      0
> 2    8     -0                         2    6     -7.22224e-035
> 1    9      1.68459e-016              2    7      7.22224e-035
> 2    9      128623                    2    8      0
> 1    10     0.1296                    2    9      128623
> 2    10     0                         2    10     0
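
If a field-separated copy of the rhs is really needed (for example, only for viewing), one option is to pull each component out of the interlaced global vector after assembly. A minimal sketch using VecStrideGather, assuming x_vec_gbl is the DMDA global vector (block size 2) and x_dof1, x_dof2 are hypothetical work vectors of half the global length with a matching parallel layout:

    ! x_dof1 and x_dof2 are hypothetical work vectors, not part of the original code
    call VecStrideGather(x_vec_gbl, 0, x_dof1, INSERT_VALUES, ierr)   ! gather all dof-1 values
    call VecStrideGather(x_vec_gbl, 1, x_dof2, INSERT_VALUES, ierr)   ! gather all dof-2 values
    call VecView(x_dof1, PETSC_VIEWER_STDOUT_WORLD, ierr)
    call VecView(x_dof2, PETSC_VIEWER_STDOUT_WORLD, ierr)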
> 
> Thanks and regards,
> 
> Danyang
> 
> 
> On 22/05/2014 4:03 PM, Barry Smith wrote:
>>   Always do things one small step at a time. On one process, what is x_vec_loc (use VecView on it)? Is it what you expect? Then run on two processes but call VecView on x_vec_loc only on the first process. Is it what you expect?
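
A minimal sketch of that check, assuming rank was obtained from MPI_Comm_rank on PETSC_COMM_WORLD (the local vector is sequential, so the SELF viewer is used):

    if (rank == 0) then
      call VecView(x_vec_loc, PETSC_VIEWER_STDOUT_SELF, ierr)
    end if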
>> 
>>   Also what is vecpointer1d declared to be?
>> 
>> 
>>   Barry
>> 
>> On May 22, 2014, at 4:44 PM, Danyang Su <danyang.su at gmail.com> wrote:
>> 
>> 
>>> On 22/05/2014 12:01 PM, Matthew Knepley wrote:
>>> 
>>>> On Thu, May 22, 2014 at 1:58 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>> Hi All,
>>>> 
>>>> I have a 1D transient flow problem (1 dof) coupled with energy balance (1 dof), so the total dof per node is 2.
>>>> 
>>>> The whole domain has 10 nodes in the z direction.
>>>> 
>>>> The program runs well on 1 processor but fails on 2 processors. The matrix is the same for 1 and 2 processors, but the rhs is different.
>>>> 
>>>> The following is used to set the rhs value.
>>>> 
>>>> call VecGetArrayF90(x_vec_loc, vecpointer, ierr)
>>>> vecpointer = (calculate the rhs value here)                                 ! fill the local (ghosted) rhs
>>>> call VecRestoreArrayF90(x_vec_loc, vecpointer, ierr)
>>>> call DMLocalToGlobalBegin(da, x_vec_loc, INSERT_VALUES, x_vec_gbl, ierr)    ! copy owned values into the global vector
>>>> call DMLocalToGlobalEnd(da, x_vec_loc, INSERT_VALUES, x_vec_gbl, ierr)
>>>> 
>>>>                                                              VecView (correct)      VecView (wrong)
>>>> dof   local node   local rhs, Process [0]                    Process [0]            Process [0]
>>>> 1     1             1.395982780116148E-021                    1.39598e-021           1.39598e-021
>>>> 1     2             0.000000000000000E+000                    0                      0
>>>> 1     3             0.000000000000000E+000                    0                      0
>>>> 1     4             5.642372883946980E-037                    5.64237e-037           5.64237e-037
>>>> 1     5             0.000000000000000E+000                    0                      0
>>>> 1     6            -1.395982780116148E-021                   -7.52316e-037          -1.39598e-021     Line A
>>>> 2     1             0.000000000000000E+000                    7.52316e-037           0
>>>> 2     2             0.000000000000000E+000                    0                      0
>>>> 2     3             0.000000000000000E+000                    1.68459e-016           0
>>>> 2     4             4.814824860968090E-035                    0.1296                 4.81482e-035
>>>> 2     5             0.000000000000000E+000                                           Process [1]       Line B
>>>> 2     6            -1.371273884908092E-019                    0                      7.52316e-037      Line C
>>>>                                                               0                      0
>>>>                    local rhs, Process [1]                     0                      1.68459e-016
>>>> 1     1             1.395982780116148E-021                    4.81482e-035           0.1296            Line D
>>>> 1     2            -7.523163845262640E-037                    0                      1.37127e-019      Line E
>>>> 1     3             7.523163845262640E-037                   -7.22224e-035          -7.22224e-035
>>>> 1     4             0.000000000000000E+000                    7.22224e-035           7.22224e-035
>>>> 1     5             1.684590875336239E-016                    0                      0
>>>> 1     6             0.129600000000000                         128623                 128623
>>>> 2     1             1.371273884908092E-019                    0                      0                 Line F
>>>> 2     2            -7.222237291452134E-035
>>>> 2     3             7.222237291452134E-035
>>>> 2     4             0.000000000000000E+000
>>>> 2     5             128623.169844761
>>>> 2     6             0.000000000000000E+000
>>>> 
>>>> Lines A, C, D, and F are the ghost values for the 2 subdomains, but when run with 2 processors, the program treats Lines B, C, D, and E as the ghost values.
>>>> How can I handle this kind of local-to-global vector assembly?
>>>> 
>>>> Why are you not using DMDAVecGetArrayF90()? This is exactly what it is for.
>>>> 
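
A minimal sketch of that pattern applied to the local (ghosted) vector, assuming the same 1d DMDA da with dof = 2; the ghosted range gxs, gxm comes from DMDAGetGhostCorners, so the spatial index is a global node index rather than 1..nvz:

    PetscScalar, pointer :: v(:,:)
    PetscInt       :: gxs, gxm, i
    PetscErrorCode :: ierr

    call DMDAGetGhostCorners(da, gxs, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, &
                             gxm, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, ierr)
    call DMDAVecGetArrayF90(da, x_vec_loc, v, ierr)
    do i = gxs, gxs+gxm-1
      v(0,i) = 0.0   ! dof 1 at (possibly ghosted) node i
      v(1,i) = 0.0   ! dof 2 at the same node
    end do
    call DMDAVecRestoreArrayF90(da, x_vec_loc, v, ierr)
    call DMLocalToGlobalBegin(da, x_vec_loc, INSERT_VALUES, x_vec_gbl, ierr)
    call DMLocalToGlobalEnd(da, x_vec_loc, INSERT_VALUES, x_vec_gbl, ierr)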
>>> Thanks, Matthew. 
>>> 
>>> I tried the following code but still cannot get the correct global rhs vector:
>>>  
>>> call DMDAVecGetArrayF90(da,x_vec_loc,vecpointer1d,ierr)
>>> do i = 1,nvz                                       ! nvz is the number of local nodes, here 6
>>>   vecpointer1d(0,i-1) = x_array_loc(i)             ! assume x_array_loc is the local rhs (the third column in the data above)
>>>   vecpointer1d(1,i-1) = x_array_loc(i+nvz)
>>> end do
>>> call DMDAVecRestoreArrayF90(da,x_vec_loc,vecpointer1d,ierr)
>>> call DMLocalToGlobalBegin(da,x_vec_loc,INSERT_VALUES, x_vec_gbl,ierr)
>>> call DMLocalToGlobalEnd(da,x_vec_loc,INSERT_VALUES, x_vec_gbl,ierr)
>>> 
>>> 
>>> Now the rhs for 1 processor is as follows. It is not what I want.
>>> 
>>> 1.39598e-021
>>> 0
>>> -0
>>> -0
>>> -0
>>> -0
>>> 5.64237e-037
>>> 4.81482e-035
>>> -0
>>> -0
>>> -7.52316e-037
>>> -7.22224e-035
>>> 7.52316e-037
>>> 7.22224e-035
>>> -0
>>> -0
>>> 1.68459e-016
>>> 128623
>>> 0.1296
>>> 0
>>> 
>>>>    Matt
>>>>  
>>>> 
>>>> In fact, the code works if the dof and local node ordering is as follows.
>>>> dof     local node   
>>>> 1            1       
>>>> 2            1       
>>>> 1            2       
>>>> 2            2       
>>>> 1            3       
>>>> 2            3   
>>>> 
>>>> Thanks and regards,
>>>> 
>>>> Danyang
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> -- Norbert Wiener
>>>> 
> 


