[Nek5000-users] Using an external file for forcing in 'ffx'

nek5000-users at lists.mcs.anl.gov
Mon Jul 23 14:54:08 CDT 2018


Hi,
- usrdat2 looks ok. Just some comments: use real instead of double 
precision, since we compile with -r8. Then you could define your 
parameter right at the beginning and use it throughout, instead of the 
hardcoded 651. But these are details; it should work, reading the data 
on rank 0 and broadcasting them to all processors.
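
Roughly like this (an untested sketch, same file and data layout as 
yours):

      subroutine usrdat2
      include 'SIZE'
      include 'TOTAL'

      integer my_array_size
      parameter (my_array_size=651)    ! one place to change the size
      real my_array(my_array_size)     ! real: we compile with -r8
      common /myarray/ my_array
      real work(my_array_size)
      integer k

      call rzero(my_array,my_array_size)
      call rzero(work,my_array_size)

      if (nid.eq.0) then               ! read on rank 0 only
         open (unit=49,file='fint41.txt')
         do k = 1,my_array_size
            read(49,*) my_array(k)
         enddo
         close(49)
      endif

c     broadcast: all other ranks hold zeros, so the sum distributes the data
      call gop(my_array,work,'+  ',my_array_size)

      return
      end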

- in userf: again, use real instead of double precision. Also, you 
might read the FD coordinates in usrdat2 as well. It works the way you 
do it now, but it is very inefficient (the files are re-read at every 
time step by every processor).
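
For example, in usrdat2, next to the forcing data (a sketch; the 
common block name /myfdgrid/ is just something I made up to pass the 
coordinates on to userf):

      real xin(21), yin(31)            ! FD grid coordinates
      common /myfdgrid/ xin, yin
      real workx(21), worky(31)
      integer p, q

      call rzero(xin,21)
      call rzero(yin,31)
      if (nid.eq.0) then               ! read once, on rank 0 only
         open(16,file='xin.txt')
         do p=1,21
            read(16,*) xin(p)
         enddo
         close(16)
         open(12,file='yin.txt')
         do q=1,31
            read(12,*) yin(q)
         enddo
         close(12)
      endif
      call gop(xin,workx,'+  ',21)     ! distribute to all ranks
      call gop(yin,worky,'+  ',31)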

- you do not need to reshape the array; you can simply declare my_array 
with the proper shape in userf, i.e. my_array(31,21). In fact, you do 
not even need shaped_array at all, as you can pass my_array directly to 
pwl_interp_2d.
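
Since the common block fixes the storage and Fortran is column-major, 
declaring the same block with the 2D shape already gives you the 
"reshaped" view for free:

      real my_array(31,21)             ! the same 651 values as my_array(651),
      common /myarray/ my_array        ! no reshape or copy needed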

- the whole interpolation can be done once in the beginning anyway. Or 
do you change the forcing field as a function of time?
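
If the forcing is steady, the interpolation can go at the end of 
usrdat2, with the result kept in a common block for userf, e.g. (an 
untested sketch for a 2D case, nz1=1; /myforce/ and fgll are names I 
made up):

      real fgll(lx1*ly1*lz1*lelv)      ! forcing interpolated to the GLL points
      common /myforce/ fgll
      integer n

      n = nx1*ny1*nz1*nelv
c     xm1/ym1 are contiguous, so they can be passed directly
      call pwl_interp_2d(31,21,yin,xin,my_array,n,ym1,xm1,fgll)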

- you do not need separate fo and new_array (see above)

- then the part you marked in red (the element loop at the end of 
userf): you have now interpolated your forcing onto the GLL points, so 
ffx simply needs to be the entry of this forcing field at the current 
ix,iy,iz,e. There is no need for a loop over the (local) elements, and 
no need for the local-to-global mapping lglel; just take the local 
element index. Everything is local, as your interpolation was local as 
well (i.e. you only interpolated to the elements that each processor 
owns, because you used xm1 and ym1).
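
In other words, userf reduces to something like this (a sketch, 
assuming the interpolated field fgll was stored in /myforce/ in 
usrdat2 as above):

      subroutine userf (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      real fgll(lx1,ly1,lz1,lelv)      ! filled once in usrdat2
      common /myforce/ fgll
      integer e

      e = gllel(ieg)                   ! global-to-local element map

      ffx = fgll(ix,iy,iz,e)           ! value at the current GLL point
      ffy = 0.0
      ffz = 0.0

      return
      end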

Sorry if I am misunderstanding. It is not easy to debug code without a 
running case.

Hope this helps,
philipp

On 2018-07-23 16:57, nek5000-users at lists.mcs.anl.gov wrote:
> Hello,
> 
> Following up on this question again. Would appreciate any input.
> 
> Thanks,
> Saikat
> 
> 
> 
> On Mon, Jul 16, 2018 at 10:17 AM, <nek5000-users at lists.mcs.anl.gov> wrote:
> 
>     Hey everyone,
> 
>     Just following up on this question. I could really use some suggestions.
> 
>     Thanks,
>     Saikat
> 
>     Saikat Mukherjee,
>     PhD Student,
>     Paul Research Group - http://www.me.vt.edu/mpaul/
>     Engineering Science and Mechanics,
>     Virginia Tech.
> 
>     On Wed, Jul 11, 2018 at 11:34 AM, Saikat Mukherjee <msaikat6 at vt.edu> wrote:
> 
>         Hey All,
> 
>         I have been stuck on this problem for a while now, but I think
>         it should be simple, and you can probably tell where I am going wrong.
> 
>         - So I have a forcing function on a finite difference X-Y grid,
>         which I am reading in as a text file in the 'usrdat2' subroutine.
> 
>         - I am then distributing it across processors following an
>         earlier suggestion from Paul; here is a snippet of the routine.
> 
>         --------------------------------------------------------------------------------------------------------
> 
>               subroutine usrdat2
>               include 'SIZE'
>               include 'TOTAL'
> 
>               double precision :: my_array(651)
>               integer m
>               common /myarray/ my_array
> 
>               parameter (my_array_size=651)
>               real work(my_array_size)
> 
>               n = my_array_size
> 
>               call rzero(my_array,n)
>               call rzero(work,n)
> 
>               if (nid.eq.0) then
>                  open (unit=49,file='fint41.txt')
>                  do k = 1,n
>                     read(49,*) my_array(k)
>                  enddo
>                  close(49)
>               endif
> 
>               call gop(my_array,work,'+  ',n)   ! Sum over all processors
> 
>               return
>               end
> 
>         ---------------------------------------------------------------------------------------------------------
> 
>         - Next I am interpolating this forcing function onto the GLL
>         grid points in the userf routine, using a piecewise linear
>         interpolant, as follows:
> 
>         ----------------------------------------------------------------------------------------------------------
> 
>               subroutine userf (ix,iy,iz,ieg)
>               include 'SIZE'
>               include 'TOTAL'
>               include 'NEKUSE'
> 
>               EQUIVALENCE (prandtl,param(2))
> 
>               double precision :: my_array(651)
>               double precision :: shaped_array(31,21)
>               real new_array(nx1,ny1,nz1,nelv)
>               double precision :: xin(21),yin(31)
>               double precision :: xn(nx1*ny1*nelv),yn(nx1*ny1*nelv)
>               double precision :: fo(nx1*ny1*nelv)
>               integer i,j,k,l,p,q
>               integer e,eg
>               common /myarray/ my_array
> 
>               open(16,file='xin.txt')
>               do p=1,21
>                  read(16,*) xin(p)
>               end do
>               close(16)
> 
>               open(12,file='yin.txt')
>               do q=1,31
>                  read(12,*) yin(q)
>               end do
>               close(12)
> 
>               shaped_array = reshape(my_array,(/31,21/))
> 
>               do i=1,nx1*ny1*nelv
>                  xn(i) = xm1(i,1,1,1)
>                  yn(i) = ym1(i,1,1,1)
>               end do
> 
>               call pwl_interp_2d(31,21,yin,xin,shaped_array,
>      $                           nx1*ny1*nelv,yn,xn,fo)
> 
>               new_array = reshape(fo,(/nx1,ny1,nz1,nelv/))
> 
>               do e=1,nelv
>                  eg = lglel(e)
>                  ffx = new_array(ix,iy,iz,eg)
>               end do
> 
>               ffy = 0.0
>               ffz = 0.0
> 
>               return
>               end
> 
>         ---------------------------------------------------------------------------------------------------------------------------------------
> 
>         This is a test domain I made, with 6 elements. It seems to
>         work when I use only one core, i.e. in serial. In parallel it
>         gives me a different flowfield. My actual domain has 252
>         elements, which I intend to run on 48 cores, so I would like
>         to ask how to proceed. I think I am close, but I am slightly
>         confused about the part of the code I have marked in red (the
>         element loop at the end of userf).
> 
>         When I try this in my actual domain, I get an error:
>         "Segmentation fault occurred." I think the key here is the
>         parallelization and identifying the elements correctly, which
>         I am not sure I am doing right. Thanks for reading this
>         lengthy email. I will be eager to hear from you.
> 
>         Thanks,
>         Saikat
> 
> 
> 
> 
> 
> 
> 
>     _______________________________________________
>     Nek5000-users mailing list
>     Nek5000-users at lists.mcs.anl.gov
>     https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
> 
> 
> 
> 
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
> 

