[Nek5000-users] Using an external file for forcing in 'ffx'
nek5000-users at lists.mcs.anl.gov
Mon Jul 23 09:57:32 CDT 2018
Hello,
Following up on this question again. I would appreciate any input.
Thanks,
Saikat
On Mon, Jul 16, 2018 at 10:17 AM, <nek5000-users at lists.mcs.anl.gov> wrote:
> Hey everyone,
>
> Just following up on this question. I could really use some suggestions.
>
> Thanks,
> Saikat
>
> Saikat Mukherjee,
> PhD Student,
> Paul Research Group - http://www.me.vt.edu/mpaul/
> Engineering Science and Mechanics,
> Virginia Tech.
>
> On Wed, Jul 11, 2018 at 11:34 AM, Saikat Mukherjee <msaikat6 at vt.edu>
> wrote:
>
>> Hey All,
>>
>> I have been stuck on this problem for a while now, but I think it should
>> be simple and you can probably tell where I am going wrong.
>>
>> - So I have a forcing function on a finite-difference X-Y grid, which I
>> am reading in from a text file in the 'usrdat2' subroutine.
>>
>> - I am then distributing it across processors following a previous
>> suggestion from Paul; here is a snippet of the routine.
>>
>> --------------------------------------------------------------------------
>>
>>       subroutine usrdat2
>>
>>       include 'SIZE'
>>       include 'TOTAL'
>>
>>       parameter (my_array_size=651)
>>
>>       double precision :: my_array(my_array_size)
>>       real work(my_array_size)
>>       integer k, n
>>       common /myarray/ my_array
>>
>>       n = my_array_size
>>
>> c     Zero both buffers so the gop sum below leaves rank 0's data everywhere
>>       call rzero(my_array,n)
>>       call rzero(work,n)
>>
>> c     Only rank 0 reads the forcing data from file
>>       if (nid.eq.0) then
>>          open (unit=49,file='fint41.txt')
>>          do k = 1,n
>>             read(49,*) my_array(k)
>>          enddo
>>          close(49)
>>       endif
>>
>>       call gop(my_array,work,'+ ',n) ! Sum over all processors
>>
>>       return
>>       end
>> --------------------------------------------------------------------------
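>>
>> (An aside: I believe the zero-and-sum with gop above is just a one-to-all
>> broadcast. A minimal sketch of the same step using Nek's bcast wrapper,
>> which I have not tested, would be:
>>
>>       if (nid.eq.0) then
>>          open (unit=49,file='fint41.txt')
>>          do k = 1,n
>>             read(49,*) my_array(k)
>>          enddo
>>          close(49)
>>       endif
>>       call bcast(my_array, 8*n)   ! length in bytes; 8 per double precision word
>>
>> so only rank 0 touches the file and every rank ends up with the full array.)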
>>
>> - Next, in the userf routine, I interpolate this forcing function onto
>> the GLL grid points with a piecewise linear interpolant. Here is how:
>>
>> --------------------------------------------------------------------------
>>
>>       subroutine userf (ix,iy,iz,ieg)
>>
>>       include 'SIZE'
>>       include 'TOTAL'
>>       include 'NEKUSE'
>>
>>       EQUIVALENCE (prandtl,param(2))
>>
>>       double precision :: my_array(651)
>>       double precision :: shaped_array(31,21)
>>       real new_array(nx1,ny1,nz1,nelv)
>>       double precision :: xin(21),yin(31)
>>       double precision :: xn(nx1*ny1*nelv),yn(nx1*ny1*nelv)
>>       double precision :: fo(nx1*ny1*nelv)
>>       integer i,j,k,l,p,q
>>       integer e,eg
>>       common /myarray/ my_array
>>
>> c     Read the finite-difference grid coordinates
>>       open(16,file='xin.txt')
>>       do p=1,21
>>          read(16,*) xin(p)
>>       enddo
>>       close(16)
>>
>>       open(12,file='yin.txt')
>>       do q=1,31
>>          read(12,*) yin(q)
>>       enddo
>>       close(12)
>>
>>       shaped_array = reshape(my_array,(/31,21/))
>>
>> c     Gather the GLL coordinates of the local elements
>>       do i=1,nx1*ny1*nelv
>>          xn(i) = xm1(i,1,1,1)
>>          yn(i) = ym1(i,1,1,1)
>>       enddo
>>
>> c     Piecewise linear interpolation onto the GLL points
>>       call pwl_interp_2d(31,21,yin,xin,shaped_array,nx1*ny1*nelv,yn,xn,fo)
>>
>>       new_array = reshape(fo,(/nx1,ny1,nz1,nelv/))
>>
>> c     The element loop I am unsure about
>>       do e=1,nelv
>>          eg = lglel(e)
>>          ffx = new_array(ix,iy,iz,eg)
>>       enddo
>>
>>       ffy = 0.0
>>       ffz = 0.0
>>
>>       return
>>       end
>>
>>
>> --------------------------------------------------------------------------
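>>
>> One thing I suspect, with a sketch of what I mean (untested, and the
>> array name fint is just a placeholder): userf is called once per GLL
>> point, so all of the file reads and the interpolation above are repeated
>> for every single point. I could do that work once and cache the result
>> in a common block, e.g. with the usual icalld guard:
>>
>>       real fint(lx1,ly1,lz1,lelv)   ! compile-time bounds, so it can sit in common
>>       common /myforce/ fint
>>       integer icalld
>>       save icalld
>>       data icalld /0/
>>
>>       if (icalld.eq.0) then
>> c        read xin/yin, reshape my_array, call pwl_interp_2d as above,
>> c        and store the result in fint instead of new_array
>>          icalld = 1
>>       endif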
>>
>> This is a test domain I made, with 6 elements. It seems to work when I
>> use only 1 core, i.e., in serial. In parallel, it gives me a different
>> flow field. My actual domain has 252 elements, which I intend to run on
>> 48 cores, so I would like to ask how to proceed. I think I am close, but
>> I am slightly confused about the part of the code I had marked in red
>> (the element loop at the end of userf).
>>
>> When I try this in my actual domain, I get the error "Segmentation fault
>> occurred." I think the key here is the parallelization, i.e., identifying
>> the elements correctly, which I am not sure I am doing right. Thanks for
>> reading this lengthy email; I will be eager to hear from you.
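>>
>> My current guess at that red part (please correct me if I am wrong):
>> userf already receives the global element number ieg, so rather than
>> looping over the local elements and converting with lglel, I should map
>> ieg back to the local element number and use that to index the locally
>> dimensioned array, something like:
>>
>>       integer e
>>       e = gllel(ieg)                  ! global-to-local element map
>>       ffx = new_array(ix,iy,iz,e)     ! local index stays within nelv
>>
>> That would also explain why the 6-element test runs in serial but not in
>> parallel, since eg can exceed nelv on a given rank.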
>>
>> Thanks,
>> Saikat
>>
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
>