[Nek5000-users] Parallel issue: Inflow conditions from external files
nek5000-users at lists.mcs.anl.gov
Tue Sep 30 06:38:23 CDT 2014
Thank you Paul. It works!

About your first proposal: I saved the file in ASCII .fld format, but
I cannot restart my simulation unless the file is a .f0000 file. I set
param(67)=0, but the header does not seem to be recognized. It should
work, shouldn't it? Or is there something I'm missing in the restart
options when running in parallel?
Best
SL
On 27-09-2014 16:21, nek5000-users at lists.mcs.anl.gov wrote:
> OK... regarding my suggestion about .f0000/.fld, it's actually not the
> correct approach for BCs (which I now see was the original request)...
> Sorry about that!
>
> The code snippet still holds, but again, some sophistication is
> required, because you really need to understand which elements are to
> receive boundary conditions, etc. Even in serial this is a nontrivial
> issue, because the approach varies from one mesh to the next. But
> hopefully the code I sent will point you in the right direction
> concerning the parallel approach.
>
> Best, Paul
>
> ________________________________________
> From: nek5000-users-bounces at lists.mcs.anl.gov
> [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of
> nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
> Sent: Saturday, September 27, 2014 8:31 AM
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Parallel issue: Inflow conditions from
> external files
>
> Dear All,
>
> You need to understand something about distributed-memory parallel
> computing to be able to read files.
>
> My suggestion would be to write your file in the .fld or .f0000
> format and restart from that, since Nek provides a parallel read.
>
> Otherwise, you have to think carefully about the reading process:
> whether you have enough memory per node for a single process to read
> the file, whether it will take so long that you need parallel I/O,
> etc. There is no one-size-fits-all solution (save for the above
> suggestion), because for small files it's possible to write something
> in 10 minutes, whereas for large files much more sophistication is
> required to meet the time and memory constraints.
>
> If you want to read a small file, you would do something like:
>
>       common /scrns/ u(lx1*ly1*lz1*lelg),work(lx1*ly1*lz1*lelg)
>       integer e,eg
>
>       if (istep.eq.0) then
>
>          nxyz = nx1*ny1*nz1
>          n    = nelgv*nxyz
>
>          call rzero(u,n)              ! zero-out u on all processors
>
>          if (nid.eq.0) then
>             read(44,*) (u(k),k=1,n)   ! only rank 0 reads the file
>          endif
>
>          call gop(u,work,'+  ',n)     ! sum across all processors;
>                                       ! now every rank holds the full u
>
>          do e=1,nelv
>             eg = lglel(e)             ! local -> global element number
>             call copy(vx(1,1,1,e),u(1+nxyz*(eg-1)),nxyz)
>          enddo
>
>       endif
>
> ...The danger here is that it is slow (because it is serial) and that
> you may run out of memory, because you are allocating storage that is
> the size of the entire problem (i.e., not size-of-problem divided by P,
> the number of processors).
>
> For small processor counts it might be OK, depending on how big the
> simulation is, etc.
>
> If you take this approach, however, it will not scale when you start
> running
> serious simulations.
>
> Paul
>
> ________________________________________
> From: nek5000-users-bounces at lists.mcs.anl.gov
> [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of
> nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
> Sent: Friday, September 26, 2014 5:52 AM
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Parallel issue: Inflow conditions from
> external files
>
> Hi,
>
> Did you manage to solve the problem? I have the same problem... I am
> able to read velocity data and set it as the initial condition in
> serial, but in parallel it is not working: the velocity is mapped to
> nodes that are not the proper ones.
>
> What is wrong?
>
> Thanks!
> SL
>
>
> On 10-09-2014 13:08, nek5000-users at lists.mcs.anl.gov wrote:
>> Hi all,
>>
>> I have a question on how to read in information from external files as
>> an inflow boundary condition when running in parallel.
>>
>> Similar problem to this one:
>> http://lists.mcs.anl.gov/pipermail/nek5000-users/2010-December/001151.html
>>
>> My geometry is a simple 3D rectangular prism with the inlet on the
>> y-z plane at x=0.
>>
>> I have developed some code that reads a 2D plane of velocity data
>> from another simulation (in this case Fluent) and applies it as the
>> boundary condition in userbc(). I am happy to share the process and
>> the code (Fortran input/output for Nek, and MATLAB to do the
>> interpolation); however, here I am primarily focused on getting it
>> working in parallel.
>>
>> The process works fine in serial - it reads in 3 files (01.dat,
>> 02.dat, 03.dat) that hold the x,y,z velocity (respectively) for each
>> node on the inlet.
>>
>> However, it does not work correctly when running in parallel: looking
>> at the results after a few iterations shows that the velocities are
>> being mapped to nodes that are not all on the inlet. It does compile
>> and run, though.
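[The symptom described above is the classic signature of mixing up local
and global element numbering. A toy Python model of the offset — the
sizes and the contiguous ownership are hypothetical, for illustration
only, not Nek5000 code:]

```python
# Toy model: 8 global elements split over 2 ranks, contiguous ownership
# (an assumed layout for illustration only).
nelgv, P = 8, 2
nelv = nelgv // P                    # local elements per rank

def global_of_local(rank, e):
    """Global element number of 1-based local element e on `rank`."""
    return rank * nelv + e

def local_of_global(rank, eg):
    """Local index of global element eg on its owning rank."""
    return eg - rank * nelv

# In serial (rank 0 owns everything), local and global coincide:
serial_ok = all(global_of_local(0, e) == e for e in range(1, nelv + 1))

# In parallel, rank 1's local element 1 is global element 5, so data
# stored at slot eg in an array of length nelv either lands on the
# wrong node or overflows the array.
```

[This is why the gllel/lglel maps exist: an array dimensioned lelt must
be addressed with the local element number, and a globally indexed
buffer with the global one.]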
>>
>> I'd sincerely appreciate any help in making it work.
>>
>> Warm regards,
>>
>> Tom
>>
>> I have included my .usr file below.
>>
>> C-----------------------------------------------------------------------
>> C nek5000 user-file template
>> C
>> C user specified routines:
>> C - userbc : boundary conditions
>> C - useric : initial conditions
>> C - uservp : variable properties
>> C - userf : local acceleration term for fluid
>> C - userq : local source term for scalars
>> C - userchk: general purpose routine for checking errors etc.
>> C
>> C-----------------------------------------------------------------------
>> subroutine uservp(ix,iy,iz,eg) ! set variable properties
>> include 'SIZE'
>> include 'TOTAL'
>> include 'NEKUSE'
>>
>> integer e,f,eg
>> c e = gllel(eg)
>>
>> udiff = 0.0
>> utrans = 0.0
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>> subroutine userf(ix,iy,iz,eg) ! set acceleration term
>> c
>> c Note: this is an acceleration term, NOT a force!
>> c Thus, ffx will subsequently be multiplied by rho(x,t).
>> c
>> include 'SIZE'
>> include 'TOTAL'
>> include 'NEKUSE'
>>
>> integer e,f,eg
>>
>> ffx=0
>> ffy=0
>> ffz=0
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>> subroutine userq(ix,iy,iz,eg) ! set source term
>> include 'SIZE'
>> include 'TOTAL'
>> include 'NEKUSE'
>>
>> integer e,f,eg
>> c e = gllel(eg)
>>
>> qvol = 0.0
>> source = 0.0
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>>       subroutine userbc(ix,iy,iz,iside,eg) ! set up boundary conditions
>>
>> c     NOTE: This routine may or may not be called by every processor
>>
>> include 'SIZE'
>> include 'TOTAL'
>> include 'NEKUSE'
>> integer e,eg,ix,iy,iz,NELY,NELZ,egs
>>
>> common /myinlet/ Arrangedu(ly1,lz1,lelt),
>> > Arrangedv(ly1,lz1,lelt),
>> > Arrangedw(ly1,lz1,lelt)
>>
>> e = gllel(eg)
>> ux = Arrangedu(iy,iz,eg)
>> uy = Arrangedv(iy,iz,eg)
>> uz = Arrangedw(iy,iz,eg)
>> temp = 0.0
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>> subroutine useric(ix,iy,iz,eg) ! set up initial conditions
>> include 'SIZE'
>> include 'TOTAL'
>> include 'NEKUSE'
>> integer e,eg
>>
>> ux = 7.235
>> uy = 0.0
>> uz = 0.0
>> temp = 0.0
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>> subroutine userchk()
>> include 'SIZE'
>> include 'TOTAL'
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>>       subroutine usrdat()   ! This routine to modify element vertices
>> include 'SIZE'
>> include 'TOTAL'
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>>       subroutine usrdat2()  ! This routine to modify mesh coordinates
>> include 'SIZE'
>> include 'TOTAL'
>>
>> common /myinlet/ Arrangedu(ly1,lz1,lelt),
>> > Arrangedv(ly1,lz1,lelt),
>> > Arrangedw(ly1,lz1,lelt)
>>
>>       call read_inlet(Arrangedu,Arrangedv,Arrangedw) ! Read in inlet data for BCs
>>
>> return
>> end
>> c-----------------------------------------------------------------------
>> subroutine usrdat3()
>> include 'SIZE'
>> include 'TOTAL'
>>
>> return
>> end
>> C-------------------------------------------------------------------------------
>>       subroutine read_inlet(Arrangedu,Arrangedv,Arrangedw) ! Read in the inlet data
>>
>> include 'SIZE'
>> include 'TOTAL'
>>
>> Integer Max_Nodes, Lun_Dat
>> Parameter( Max_Nodes = 200000, Lun_Dat = 3)
>>
>>       real Var_1( Max_Nodes,lx1), Var_2( Max_Nodes), Var_3( Max_Nodes)
>>       real Var_4( Max_Nodes), Var_5( Max_Nodes), Var_6( Max_Nodes)
>> real Arrangedu(ly1,lz1,lelt)
>> real Arrangedv(ly1,lz1,lelt)
>> real Arrangedw(ly1,lz1,lelt)
>>
>>       integer i_row( Max_Nodes), i_eg( Max_Nodes), i_y( Max_Nodes)
>>
>> real junk
>>
>>       integer i, j, k, l, m, n, imin, imax, N_Nodes, ze, iyz, egs, egz
>> integer fcount
>>
>> Character Line* 72
>>
>>       Character Directory* 57
>>       Parameter( Directory=
>>      > '/home/tom/NEK/meshControl/MultiBox/MultiBox_10/NewInlet4/')
>> c       1234567890123456789012345678901234567890123456789012345678
>>
>> Character Filename* 4
>> Parameter( Filename= '.dat')
>> c 12345
>> character(2) fc
>>
>> C +-----------+
>> C | Execution |
>> C +-----------+
>>
>> 1 format( 1x, a)
>>
>> do fcount=1,3 !for each file 01=u;02=v;03=w.
>>
>> 37 format (I2.2)
>> write (fc,37) fcount ! converting integer to string
>>       print *,'Reading file #: ',fc
>>
>>       open( unit = Lun_Dat, status = 'old', form = 'formatted',
>>      >      err = 2, file = Directory//trim(fc)//Filename)
>>       go to 3
>>  2    write( 6, 1) 'Error Opening file - EXIT'
>>       call exitt
>>
>> 3 read( Lun_Dat, 1) Line
>>
>> N_Nodes = 0
>> do i = 1, Max_Nodes
>>          read( Lun_Dat, 11, end=5) i_row(i), i_eg(i), i_y(i),
>>      >       Var_1(i,1), Var_1(i,2), Var_1(i,3),
>>      >       Var_1(i,4), Var_1(i,5), Var_1(i,6),
>>      >       Var_1(i,7), Var_1(i,8), Var_1(i,9)
>>  11    format( i6, 1x, i5, 1x, i4, 9( 1x, 1P E16.9))
>> N_Nodes = N_Nodes + 1
>>
>> if( fcount .EQ. 1) THEN
>> iyz=i_y(i)
>> egz=i_eg(i)
>> do ze=1,lx1
>> Arrangedu(iyz,ze,egz)= Var_1(i,ze)
>> end do
>> else if( fcount .EQ. 2) THEN
>> iyz=i_y(i)
>> egz=i_eg(i)
>> do ze=1,lx1
>> Arrangedv(iyz,ze,egz)= Var_1(i,ze)
>> end do
>> else if( fcount .EQ. 3) THEN
>> iyz=i_y(i)
>> egz=i_eg(i)
>> do ze=1,lx1
>> Arrangedw(iyz,ze,egz)= Var_1(i,ze)
>> end do
>> end if
>>
>> end do
>> write( 6, 1) ' '
>>       write( 6, 1) '     WARNING - Max nodes exceeded'
>> write( 6, 1) ' '
>>  5    write( 6, 6) N_Nodes
>>  6    format( ' Number of nodes read = ', I8)
>>
>>       close( Lun_Dat)
>>
>>       end do
>>
>> end
>> C-------------------------------------------------------------------------------
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
>