[Nek5000-users] Writing data files with lower lx during runtime

nek5000-users at lists.mcs.anl.gov
Sat Jan 16 08:18:53 CST 2010


Hi Stefan,

Thanks!
The usrchk I posted would only dump the high-res field once, at the end 
of the run. You are right, though, that when I go into production mode 
with the simulation, I will dump both high and low res.
Limited resources are the main reason, from my point of view. If we did 
the interpolation on our own equipment, it would take an estimated two 
weeks just to read in the high-res data for my simulation (one I/O node, 
storage connected to the cluster via Ethernet). Doing it on the 
supercomputer, potentially using Nek's parallel I/O capabilities, would 
certainly be faster, but I did not want to spend precious CPU time on 
something that can be done while the data is still in memory at run time.

Educating myself was another aspect - I'd like to get more familiar with 
Nek, so I wanted to see how it works. And after you said it would be 
straightforward to implement, I thought I'd give it a shot.

Markus


nek5000-users at lists.mcs.anl.gov wrote:
> Well done Markus!
> 
> Your implementation looks good to me. As I said last time, it's straightforward to implement.
> I will do something similar when I implement it in the repo version. However, my plan is to redesign the API of mfo_outfld().
> 
> What I don't understand is the following:
> You'll dump the high-res AND the low-res fld files, right? If so, why do you need low-res GLL fld files for your postprocessing? You can always restart Nek using the high-res fld files!
> 
> I am sure I am missing something.
> 
> 
> Cheers,
> Stefan
> 
> 
> 
> -----Original Message-----
> From: nek5000-users-bounces at lists.mcs.anl.gov on behalf of nek5000-users at lists.mcs.anl.gov
> Sent: Sat 1/16/2010 01:15
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Writing data files with lower lx during runtime
>  
> Hi,
> 
> To dump out files on a coarser GLL mesh during runtime, I modified subroutine
> mfo_outfld in prepost.f (find attached), and use map_m_to_n to interpolate.
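> 
> The essence of the change is a per-element interpolation of each field
> just before it is written. As a minimal sketch of the idea (this is not
> the attached code - the routine name coarsen_fld and the arrays are
> placeholders, and I'm assuming map_m_to_n's usual
> (out,na,in,nb,if3d,work,ldw) argument order):
> 
>       subroutine coarsen_fld(uc,u,lxnew)   ! sketch only
>       include 'SIZE'
>       include 'TOTAL'
>       integer lxnew               ! new pts/direction, 2 < lxnew <= lx1
>       real u (lx1*ly1*lz1,lelt)   ! field on the original GLL mesh
>       real uc(lxnew**ldim,lelt)   ! same field on the coarser GLL mesh
>       parameter (ldw=2*lx1*ly1*lz1)
>       real w(ldw)                 ! work array for map_m_to_n
>       integer e
> 
>       do e=1,nelt                 ! interpolate element by element
>          call map_m_to_n(uc(1,e),lxnew,u(1,e),lx1,if3d,w,ldw)
>       enddo
>       return
>       end
> 
> This is roughly what the modified mfo_outfld does for each field before
> the write, with the output sizes set to lxnew instead of lx1.
> 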
> Then I add these lines to usrchk:
> 
>       common lxnew
>       integer useriostep,lxnew
>       lxnew=8
> c---  If lxnew is common, integer and > 2, and if param(66)=6 and
> c---  ifreguo=.false., a modification in subroutine mfo_outfld(prefix)
> c---  in prepost.f will be activated when dumping data (either through
> c---  iostep, outpost or prepost) that maps all fields onto a coarser
> c---  GLL mesh where lxnew-1 is the new polynomial order.
> 
>       useriostep=10        ! Delta time step for interpolated output
>       param(66)=6
> 
> c-----Parameters for IO, etc.
>       if ((modulo(istep,useriostep).eq.0).and.(istep.gt.1)) then
>         if (istep.gt.useriostep) ifxyo = .false. ! Turn off xyz output
>         ifreguo = .false.  ! Maintain GLL mesh for later (spectral)
>                            ! postprocessing
>         call prepost(.true.,'coa')       ! dump results into file
>       endif
> c-----The following is done for the last dump
>       iostep = param(11)   ! Dump on original grid at the very last
>                            ! time step (to have a restart solution)
>       ifxyo = .true.       ! Ensure that the geometry is dumped with
>                            ! the restart file after the last time step
>       param(66)=4          ! Output last file on the uncoarsened grid
> 
> The last three lines ensure that at the end of the run, a data file is dumped
> at the "original"/fine resolution to serve as a restart point.
> 
> The attached pictures show a comparison of the stagnation region of an internal
> flow (temperature contours and velocity vectors) on the original and a coarser
> (lx1=8 instead of 14) grid. In both cases the vectors are interpolated onto an
> evenly spaced grid by VisIt.
> 
> As far as the motivation goes, I'd like to give some background on what we
> intend to do ("we" is Dr. Andrew Duggleby's research group (FT2L) at Texas
> A&M). We are running simulations of high Reynolds number, high free-stream
> turbulence intensity flow in complex geometries (turbine blades) with heat
> transfer. The grid for my problem has about 250 million points. To analyze
> the data, we have a set of in-house proper orthogonal decomposition
> routines. They all require time correlations of snapshots (up to 1000 data
> samples), which have to be read into memory multiple times. Since we only
> look at the lower-order representation, a coarser grid than the one the
> simulation runs on should suffice for that, and it makes these many
> file-reading operations feasible on the resources we have. Since I intend
> to do the postprocessing with Nek, I wanted to maintain the GLL mesh
> distribution.
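> 
> For reference, the time correlation at the heart of the snapshot method
> is (in its standard form, using a mass-weighted inner product)
> 
>    C(i,j) = (1/M) * ( u_i , u_j ),    i,j = 1,...,M
> 
> where u_1,...,u_M are the M snapshot fields; the POD modes are then
> linear combinations of the snapshots weighted by the eigenvectors of C.
> Every entry of C pairs two snapshots, so unless all M fit in memory at
> once, each file gets read many times - hence the payoff from smaller
> files.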
> 
> I would be happy to hear any comments,
> Markus Schwaenen
> 
> http://www1.mengr.tamu.edu/FT2L
> 
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
> 


