From nek5000-users at lists.mcs.anl.gov Mon Feb 1 19:28:14 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 01 Feb 2010 20:28:14 -0500 Subject: [Nek5000-users] Conjugate Heat Transfer Message-ID: <4B677FAE.4070501@vt.edu> Hi, I have experienced some difficulties with setting up a 2D conjugate heat transfer problem. When generating the .rea file with prenek, I create a mesh of, say, 60 elements and define, say, 20 of those as solids (material group 1 instead of 0). Running this .rea file through genmap (even when it is converted to re2) does not work, since the fluid boundary condition section of the .rea file contains entries for elements (the solids) without connectivity/boundary information. Deleting those entries (4*20 lines for 2D) lets genmap accept the .rea file, and a map is created with one section for the fluid elements and one for the solids. Running these files with nek5000, however, resulted in element mismatch errors, no matter what combination of solid elements vs. number of processors I tried. The newest nek5000 svn was used. The solution I found is the following: when one takes the .rea file for conjugate heat transfer (as it came out of prenek) and changes all solid elements back to fluids, a map can be generated from this fluids-only .rea file with genmap, which is essentially left in ignorance of the conjugate heat transfer problem/solid elements. Running this "fluids-only" map with the original "conjugate heat" .rea file (having deleted the solid elements' boundary/connectivity information in its fluid section) works and produces physically correct-looking results. My questions now are the following: -Is this an appropriate way to go, or am I missing some parameters that need to be set? The .rea files come directly out of prenek, so I didn't manually change any parameters. -By generating a map file that does not resemble what's truly going on, do we lose parallel scalability, and if so, is that a considerable penalty? -I had some trouble defining solid elements in 3D with prenek. Whenever I click on an element to change its material group, nothing changes. Is there a fix? -Expanding a 2D conjugate heat transfer problem with n2to3 does not work. All upper-level elements become solids. Is there a fix for this? I apologize for my lengthy postings, Markus From nek5000-users at lists.mcs.anl.gov Mon Feb 1 20:40:59 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 1 Feb 2010 20:40:59 -0600 (CST) Subject: [Nek5000-users] Conjugate Heat Transfer In-Reply-To: <4B677FAE.4070501@vt.edu> References: <4B677FAE.4070501@vt.edu> Message-ID: Hi Markus, We're currently not using the group distinction to segregate fluid/solids. (Though perhaps we should...) It is simply imposed by the fact that the first nelv elements must be fluid, and that the sum total must be thermal (fluid+solid). Thus, nelv \le nelt in the .rea file. A further consequence is that nek will look for only nelv*2*ndim bcs for the fluid. At present, prenek is not set up to supply the correct number of bcs - it requires intervention by another code or editing by hand. This is all subject to change, but the truth is we just resurrected conjugate heat transfer for parallel processing in the past 12 months and, so far, have been the only customers. I believe, however, that if a .rea file is set up this way, genmap and nek5000 will work properly.
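For concreteness, a rough sketch of how the counts would look for the 60-element example above (40 fluid + 20 solid). The header strings below are only illustrative and may differ slightly between prenek versions; it is the counts that matter:

   **MESH DATA**
       60  2  40        NEL,NDIM,NELV   (60 thermal elements in total, the first 40 of which are fluid)
       ... element blocks: the 40 fluid elements listed first, then the 20 solid elements ...
   ***** FLUID BOUNDARY CONDITIONS *****
       ... nelv*2*ndim = 40*4 = 160 lines (the 4*20 solid-element lines are the ones to delete) ...
   ***** THERMAL BOUNDARY CONDITIONS *****
       ... nelt*2*ndim = 60*4 = 240 lines ...
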
(I've used it both for 2D and 3D runs and all seems to be working ---- I never trust these things, however, until I've done it for at least 10 different cases...). We can do some work on prenek to get it into shape for you, but I'm presently not very happy with the interface and need to kick around a few ideas with users (like you) and developers. I hope this helps. Paul On Mon, 1 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I have experienced some difficulties with setting up a 2D conjugate heat > transfer problem. When generating the .rea file with prenek, I create a mesh > of, say 60 elements and define, say, 20 of those as solids (material group 1 > instead of 0). Trying to run this .rea file in genmap (even when it is > converted to re2) does not work since the fluid boundary condition section of > the .rea file contains entries for elements (the solids) without > connectivity/boundary information. Deleting those entries (4*20 lines for 2D) > lets genmap accept the rea file and a map is created with one section for the > fluids elements and one for the solids. > Running these files with nek5000, however, resulted in element mismatch > errors, no matter what combination of solid elements vs. number of processors > I tried. The newest nek5000 svn was used. > > The solution I found is the following: when one takes the .rea file for > conjugate heat transfer (as it came out of prenek) and changes all solid > elements back to fluids, a map can be generated from this only-fluids-.rea > file with genmap that is essentially left in ignorance of the conjugate heat > transfer problem/solid elements. Running this "fluids-only" map with the > original "conjugate heat"-.rea file (having deleted the solid elements > boundary/connectivity information in its fluids section) will work and > produces physically correct looking results. > > My questions now are the following: > -Is this an appropriate way to go or am I missing some parameters that need > to be set? The rea files come directly out of prenek, so I didn't manually > change any parameters. > -By generating a map file that does not resemble what's truly going on, do we > loose parallel scalability and if so, is that a considerable penalty? > -I had some trouble defining solid elements in 3D with prenek. Whenever I > click on an element to change its material group, it does not do it. Is there > a fix? > -Expanding a 2D conjugate heat transfer problem with n2to3 does not work. All > upper level elements become solids. Is there a fix for this? > > I apologize for my lengthy postings, > Markus > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Tue Feb 2 08:07:55 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 2 Feb 2010 08:07:55 -0600 Subject: [Nek5000-users] Scalability Message-ID: <1776b6d81002020607sb833933jdf4faa13efa61cce@mail.gmail.com> Hi, When I am trying to run a big simulation (Nele > 120,000) , on certain clusters / supercomputers, Nek5000 crashes even though the physical memory used is very less. For example, on certain machines, with Nele = 120,000, we couldn't go more than lx1 of 7 while on different architecture we could go up to lx1 = 12. Is this issue concerned with compilers or the architecture of the machine ? 
Thanks Shriram Jagannathan Texas A&M University -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Feb 2 08:32:59 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 2 Feb 2010 15:32:59 +0100 Subject: [Nek5000-users] Scalability In-Reply-To: <1776b6d81002020607sb833933jdf4faa13efa61cce@mail.gmail.com> References: <1776b6d81002020607sb833933jdf4faa13efa61cce@mail.gmail.com> Message-ID: Hi Shriram, what happens when Nek crashes? Any chance to post a logfile? What is the static size of your Nek executable (e.g. size nek5000)? Stefan On Feb 2, 2010, at 3:07 PM, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > When I am trying to run a big simulation (Nele > 120,000) , on certain clusters / supercomputers, Nek5000 crashes even though the physical memory used is very less. For example, on certain machines, with Nele = 120,000, we couldn't go more than lx1 of 7 while on different architecture we could go up to lx1 = 12. Is this issue concerned with compilers or the architecture of the machine ? > > Thanks > Shriram Jagannathan > Texas A&M University > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Tue Feb 2 08:42:01 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 2 Feb 2010 08:42:01 -0600 (CST) Subject: [Nek5000-users] Scalability In-Reply-To: <1776b6d81002020607sb833933jdf4faa13efa61cce@mail.gmail.com> References: <1776b6d81002020607sb833933jdf4faa13efa61cce@mail.gmail.com> Message-ID: Shriram, Total memory usage _per_processor_ scales as: memory/proc ~ [ C * lelt * lx1*ly1*lz1 + c * lelg ] * (8 bytes/word) where C is a moving target, but is somewhere between 200 and 400, and c ~ 2. (The code is a bit fat right now and needs a good clean-up, which is slated for next fall.) Also, C can be influenced by some parameters in the SIZEu file, namely, mxprev, lgmres, ldimt. If your problem has nelt=120,000 elements and you're running on 1000 processors, you should set lelt=120 and lelg=120000. This would give you the smallest possible memory footprint for that case. (As a rough example: with 120,000 elements on only 128 processors you need lelt of at least about 940, and with lx1=ly1=lz1=12 and C~300 the first term alone is already roughly 300*940*1728*8 bytes, i.e. close to 4 GB per process - more than most clusters provide per core.) Note that some machines, such as BG/L or BG/P, have very little memory per MPI process (e.g., 256 MB on BG/L and 512 MB on BG/P, of which only about 80% is available for the simulation). Does this help? Paul On Tue, 2 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > When I am trying to run a big simulation (Nele > 120,000) , on certain > clusters / supercomputers, Nek5000 crashes even though the physical memory > used is very less. For example, on certain machines, with Nele = 120,000, we > couldn't go more than lx1 of 7 while on different architecture we could go > up to lx1 = 12. Is this issue concerned with compilers or the architecture > of the machine ? > > Thanks > Shriram Jagannathan > Texas A&M University > From nek5000-users at lists.mcs.anl.gov Wed Feb 3 05:16:38 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 03 Feb 2010 12:16:38 +0100 Subject: [Nek5000-users] Number of processors Message-ID: <20100203121638.gakrshr8ggscggw4@webmail.uni-karlsruhe.de> Hello, just a short question: how can I define the number of processors? I only found in the SIZEu file the parameter LP, for the maximum number of processors. Thx a lot.
Fred From nek5000-users at lists.mcs.anl.gov Wed Feb 3 05:35:54 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 05:35:54 -0600 (CST) Subject: [Nek5000-users] Number of processors In-Reply-To: <20100203121638.gakrshr8ggscggw4@webmail.uni-karlsruhe.de> References: <20100203121638.gakrshr8ggscggw4@webmail.uni-karlsruhe.de> Message-ID: Fred, LP is the upper bound on number of processors. The actual number is determined at run-time by your mpirun or mpiexec command. Are you running on with a queuing system? or just on a local cluster. For a local cluster, I use the following script (nekbmpi, which runs the job in background..) echo $1 > SESSION.NAME echo `pwd`'/' >> SESSION.NAME touch $1.rea rm -f logfile rm -f ioinfo mv $1.log.$2 $1.log1.$2 mv $1.sch $1.sch1 mpiexec -np $2 nek5000 > $1.log.$2 & ln $1.log.$2 logfile Usage: nekbmpi myjob 8 would run myjob.rea on 8 processors and put the results into myjob.log.8 I have a similar script for a variety of queuing systems... I find they usually have to be tailored the particular queuing system but it makes it very easy to use. I can send you one of these if you'd like. Paul On Wed, 3 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hallo, > > just a short quastion: How can I define the number of processors?? I only > found in the SIZEu-file the parameter LP, for the maximum number of > processors.. > > Thx a lot. > > Fred > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Wed Feb 3 11:22:06 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 11:22:06 -0600 Subject: [Nek5000-users] Nek5000-users Digest, Vol 12, Issue 1 In-Reply-To: References: Message-ID: <1776b6d81002030922s5cc592f7yab2e3e4fe9d5f0ca@mail.gmail.com> Hi Stefan, I don't have the log file with me as we later submitted the job to a supercomputer. I shall post it if I get to notice the same problem again. Thanks Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 3 11:24:16 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 18:24:16 +0100 Subject: [Nek5000-users] Nek5000-users Digest, Vol 12, Issue 1 In-Reply-To: <1776b6d81002030922s5cc592f7yab2e3e4fe9d5f0ca@mail.gmail.com> References: <1776b6d81002030922s5cc592f7yab2e3e4fe9d5f0ca@mail.gmail.com> Message-ID: <85A6EAE7-CEB6-4AF7-8F16-FA8983AD90BA@lav.mavt.ethz.ch> How about the static size of you exe? On Feb 3, 2010, at 6:22 PM, nek5000-users at lists.mcs.anl.gov wrote: > Hi Stefan, > > I don't have the log file with me as we later submitted the job to a supercomputer. I shall post it if I get to notice the same problem again. > > Thanks > Shriram > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Feb 3 11:37:23 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 11:37:23 -0600 Subject: [Nek5000-users] Nek5000-users Digest, Vol 12, Issue 1 In-Reply-To: References: Message-ID: <1776b6d81002030937r4bb711eeued1919e515e3595c@mail.gmail.com> Hi Paul, It certainly helps me in setting the parameters lelt and lelg. 
Thanks. Just curious, is there a minimum number of processors that I need to have for running a simulation with 120k elements ? For example, I was not able to go more than lx1 of 7 on cluster with 128 processors while on other machine I could go upto lx1 = 12 ( With 128 processors) . Thanks Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 3 12:53:37 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 12:53:37 -0600 (CST) Subject: [Nek5000-users] Nek5000-users Digest, Vol 12, Issue 1 In-Reply-To: <1776b6d81002030937r4bb711eeued1919e515e3595c@mail.gmail.com> References: <1776b6d81002030937r4bb711eeued1919e515e3595c@mail.gmail.com> Message-ID: Hi Shriram, It depends on the amount of memory on each processor. Do you know what that figure is? Paul On Wed, 3 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > It certainly helps me in setting the parameters lelt and lelg. Thanks. > Just curious, is there a minimum number of processors that I need to have > for running a simulation with 120k elements ? For example, I was not able to > go more than lx1 of 7 on cluster with 128 processors while on other machine > I could go upto lx1 = 12 ( With 128 processors) . > > Thanks > Shriram > From nek5000-users at lists.mcs.anl.gov Wed Feb 3 14:46:00 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 3 Feb 2010 21:46:00 +0100 Subject: [Nek5000-users] Nek5000-users Digest, Vol 12, Issue 1 In-Reply-To: References: <1776b6d81002030937r4bb711eeued1919e515e3595c@mail.gmail.com> Message-ID: <8C97D21F-C4FE-41E1-A715-B1E4A8BF6B2E@lav.mavt.ethz.ch> Assuming 2GB/core are available a run with lx1=12 and lelg=120k using 128 processors might be steep. In fact it will not even compile with the default memory model (sum of static data items < 2GB)! In this case you have to lower lelt and hence use more processors. However an lx1=8 run (just did a testrun) should work with 2GB/core assuming {ldimt=1,mxprev=20,lgmres=40}. Stefan On Feb 3, 2010, at 7:53 PM, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Shriram, > > It depends on the amount of memory on each processor. > Do you know what that figure is? > > Paul > > On Wed, 3 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Hi Paul, >> >> It certainly helps me in setting the parameters lelt and lelg. Thanks. >> Just curious, is there a minimum number of processors that I need to have >> for running a simulation with 120k elements ? For example, I was not able to >> go more than lx1 of 7 on cluster with 128 processors while on other machine >> I could go upto lx1 = 12 ( With 128 processors) . >> >> Thanks >> Shriram >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Feb 4 14:42:08 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 4 Feb 2010 15:42:08 -0500 Subject: [Nek5000-users] Conjugate Heat Transfer In-Reply-To: References: <4B677FAE.4070501@vt.edu> Message-ID: <1265316128.4b6b312032c51@webmail.vt.edu> Dear Dr. Fischer, thank you for the reply. 
We are currently testing the following procedure that would not make it necessary to change prenek: 1) generate a mesh in prenek with all elements that will be solids 2) generate a mesh in prenek (matching the coordinate system of 1) with only the fluid elements 3) merge all elements from 1) into 2), which should ensure that the solid elements come behind the fluid ones in the order in the rea file. Then we would only need to change the number of elements in the header lines of the element section, correct? Michael Meador, another student in Dr. Duggleby's group, is currently investigating how to set up a basic conjugate heat transfer experiment (hot jet impinging on conducting plate with finite thickness). We would like to compare that data against a similar problem to be computed in nek5000. In case we'll get something worth sharing, it'll be posted. Thanks, Markus Quoting nek5000-users at lists.mcs.anl.gov: > > Hi Markus, > > We're currently not using the group distinction to segragate fluid/solids. > (Though perhaps we should...) > > It is simply imposed by the fact that the first nelv elements must be > fluid, and that the sum total must be thermal (fluid+solid). > > Thus, nelv \le nelt in the .rea file. > > A further consequence is that nek will look for only nelv*2*ndim bcs > for the fluid. > > At present, prenek is not set up to supply the correct number of bcs - > it requires intervention by another code or editing by hand. This is > all subject to change, but the truth is we just resurrected conjugate > heat transfer for parallel processing in the past 12 months and, so far, > have been the only customers. I believe, however, that if a .rea file > is set up this way, that genmap and nek5000 will work properly. (I've > used it both for 2D and 3D runs and all seems to be working ---- I never > trust these things, however, until I've done it for at least 10 > different cases...). > > We can do some work on prenek to get it into shape for you, but I'm > presently not very happy with the interface and need to kick around > a few ideas with users (like you) and developers. > > I hope this helps. > > Paul > > > > > On Mon, 1 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > > > Hi, > > > > I have experienced some difficulties with setting up a 2D conjugate heat > > transfer problem. When generating the .rea file with prenek, I create a > mesh > > of, say 60 elements and define, say, 20 of those as solids (material group > 1 > > instead of 0). Trying to run this .rea file in genmap (even when it is > > converted to re2) does not work since the fluid boundary condition section > of > > the .rea file contains entries for elements (the solids) without > > connectivity/boundary information. Deleting those entries (4*20 lines for > 2D) > > lets genmap accept the rea file and a map is created with one section for > the > > fluids elements and one for the solids. > > Running these files with nek5000, however, resulted in element mismatch > > errors, no matter what combination of solid elements vs. number of > processors > > I tried. The newest nek5000 svn was used. > > > > The solution I found is the following: when one takes the .rea file for > > conjugate heat transfer (as it came out of prenek) and changes all solid > > elements back to fluids, a map can be generated from this only-fluids-.rea > > file with genmap that is essentially left in ignorance of the conjugate > heat > > transfer problem/solid elements. 
Running this "fluids-only" map with the > > original "conjugate heat"-.rea file (having deleted the solid elements > > boundary/connectivity information in its fluids section) will work and > > produces physically correct looking results. > > > > My questions now are the following: > > -Is this an appropriate way to go or am I missing some parameters that need > > to be set? The rea files come directly out of prenek, so I didn't manually > > change any parameters. > > -By generating a map file that does not resemble what's truly going on, do > we > > loose parallel scalability and if so, is that a considerable penalty? > > -I had some trouble defining solid elements in 3D with prenek. Whenever I > > click on an element to change its material group, it does not do it. Is > there > > a fix? > > -Expanding a 2D conjugate heat transfer problem with n2to3 does not work. > All > > upper level elements become solids. Is there a fix for this? > > > > I apologize for my lengthy postings, > > Markus > > _______________________________________________ > > Nek5000-users mailing list > > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Thu Feb 4 15:02:11 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 4 Feb 2010 16:02:11 -0500 Subject: [Nek5000-users] Domain boundary visualization in VisIt Message-ID: <1265317331.4b6b35d3c275e@webmail.vt.edu> Hi all, hoping not to have found a solution to a non-existing problem, I'd like to post my idea about how to visualize domain boundaries based on their type in VisIt. That'll come in handy when I want to generate a picture to describe with colors/gray levels what boundary conditions were used in the simulation. First, I add the following lines to the .usr file: " real bcscal(lx1,ly1,lz1,lelt) !Scalar for boundary condition visualization integer e, f ntot = nx1*ny1*nz1*nelv call rzero(t(1,1,1,1,2),ntot) call rzero(bcscal,ntot) c-----Set scalar values for different boundaries nface = 2*ndim do e=1,nelv do f=1,nface if (cbc(f,e,1).eq.'P ') then call facev(bcscal,e,f,1.0,lx1,ly1,lz1) endif if (cbc(f,e,1).eq.'O ') then call facev(bcscal,e,f,2.0,lx1,ly1,lz1) endif if (cbc(f,e,1).eq.'W ') then call facev(bcscal,e,f,3.0,lx1,ly1,lz1) endif if (cbc(f,e,1).eq.'v ') then call facev(bcscal,e,f,4.0,lx1,ly1,lz1) endif enddo enddo call copy(t(1,1,1,1,2),bcscal,ntot) " This will assign different values (1 for periodic, 2 for outflow, ...) to faces on boundaries of certain types. " ifpsco(1) = .true. ifreguo = .false. call prepost(.true.,'bcs') " This will turn on output of scalar 1 (t(1,1,1,1,2)) and write it out on a GLL mesh with filename prefix 'bcs'. In VisIt, I then define equations (Controls->Expressions...) that create new scalar variables (wall, periodic, ...), for example for walls: Variable name: walls " if(eq(s1,3), walls=1, walls=0) " I then use a "Contour" plot for each variable. 
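A variant I have not tested yet, in case it is useful: the same loop, but with the boundary-code-to-value mapping kept in a small list, so that supporting another boundary type only means adding one entry. This is just a sketch; it assumes it sits in the same place as the snippet above (a .usr routine with the standard SIZE/TOTAL/NEKUSE includes) and that ldimt >= 2 in SIZEu so that the passive-scalar slot t(1,1,1,1,2) exists. It only uses routines that already appear above (rzero, facev, copy):
"
      real bcscal(lx1,ly1,lz1,lelt)  !Scalar for boundary condition visualization
      integer e, f, i
      character*3 bclist(4)
      save bclist
c-----A face with boundary code bclist(i) is marked with the value i
      data bclist /'P','O','W','v'/

      ntot = nx1*ny1*nz1*nelv
      call rzero(t(1,1,1,1,2),ntot)
      call rzero(bcscal,ntot)

      nface = 2*ndim
      do e=1,nelv
      do f=1,nface
         do i=1,4
            if (cbc(f,e,1).eq.bclist(i))
     $         call facev(bcscal,e,f,real(i),lx1,ly1,lz1)
         enddo
      enddo
      enddo

      call copy(t(1,1,1,1,2),bcscal,ntot)
"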
I'd be glad to hear some feedback, Markus From nek5000-users at lists.mcs.anl.gov Thu Feb 4 15:10:09 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 4 Feb 2010 22:10:09 +0100 Subject: [Nek5000-users] Domain boundary visualization in VisIt In-Reply-To: <1265317331.4b6b35d3c275e@webmail.vt.edu> References: <1265317331.4b6b35d3c275e@webmail.vt.edu> Message-ID: <252D4D57-F2AE-432F-A0FC-5E91709ECF1E@lav.mavt.ethz.ch> Yes - I do something similar to check my boundary conditions! btw: postnek has a feature to display BC (but I prefer VisIt for other reasons). Cheers, Stefan On Feb 4, 2010, at 10:02 PM, nek5000-users at lists.mcs.anl.gov wrote: > Hi all, > > hoping not to have found a solution to a non-existing problem, I'd like to post > my idea about how to visualize domain boundaries based on their type in VisIt. > That'll come in handy when I want to generate a picture to describe with > colors/gray levels what boundary conditions were used in the simulation. > > First, I add the following lines to the .usr file: > " > real bcscal(lx1,ly1,lz1,lelt) !Scalar for boundary condition visualization > integer e, f > > ntot = nx1*ny1*nz1*nelv > call rzero(t(1,1,1,1,2),ntot) > call rzero(bcscal,ntot) > > c-----Set scalar values for different boundaries > nface = 2*ndim > do e=1,nelv > do f=1,nface > if (cbc(f,e,1).eq.'P ') then > call facev(bcscal,e,f,1.0,lx1,ly1,lz1) > endif > if (cbc(f,e,1).eq.'O ') then > call facev(bcscal,e,f,2.0,lx1,ly1,lz1) > endif > if (cbc(f,e,1).eq.'W ') then > call facev(bcscal,e,f,3.0,lx1,ly1,lz1) > endif > if (cbc(f,e,1).eq.'v ') then > call facev(bcscal,e,f,4.0,lx1,ly1,lz1) > endif > enddo > enddo > > call copy(t(1,1,1,1,2),bcscal,ntot) > " > This will assign different values (1 for periodic, 2 for outflow, ...) to faces > on boundaries of certain types. > " > ifpsco(1) = .true. > ifreguo = .false. > call prepost(.true.,'bcs') > " > This will turn on output of scalar 1 (t(1,1,1,1,2)) and write it out on a GLL > mesh with filename prefix 'bcs'. > > In VisIt, I then define equations (Controls->Expressions...) that create new > scalar > variables (wall, periodic, ...), for example for walls: > Variable name: walls > " > if(eq(s1,3), walls=1, walls=0) > " > I then use a "Contour" plot for each variable. > > I'd be glad to hear some feedback, > Markus > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Feb 8 21:34:43 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 08 Feb 2010 22:34:43 -0500 Subject: [Nek5000-users] LES and heat transfer in channel Message-ID: <4B70D7D3.2050407@vt.edu> Hi, I am running a channel LES simulation (periodic in streamwise and spanwise, length=width=9*channel height) in nek with the following parameters: -LES model: Filter last 3 modes, 5% filter weight; lx1=14; ld1=20 -Piece of the rea file " 1.00000 DENSITY 0.31250E-04 VISCOS 1.00000 RHOCP 1.00000 CONDUCT 4.00000 p99=3 ==> dealiasing turned on T IFFLOW T IFHEAT T IFTRAN T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. 
fields) F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) F IFAXIS F IFSTRS F IFSPLIT F IFMGRID F IFMODEL F IFKEPS F IFMVBD T IFCHAR " -Reynolds number based on 2*channel height: 12800 -Constant heat flux on walls -The streamwise boundary is a recycling one, I set it up similar to the turbJet example, but I added temperature recycling. To avoid "over-heating", I subtract the added thermal energy from the temperature when recycling -Some material parameters are set in the .usr file: " param(8) = param(2)/0.71 ! Prandt=0.71 cpfld(2,1) = param(8) ! conductivity " When I compare the spanwise averaged and time averaged Nusselt number on a wall, it is fairly constant with streamwise direction (which is good), but about 50% higher then the experimental one from the correlation Nu=0.022*Re^0.8*Pr^0.5 This might be because I am only 2.5 flow throughs away from the initial condition, but I wanted to make sure that everything is right before putting in more computing hours. Does the above look OK? Is there another explanation for the heat transfer coefficient being too high? Thanks, Markus From nek5000-users at lists.mcs.anl.gov Mon Feb 8 22:42:38 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 8 Feb 2010 22:42:38 -0600 (CST) Subject: [Nek5000-users] LES and heat transfer in channel In-Reply-To: <4B70D7D3.2050407@vt.edu> References: <4B70D7D3.2050407@vt.edu> Message-ID: Hi Markus, How do you subtract off the excess heat ? The std way would be something like: subroutine userq (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,eg common /fluxa/ flux_area,gamma_t e = gllel(eg) qvol = -vx(ix,iy,iz,e)*gamma_t return end Where gamma_t is chosen to balance your net influx through q" at the surface. Here, what one is effectively doing is (correctly) postulating: T(X,t) = @(X,t) + gamma * x where gamma is a constant and theta is periodic: @(X,t) == @(X+L,t) Then, nek solves for the periodic function @. I guess w/ your recycle bcs you're doing the same thing -- in which case you wouldn't need the qvol bit above, assuming you subtract off gamma*L from your incoming temperature. Also, do you have fixed flow rate? Also -- you might want to use PN-PN (lx2=lx1, etc), as we (Stefan) found this to give much better results for LES. Also, it could be that filtering really does not work well as an SGS for the passive scalar eqn. (It has the right look and feel for momentum equation, particularly when one considers the effects on the spectra; but I've not delved very deep in the case of temperature yet...). Paul On Mon, 8 Feb 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I am running a channel LES simulation (periodic in streamwise and spanwise, > length=width=9*channel height) in nek with the following parameters: > -LES model: Filter last 3 modes, 5% filter weight; lx1=14; ld1=20 > -Piece of the rea file > " > 1.00000 DENSITY > 0.31250E-04 VISCOS > 1.00000 RHOCP > 1.00000 CONDUCT > 4.00000 p99=3 ==> dealiasing turned on > T IFFLOW > T IFHEAT > T IFTRAN > T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. fields) > F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) > F IFAXIS > F IFSTRS > F IFSPLIT > F IFMGRID > F IFMODEL > F IFKEPS > F IFMVBD > T IFCHAR > " > -Reynolds number based on 2*channel height: 12800 > -Constant heat flux on walls > -The streamwise boundary is a recycling one, I set it up similar to the > turbJet example, but I added temperature recycling. 
To avoid "over-heating", > I subtract the added thermal energy from the temperature when recycling > -Some material parameters are set in the .usr file: > " > param(8) = param(2)/0.71 ! Prandt=0.71 > cpfld(2,1) = param(8) ! conductivity > " > > When I compare the spanwise averaged and time averaged Nusselt number on a wall, it is fairly constant with streamwise direction (which is good), but about 50% higher then the experimental one from the correlation > Nu=0.022*Re^0.8*Pr^0.5 > This might be because I am only 2.5 flow throughs away from the initial condition, but I wanted to make sure that everything is right before putting in more computing hours. > Does the above look OK? Is there another explanation for the heat transfer coefficient being too high? > > Thanks, > Markus > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Tue Feb 9 05:28:23 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 9 Feb 2010 12:28:23 +0100 Subject: [Nek5000-users] LES and heat transfer in channel In-Reply-To: <4B70D7D3.2050407@vt.edu> References: <4B70D7D3.2050407@vt.edu> Message-ID: Hi Markus, it's hard to say whether something is wrong with your setup or whether you're running into other issues. I would start with a simple DNS of turbulent heat transfer in a channel, e.g. similar to Kawamura et al., Int. J. Heat and Fluid Flow, 1998. Let's say you do the following simulation: Re_tau = 180 Pr = 0.71 Domain: (12.8/2/6.4) where your reference length is the channel half-height Resolution: 24x12x24 elements using N=9 (this should be pretty well resolved) Thermal BC: uniform heat-flux heating - How do the Nek results compare to the results of Kawamura? I think that's a good way of testing your setup, and after doing this exercise you're sure that everything works correctly (assuming you get the right answers). Then the next step is to use a much coarser resolution together with an SGS model (e.g. the simple filtering and/or dynamic Smagorinsky). - How do the LES results compare to the DNS results? I would not be too surprised if you cannot get good results in your LES. It's well known that turbulent passive scalar mixing behaves quite differently, and a flow SGS model will not do a very good job in this case. Cheers, Stefan On Feb 9, 2010, at 4:34 AM, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I am running a channel LES simulation (periodic in streamwise and spanwise, length=width=9*channel height) in nek with the following parameters: > -LES model: Filter last 3 modes, 5% filter weight; lx1=14; ld1=20 > -Piece of the rea file > " > 1.00000 DENSITY > 0.31250E-04 VISCOS > 1.00000 RHOCP > 1.00000 CONDUCT > 4.00000 p99=3 ==> dealiasing turned on > T IFFLOW > T IFHEAT > T IFTRAN > T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. fields) > F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) > F IFAXIS > F IFSTRS > F IFSPLIT > F IFMGRID > F IFMODEL > F IFKEPS > F IFMVBD > T IFCHAR > " > -Reynolds number based on 2*channel height: 12800 > -Constant heat flux on walls > -The streamwise boundary is a recycling one, I set it up similar to the turbJet example, but I added temperature recycling. To avoid "over-heating", I subtract the added thermal energy from the temperature when recycling > -Some material parameters are set in the .usr file: > " > param(8) = param(2)/0.71 ! 
Prandt=0.71 > cpfld(2,1) = param(8) ! conductivity > " > > When I compare the spanwise averaged and time averaged Nusselt number on a wall, it is fairly constant with streamwise direction (which is good), but about 50% higher then the experimental one from the correlation > Nu=0.022*Re^0.8*Pr^0.5 > This might be because I am only 2.5 flow throughs away from the initial condition, but I wanted to make sure that everything is right before putting in more computing hours. > Does the above look OK? Is there another explanation for the heat transfer coefficient being too high? > > Thanks, > Markus > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Feb 15 09:45:19 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 15 Feb 2010 16:45:19 +0100 Subject: [Nek5000-users] Segmentation fault occuerd Message-ID: <20100215164519.lldvj9g5lwwso4wk@webmail.uni-karlsruhe.de> Hello, I have some trouble to run my simulation... When I start running my simulation, I receive the following error-message (I compiled the run in debugging mode -g and with traceback): forrtl: severe (174): SIGSEGV, segmentation fault occurred Image PC Routine Line Source libc.so.6 00002B76C2128FA0 Unknown Unknown Unknown nek5000 0000000000500D48 chcopy_ 434 math.f nek5000 000000000071BA91 exitti_ 399 comm_mpi.f nek5000 0000000000565F6A set_conv_char_ 603 convect.f nek5000 0000000000561C80 setup_convect_ 44 convect.f nek5000 00000000004E805C setics_ 393 ic.f nek5000 0000000000417B2A nek_init_ 164 drive1.f nek5000 0000000000415FCD MAIN__ 26 drive.f nek5000 0000000000415F8C Unknown Unknown Unknown libc.so.6 00002B76C20D14CA Unknown Unknown Unknown nek5000 0000000000415EBA Unknown Unknown Unknown -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 10816 on node ifh-taifun exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- By debbuging I've seen, that follwoing values in the subroutine set_conv_char (in convect.f) seem to be a little strange: nxd= 0 nyd= 7457184 nzd= 0 In my DEALIAS-file the follwing line is commented out: C common /dedim/ nxd,nyd,nzd I tried to run the simulation with the declaration in the DEALIAS-file and a include 'DEALIAS' in the subroutine set_conv_char (in convect.f) but the the run also is dying: ZWGJD: Minimum number of Gauss points is 1 -1633391248 call exitt: dying ... Do you maybe know where there might be my buck?? Thx a lot. Fred Here my SIZEu-file: C Dimension file to be included C C HCUBE array dimensions C PARAMETER (LDIM=3) PARAMETER (LX1=6,LY1=6,LZ1=6,LELT=4420,LELV=4420) parameter (lxd=9,lyd=lxd,lzd=lxd) parameter (lelx=6,lely=lelx,lelz=lelx) PARAMETER (LZL=3) PARAMETER (LX2=LX1-2) PARAMETER (LY2=LY1-2) PARAMETER (LZ2=LZ1-2) PARAMETER (LX3=LX1) PARAMETER (LY3=LY1) PARAMETER (LZ3=LZ1) parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes parameter (lpx2=1,lpy2=1,lpz2=1) parameter (lbelv=1,lbelt=1) ! MHD parameter (lbx1=1,lby1=1,lbz1=1) ! 
array sizes parameter (lbx2=1,lby2=1,lbz2=1) C LX1M=LX1 when there are moving meshes; =1 otherwise PARAMETER (LX1M=1,LY1M=1,LZ1M=1) PARAMETER (LDIMT= 1) c PARAMETER (LDIMT= 3) PARAMETER (LDIMT1=LDIMT+1) PARAMETER (LDIMT3=LDIMT+3) PARAMETER (LP = 8) PARAMETER (LELG = LP*LELT) c c c Note: In the new code, LELGEC should be about sqrt(LELG) c PARAMETER (LELGEC = 1) PARAMETER (LXYZ2 = 1) PARAMETER (LXZ21 = 1) c PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) PARAMETER (LXZ=LX1*LZ1) PARAMETER (LORDER=3) C FF PARAMETER (MAXOBJ=2,MAXMBR=LELT*6) PARAMETER (MAXOBJ=4,MAXMBR=LELT*6,lhis=100) C C Common Block Dimensions C PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) C C The parameter LVEC controls whether an additional 42 field arrays C are required for Steady State Solutions. If you are not using C Steady State, it is recommended that LVEC=1. C PARAMETER (LVEC=1) C C Uzawa projection array dimensions C C FF PARAMETER (MXPREV = 10) parameter (mxprev = 20) parameter (lgmres = 40) C C Split projection array dimensions C parameter(lmvec = 1) parameter(lsvec = 1) parameter(lstore=lmvec*lsvec) c c NONCONFORMING STUFF c parameter (maxmor = lelt) C C Array dimensions C COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 $,NX3,NY3,NZ3,NDIM,NFIELD,NID c automatically added by makenek parameter(lxo = lx1) ! max output grid size (lxo>=lx1) c automatically added by makenek parameter(lpart = 1 ) ! max number of particles c automatically added by makenek integer ax1,ay1,az1,ax2,ay2,az2 parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! running averages From nek5000-users at lists.mcs.anl.gov Mon Feb 15 09:51:26 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 15 Feb 2010 09:51:26 -0600 (CST) Subject: [Nek5000-users] Segmentation fault occuerd In-Reply-To: <20100215164519.lldvj9g5lwwso4wk@webmail.uni-karlsruhe.de> Message-ID: <29426427.573201266249086548.JavaMail.root@zimbra> Hi Fred, You have an old SIZEu file -- the common block /DIMN/ should have the following structure: COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID $,NXD,NYD,NZD Recompile it clean. Aleks ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Monday, February 15, 2010 9:45:19 AM GMT -06:00 US/Canada Central Subject: [Nek5000-users] Segmentation fault occuerd Hello, I have some trouble to run my simulation... When I start running my simulation, I receive the following error-message (I compiled the run in debugging mode -g and with traceback): forrtl: severe (174): SIGSEGV, segmentation fault occurred Image PC Routine Line Source libc.so.6 00002B76C2128FA0 Unknown Unknown Unknown nek5000 0000000000500D48 chcopy_ 434 math.f nek5000 000000000071BA91 exitti_ 399 comm_mpi.f nek5000 0000000000565F6A set_conv_char_ 603 convect.f nek5000 0000000000561C80 setup_convect_ 44 convect.f nek5000 00000000004E805C setics_ 393 ic.f nek5000 0000000000417B2A nek_init_ 164 drive1.f nek5000 0000000000415FCD MAIN__ 26 drive.f nek5000 0000000000415F8C Unknown Unknown Unknown libc.so.6 00002B76C20D14CA Unknown Unknown Unknown nek5000 0000000000415EBA Unknown Unknown Unknown -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 10816 on node ifh-taifun exiting without calling "finalize". 
This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- By debbuging I've seen, that follwoing values in the subroutine set_conv_char (in convect.f) seem to be a little strange: nxd= 0 nyd= 7457184 nzd= 0 In my DEALIAS-file the follwing line is commented out: C common /dedim/ nxd,nyd,nzd I tried to run the simulation with the declaration in the DEALIAS-file and a include 'DEALIAS' in the subroutine set_conv_char (in convect.f) but the the run also is dying: ZWGJD: Minimum number of Gauss points is 1 -1633391248 call exitt: dying ... Do you maybe know where there might be my buck?? Thx a lot. Fred Here my SIZEu-file: C Dimension file to be included C C HCUBE array dimensions C PARAMETER (LDIM=3) PARAMETER (LX1=6,LY1=6,LZ1=6,LELT=4420,LELV=4420) parameter (lxd=9,lyd=lxd,lzd=lxd) parameter (lelx=6,lely=lelx,lelz=lelx) PARAMETER (LZL=3) PARAMETER (LX2=LX1-2) PARAMETER (LY2=LY1-2) PARAMETER (LZ2=LZ1-2) PARAMETER (LX3=LX1) PARAMETER (LY3=LY1) PARAMETER (LZ3=LZ1) parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes parameter (lpx2=1,lpy2=1,lpz2=1) parameter (lbelv=1,lbelt=1) ! MHD parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes parameter (lbx2=1,lby2=1,lbz2=1) C LX1M=LX1 when there are moving meshes; =1 otherwise PARAMETER (LX1M=1,LY1M=1,LZ1M=1) PARAMETER (LDIMT= 1) c PARAMETER (LDIMT= 3) PARAMETER (LDIMT1=LDIMT+1) PARAMETER (LDIMT3=LDIMT+3) PARAMETER (LP = 8) PARAMETER (LELG = LP*LELT) c c c Note: In the new code, LELGEC should be about sqrt(LELG) c PARAMETER (LELGEC = 1) PARAMETER (LXYZ2 = 1) PARAMETER (LXZ21 = 1) c PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) PARAMETER (LXZ=LX1*LZ1) PARAMETER (LORDER=3) C FF PARAMETER (MAXOBJ=2,MAXMBR=LELT*6) PARAMETER (MAXOBJ=4,MAXMBR=LELT*6,lhis=100) C C Common Block Dimensions C PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) C C The parameter LVEC controls whether an additional 42 field arrays C are required for Steady State Solutions. If you are not using C Steady State, it is recommended that LVEC=1. C PARAMETER (LVEC=1) C C Uzawa projection array dimensions C C FF PARAMETER (MXPREV = 10) parameter (mxprev = 20) parameter (lgmres = 40) C C Split projection array dimensions C parameter(lmvec = 1) parameter(lsvec = 1) parameter(lstore=lmvec*lsvec) c c NONCONFORMING STUFF c parameter (maxmor = lelt) C C Array dimensions C COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 $,NX3,NY3,NZ3,NDIM,NFIELD,NID c automatically added by makenek parameter(lxo = lx1) ! max output grid size (lxo>=lx1) c automatically added by makenek parameter(lpart = 1 ) ! max number of particles c automatically added by makenek integer ax1,ay1,az1,ax2,ay2,az2 parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! running averages _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Sun Feb 21 22:12:41 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 21 Feb 2010 23:12:41 -0500 Subject: [Nek5000-users] LES and heat transfer in channel In-Reply-To: References: <4B70D7D3.2050407@vt.edu> Message-ID: <4B820439.3070304@vt.edu> Hi, thanks for the answers. 
Before considering DNS calculations, I'd like to look into the Smagorinski model. Extending the turbulent channel example in the nekton source, is the following the correct way to set user defined properties for the energy equation? subroutine uservp (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /cdsmag/ ediff(lx1,ly1,lz1,lelv) $ , thermdiff(lx1,ly1,lz1,lelv) ie = gllel(ieg) udiff = ediff(ix,iy,iz,ie) utrans = 1. if (ifield.eq.2) then udiff = thermdiff(ix,iy,iz,ie) utrans = 1. endif return end To answer some previous questions, the excess heat is subtracted from the inflow temperature (via an energy balance between the periodic sections). The flow is driven with a constant flow rate, much like in the jet example. Markus nek5000-users at lists.mcs.anl.gov wrote: > Hi Markus, > > it's hard if something is wrong with your setup or if you're running into other issues. > > I would start with a simple DNS of turbulent heat transfer in a channel e.g. similar to Hawamura et. al Journal of Head and Fluid Flow, 1998. > > Let's say you do the following simulation: > Re_tau = 180 > Pr = 0.71 > Domain: (12.8/2/6.4) where you're reference length is the channel half-height > Resolution: 24x12x24 elements using N=9 (this should be pretty well resolved) > Thermal BC: uniform heat-flux heating > > - How do the Nek results compare to the results of Hawamura? > > I think that's a good way of testing your setup and after doing this exercise you're sure that everything works correct (assuming you get the right answers). > Then the next step is to use a much coarser resolution together with a SGS-model (e.g. the simple filtering and/or dynamic Smagorinski). > > - How do the LES results compare to the DNS results? > > I would be not too surprised if you cannot get good results in your LES. It's well known that turbulent passive scalar mixing behaves quite different and a flow SGS model will not do a very good job in this case. > > > Cheers, > Stefan > > > On Feb 9, 2010, at 4:34 AM, nek5000-users at lists.mcs.anl.gov wrote: > >> Hi, >> >> I am running a channel LES simulation (periodic in streamwise and spanwise, length=width=9*channel height) in nek with the following parameters: >> -LES model: Filter last 3 modes, 5% filter weight; lx1=14; ld1=20 >> -Piece of the rea file >> " >> 1.00000 DENSITY >> 0.31250E-04 VISCOS >> 1.00000 RHOCP >> 1.00000 CONDUCT >> 4.00000 p99=3 ==> dealiasing turned on >> T IFFLOW >> T IFHEAT >> T IFTRAN >> T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. fields) >> F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) >> F IFAXIS >> F IFSTRS >> F IFSPLIT >> F IFMGRID >> F IFMODEL >> F IFKEPS >> F IFMVBD >> T IFCHAR >> " >> -Reynolds number based on 2*channel height: 12800 >> -Constant heat flux on walls >> -The streamwise boundary is a recycling one, I set it up similar to the turbJet example, but I added temperature recycling. To avoid "over-heating", I subtract the added thermal energy from the temperature when recycling >> -Some material parameters are set in the .usr file: >> " >> param(8) = param(2)/0.71 ! Prandt=0.71 >> cpfld(2,1) = param(8) ! 
conductivity >> " >> >> When I compare the spanwise averaged and time averaged Nusselt number on a wall, it is fairly constant with streamwise direction (which is good), but about 50% higher then the experimental one from the correlation >> Nu=0.022*Re^0.8*Pr^0.5 >> This might be because I am only 2.5 flow throughs away from the initial condition, but I wanted to make sure that everything is right before putting in more computing hours. >> Does the above look OK? Is there another explanation for the heat transfer coefficient being too high? >> >> Thanks, >> Markus >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >