From nek5000-users at lists.mcs.anl.gov Wed Apr 13 12:00:17 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 13 Apr 2011 19:00:17 +0200 Subject: [Nek5000-users] Direct/Adjoint perturbation mode Message-ID:

Hi Nek's,

I would like to investigate the weakly non-linear dynamics of a perturbation, and for that I need the adjoint perturbation. Before going any further, I'd like to be sure I understood how the native perturbation mode works.

*My understanding of the perturbation mode is the following:* One first builds the rhs F^(n+1), through *call makefp* in perturb.f, containing at the moment only the eventual forcing term and all explicit contributions from the extrapolation of the convective term. Then, *call cresvipp* transforms the rhs into F^(n+1) + D^T p^n - Hu^n, after which *call ophinv* solves delta u = H^(-1) (F^(n+1) + D^T p^n - Hu^n). This yields the non-divergence-free velocity field u^* = u^n + delta u (*call opadd2*). Last but not least, *call incomprp* solves the pressure equation *DQD^T delta p = -Du^** and then projects u^* onto the closest divergence-free velocity field. Am I correct up to now?

If so, I still have a question. It may seem straightforward to spectral-element people, but I am a newcomer to this world and I do not understand how the Helmholtz operator is built. I presume it is done through *call sethlm*, but I don't see why it returns h1 and h2 instead of one single matrix H.

*How I would modify the native perturbation mode into the adjoint one:* This will come next, as soon as I'm sure I have understood correctly how the native mode works.

Regards, -- Jean-Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nek5000-users at lists.mcs.anl.gov Fri Apr 15 03:17:45 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 10:17:45 +0200 Subject: [Nek5000-users] istep Message-ID:

Hi,

Just a very short (probably stupid) question. When restarting a case it seems to me that the istep parameter is reset to zero, which leads to new fld files overwriting the old. For example, if I run 10 timesteps and store blah.fld01 - blah.fld10, I then put blah.fld10 like this in the rea

1 PRESOLVE/RESTART OPTIONS *****
blah.fld10

And then I ask for 10 more timesteps by setting p11 to 10 in the .rea file. I would expect to get 10 new fld files blah.fld11 - blah.fld20, but instead my blah.fld01 - blah.fld10 files are overwritten. Should it be like this? Is there some parameter or similar that one has to make sure to specify when restarting to avoid this? Is there a cumulative istep I don't know of?

Thanks and best regards

Mikael -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nek5000-users at lists.mcs.anl.gov Fri Apr 15 04:19:18 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 11:19:18 +0200 Subject: [Nek5000-users] istep In-Reply-To: References: Message-ID:

Hi Mikael,

It's not a bug - it's a feature. Typically, I move my fld files into a new folder (e.g. run01, run02, ...) so that my data does not get overwritten. Why do you want to get a single fld-file set for different runs? Anyway, one way to do this is to use a shell script creating a set of symbolic links.
Here an example: foo.fld01 -> run01/foo.fld01 foo.fld02 -> run01/foo.fld02 foo.fld03 -> run02/foo.fld01 foo.fld04 -> run02/foo.fld02 Hth, Stefan On 4/15/11, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > Just a very short (probably stupid) question. When restarting a case it > seems to me that the istep parameters is reset to zero, which leads to new > fld files overwriting the old. For example, if I run 10 timesteps and store > blah.fld01 - blah.fld10 I then put blah.fld10 like this in the rea > > 1 PRESOLVE/RESTART OPTIONS ***** > blah.fld10 > > And then I ask for 10 more timesteps by setting p11 to 10 in the .rea file. > I would expect to get 10 new fld files blah.fld11 - blah.fld20, but instead > my blah.fld01 - blah.fld10 files are overwritten. Should it be like this? Is > there some parameter or similar that one has to make sure to specify when > restarting to avoid this? Is there a cumulative istep I don't know of? > > Thanks and best regards > > Mikael > From nek5000-users at lists.mcs.anl.gov Fri Apr 15 04:30:13 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 11:30:13 +0200 Subject: [Nek5000-users] istep In-Reply-To: References: Message-ID: Ok, thanks Mikael On 15 April 2011 11:19, wrote: > Hi Mikael, > > It's not a bug - it's a feature. > Typically, I move my fld into a new folder (e.g. run01, run02, ...) to > avoid that my data gets overwritten. Why do you want to get a single > fld-file set for different runs? Anyway, one way to do this is to use > a shell script creating a set of symbolic links. Here an example: > > foo.fld01 -> run01/foo.fld01 > foo.fld02 -> run01/foo.fld02 > foo.fld03 -> run02/foo.fld01 > foo.fld04 -> run02/foo.fld02 > > > Hth, > Stefan > > > > > On 4/15/11, nek5000-users at lists.mcs.anl.gov > wrote: > > Hi, > > > > Just a very short (probably stupid) question. When restarting a case it > > seems to me that the istep parameters is reset to zero, which leads to > new > > fld files overwriting the old. For example, if I run 10 timesteps and > store > > blah.fld01 - blah.fld10 I then put blah.fld10 like this in the rea > > > > 1 PRESOLVE/RESTART OPTIONS ***** > > blah.fld10 > > > > And then I ask for 10 more timesteps by setting p11 to 10 in the .rea > file. > > I would expect to get 10 new fld files blah.fld11 - blah.fld20, but > instead > > my blah.fld01 - blah.fld10 files are overwritten. Should it be like this? > Is > > there some parameter or similar that one has to make sure to specify when > > restarting to avoid this? Is there a cumulative istep I don't know of? > > > > Thanks and best regards > > > > Mikael > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Apr 15 07:15:18 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 07:15:18 -0500 (CDT) Subject: [Nek5000-users] istep In-Reply-To: References: Message-ID: Hi Mikael, I'm going to make a note of your request - it's not supported now, but could be w/ a bit of effort. The effort would be relatively small for the .fld files, a bit more for the .f0000 files, which is a newer feature. Generally I handle this issue by changing the name, e.g., myrun_1a.rea myrun_1b.rea myrun_1c.rea ... etc. 
In /tools/scripts/ the scripts mvv and cpn make it easy to do this -- cpn myrun_1a myrun_1b copies all the requisite files. mvv does the same except with move. Although it seems like a look a good idea to have the feature you're asking about, there are some pitfalls - namely, if you change the order midstream in a sequence of runs then Visit/postx will be unhappy processing the sequence. If I wish to process a long sequence of .fld or .f files, I usually ln them to a common base name (e.g., "myrun") using a script that can be cooked up in about 30 seconds. I then process the lot. Paul On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > Just a very short (probably stupid) question. When restarting a case it > seems to me that the istep parameters is reset to zero, which leads to new > fld files overwriting the old. For example, if I run 10 timesteps and store > blah.fld01 - blah.fld10 I then put blah.fld10 like this in the rea > > 1 PRESOLVE/RESTART OPTIONS ***** > blah.fld10 > > And then I ask for 10 more timesteps by setting p11 to 10 in the .rea file. > I would expect to get 10 new fld files blah.fld11 - blah.fld20, but instead > my blah.fld01 - blah.fld10 files are overwritten. Should it be like this? Is > there some parameter or similar that one has to make sure to specify when > restarting to avoid this? Is there a cumulative istep I don't know of? > > Thanks and best regards > > Mikael > From nek5000-users at lists.mcs.anl.gov Fri Apr 15 07:43:43 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 14:43:43 +0200 Subject: [Nek5000-users] istep In-Reply-To: References: Message-ID: Thanks again, I just thought there was an easier way and it seemed strange to me that I couldn't get the cumulative timestep. I don't usually change dt so for now I just use int(time/dt). And I don't really use fld files ;-), since we already have a fantastic inhouse visualization software ( http://persons.unik.no/andershe/papers/voluviz/voluviz.html). Best regards Mikael Mortensen The Norwegian Defence Research Establishment (FFI) 2007 Kjeller Norway On 15 April 2011 14:15, wrote: > > Hi Mikael, > > I'm going to make a note of your request - it's not supported > now, but could be w/ a bit of effort. The effort would be relatively small > for the .fld files, a bit more for the .f0000 files, which is a newer > feature. > > Generally I handle this issue by changing the name, e.g., > > myrun_1a.rea > myrun_1b.rea > myrun_1c.rea > ... > > etc. > > In /tools/scripts/ the scripts mvv and cpn make it easy > to do this -- > > cpn myrun_1a myrun_1b copies all the requisite files. > > mvv does the same except with move. > > Although it seems like a look a good idea to have the feature you're asking > about, there are some pitfalls - namely, if you change the order midstream > in a sequence of runs then Visit/postx > will be unhappy processing the sequence. > > If I wish to process a long sequence of .fld or .f files, I usually > ln them to a common base name (e.g., "myrun") using a script that > can be cooked up in about 30 seconds. I then process the lot. > > > Paul > > > > On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote: > > Hi, >> >> Just a very short (probably stupid) question. When restarting a case it >> seems to me that the istep parameters is reset to zero, which leads to new >> fld files overwriting the old. 
For example, if I run 10 timesteps and >> store >> blah.fld01 - blah.fld10 I then put blah.fld10 like this in the rea >> >> 1 PRESOLVE/RESTART OPTIONS ***** >> blah.fld10 >> >> And then I ask for 10 more timesteps by setting p11 to 10 in the .rea >> file. >> I would expect to get 10 new fld files blah.fld11 - blah.fld20, but >> instead >> my blah.fld01 - blah.fld10 files are overwritten. Should it be like this? >> Is >> there some parameter or similar that one has to make sure to specify when >> restarting to avoid this? Is there a cumulative istep I don't know of? >> >> Thanks and best regards >> >> Mikael >> >> _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nek5000-users at lists.mcs.anl.gov Fri Apr 15 03:47:50 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 10:47:50 +0200 Subject: [Nek5000-users] Scalar Derivatives / Scalar Dissipation Message-ID:

Hi Neks,

I would like to investigate the mixing of passive scalars in turbulent flows. Therefore, I tried to write a function to compute the scalar dissipation rate. I found someone on the list who did it for temperature and pressure, so I tried the same approach. Unfortunately, my gradients seem to be zero all the time. The code I wrote looks like:

      include 'SIZE'
      include 'TOTAL'
      include 'ZPER'
      include 'NEKUSE'

      common /scalar/ dc1dx(lx1*ly1,lz1*lelt)
     &              , dc1dy(lx1*ly1,lz1*lelt)
     &              , dc1dz(lx1*ly1,lz1*lelt)
      real scalar_diss
      scalar_diss = 0.0

      call gradm1(dc1dx, dc1dy, dc1dz, ps(1))
      do i=1,lx1*ly1
         do j=1,lz1*lelt
            scalar_diss = scalar_diss + (dc1dx(i, j)**2)*2
            scalar_diss = scalar_diss + (dc1dy(i, j)**2)*2
            scalar_diss = scalar_diss + (dc1dz(i, j)**2)*2
         enddo
      enddo
      write(6,*) scalar_diss

It looks pretty much the same as in https://lists.mcs.anl.gov/mailman/htdig/nek5000-users/2010-November/001106.html. The fields all have the same size (lx1=lx2=lx3 and so on). My testcase is a small rectangular grid initialized with the passive scalar set to one in the lower half and zero in the upper half. Thus, I would expect a value of about 2*(number of points). I hope you can help me. I have been struggling with this for some time now but could not find my mistake. Maybe it is just my lack of experience with Fortran.

Thank you
Alex -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nek5000-users at lists.mcs.anl.gov Fri Apr 15 10:39:19 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 10:39:19 -0500 (CDT) Subject: [Nek5000-users] Scalar Derivatives / Scalar Dissipation In-Reply-To: References: Message-ID:

Hi Alex,

I'm assuming your call is in userchk? There are several items of which you should be aware (mostly related to parallel processing, since the code fragment is typically being executed on multiple processors).

First - you should use t(....,2) as the argument to gradm1, if you want dissipation of PS1. (I'm assuming that you're also tracking temperature -- otherwise you can store your passive scalar in the temperature array.) The data layout is as follows:

      variable        pointer
      ----------------------------
      temperature:    t(1,1,1,1,1)
      pass. scal 1:   t(1,1,1,1,2)
      pass. scal 2:   t(1,1,1,1,3)
      etc.

You should make certain that ldimt in SIZE is large enough to support the desired number of passive scalars.
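As a quick sanity check before taking any gradients, something along these lines will confirm that the scalar really does live in t(...,2). This is only a minimal sketch: the helper name ps1_minmax is made up, while glmin/glmax are the standard global min/max reductions and nid.eq.0 restricts the output to rank 0.

      subroutine ps1_minmax   ! hypothetical helper; call from userchk
      include 'SIZE'
      include 'TOTAL'
c     global min/max of passive scalar 1, stored in t(...,2)
      n     = nx1*ny1*nz1*nelv
      psmin = glmin(t(1,1,1,1,2),n)
      psmax = glmax(t(1,1,1,1,2),n)
      if (nid.eq.0) write(6,*) istep,psmin,psmax,' PS1 min/max'
      return
      end

For the testcase described above (scalar equal to one in the lower half and zero in the upper half) this should report a minimum of 0 and a maximum of 1; if both come out as 0, the scalar is not stored where the gradient routine is looking.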
Assuming you want to compute dissipation rate for passive scalar 1, I would modify your code to read: call gradm1(dc1dx, dc1dy, dc1dz, t(1,1,1,1,2) ) n = nx1*ny1*nz1*nelv scalar_diss = ( glsc3(dc1dx,bm1,dc1dx,n) $ + glsc3(dc1dy,bm1,dc1dy,n) $ + glsc3(dc1dz,bm1,dc1dz,n) ) /volvm1 if (nid.eq.0) write(6,1) istep,time,scalar_diss 1 format(i9,1p2e14.6,' scalar diss') OK --- What does this do? The first part is as you had it, save that you take the gradient of t(....,2), which is where PS1 resides. The second part computes the weighted inner product, with each term having the form: (u,u) = u^T B u / volume where volume := 1^T B 1, with "1" the vector of all 1s, and B being the diagonal mass matrix. In general, in nek, one has / | f g dV = glsc3(f,bm1,g,n) , / assuming that f() and g() are arrays living on mesh 1. The array bm1() is the mass matrix B, on mesh 1 (the velocity/ temperature mesh - but not the pressure mesh, which is mesh 2). The function glsc3(a,b,c,n) takes the triple inner product: glsc3 = sum_i a_i b_i c_i , i=1,...,n Note that glsc3 works both in serial and parallel --- the sum is taken across all processors so that you get the correct result on all processors. If for some reason you desire a local (to a processor) inner product, you would use vlsc3(a,b,c,n) instead of glsc3, which stands for global scalar product, 3 arguments. We also have glsum, glsc2, glmax, glmin, etc. and of course their local counterparts vlsum, etc. Finally, the third change is to ensure that only node 0 writes the result, and not all of the mpi ranks. Otherwise you have P copies of the results written when running on P processors. Hope this helps. Paul On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote: > Hi Neks, > > I would like to investigate the mixing of passive scalars in turbulent flows. > Therefore, I tried to write a function to compute the scalar dissipation > rate. I found someone on the list who did it for temperature and pressure so > I tried the same way. Unfortunately, my gradients seem to be zero all the > time. The code i wrote looks like: > > include 'SIZE' > include 'TOTAL' > include 'ZPER' > include 'NEKUSE' > > common /scalar/ dc1dx(lx1*ly1,lz1*lelt) > & , dc1dy(lx1*ly1,lz1*lelt) > & , dc1dz(lx1*ly1,lz1*lelt) > real scalar_diss > scalar_diss = 0.0 > > call gradm1(dc1dx, dc1dy, dc1dz, ps(1)) > do i=1,lx1*ly1 > do j=1,lz1*lelt > scalar_diss = scalar_diss + (dc1dx(i, j)**2)*2 > scalar_diss = scalar_diss + (dc1dy(i, j)**2)*2 > scalar_diss = scalar_diss + (dc1dz(i, j)**2)*2 > enddo > enddo > write(6,*) scalar_diss > > I looks pretty much the same like in > https://lists.mcs.anl.gov/mailman/htdig/nek5000-users/2010-November/001106.html. > The fields have all the same size (lx1=lx2=lx3 and so on). My testcase is a > small rectangular grid initialized with the passive scalar set to one in the > lower half and zero in the upper half. Thus, I would expect some value about > 2*number of points. > I hope you can help me. I'm struggling with this for some time now, but could > not find my mistake. Maybe it is just my lack of experience with Fortran. > > Thank you > Alex > > > From nek5000-users at lists.mcs.anl.gov Fri Apr 15 12:08:14 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 19:08:14 +0200 Subject: [Nek5000-users] Scalar Derivatives / Scalar Dissipation In-Reply-To: References: Message-ID: Hi Alex, Here a routine I coded up some time ago to compute the scalar dissipation rate. 
Hth, Stefan c----------------------------------------------------------------------- subroutine scalDisp(chi,Z,D) c c compute scalar dissipation rate c chi := D * |grad Z|^2 c include 'SIZE' include 'TOTAL' real chi(lx1,ly1,lz1,1) real Z (lx1,ly1,lz1,1) real D (lx1,ly1,lz1,1) common /scrns/ w1(lx1,ly1,lz1,lelt) $ ,w2(lx1,ly1,lz1,lelt) $ ,w3(lx1,ly1,lz1,lelt) ntot = nx1*ny1*nz1*nelv call opgrad (w1,w2,w3,Z) call opdssum(w1,w2,w3) call opcolv (w1,w2,w3,binvm1) call magsqr (chi,w1,w2,w3,ntot) call col2 (chi,D,ntot) return end On 4/15/11, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Alex, > > I'm assuming your call is in userchk? > > There are several items of which you should be aware (mostly related > to parallel processing, since the code fragment is typically > being executed on multiple processors). > > First - you should use t(....,2) as the argument to gradm1, if > you want dissipation of PS1. (I'm assuming that you're also > tracking temperature -- otherwise you can store your passive > scalar in the temperature array. The data layout is as follows: > > > variable pointer > ---------------------------- > temperature: t(1,1,1,1,1) > pass. scal 1: t(1,1,1,1,2) > pass. scal 2: t(1,1,1,1,3) > etc. > > You should make certain that ldimt in SIZE is large enough > to support the desired number of passive scalars. > > Assuming you want to compute dissipation rate for passive > scalar 1, I would modify your code to read: > > call gradm1(dc1dx, dc1dy, dc1dz, t(1,1,1,1,2) ) > > n = nx1*ny1*nz1*nelv > scalar_diss = ( glsc3(dc1dx,bm1,dc1dx,n) > $ + glsc3(dc1dy,bm1,dc1dy,n) > $ + glsc3(dc1dz,bm1,dc1dz,n) ) /volvm1 > > if (nid.eq.0) write(6,1) istep,time,scalar_diss > 1 format(i9,1p2e14.6,' scalar diss') > > > OK --- What does this do? > > The first part is as you had it, save that you take the > gradient of t(....,2), which is where PS1 resides. > > The second part computes the weighted inner product, > with each term having the form: > > (u,u) = u^T B u / volume > > where volume := 1^T B 1, with "1" the vector of all 1s, > and B being the diagonal mass matrix. > > In general, in nek, one has > > > / > | f g dV = glsc3(f,bm1,g,n) , > / > > assuming that f() and g() are arrays living on mesh 1. The > array bm1() is the mass matrix B, on mesh 1 (the velocity/ > temperature mesh - but not the pressure mesh, which is > mesh 2). > > The function glsc3(a,b,c,n) takes the triple inner product: > > glsc3 = sum_i a_i b_i c_i , i=1,...,n > > Note that glsc3 works both in serial and parallel --- the > sum is taken across all processors so that you get the > correct result on all processors. If for some reason you > desire a local (to a processor) inner product, you would > use vlsc3(a,b,c,n) instead of glsc3, which stands for > global scalar product, 3 arguments. We also have glsum, > glsc2, glmax, glmin, etc. and of course their local > counterparts vlsum, etc. > > Finally, the third change is to ensure that only node 0 > writes the result, and not all of the mpi ranks. Otherwise > you have P copies of the results written when running on P > processors. > > Hope this helps. > > Paul > > > > > > > > > > > > > On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote: > >> Hi Neks, >> >> I would like to investigate the mixing of passive scalars in turbulent >> flows. >> Therefore, I tried to write a function to compute the scalar dissipation >> rate. I found someone on the list who did it for temperature and pressure >> so >> I tried the same way. 
>> Unfortunately, my gradients seem to be zero all the >> time. The code i wrote looks like: >> >> include 'SIZE' >> include 'TOTAL' >> include 'ZPER' >> include 'NEKUSE' >> >> common /scalar/ dc1dx(lx1*ly1,lz1*lelt) >> & , dc1dy(lx1*ly1,lz1*lelt) >> & , dc1dz(lx1*ly1,lz1*lelt) >> real scalar_diss >> scalar_diss = 0.0 >> >> call gradm1(dc1dx, dc1dy, dc1dz, ps(1)) >> do i=1,lx1*ly1 >> do j=1,lz1*lelt >> scalar_diss = scalar_diss + (dc1dx(i, j)**2)*2 >> scalar_diss = scalar_diss + (dc1dy(i, j)**2)*2 >> scalar_diss = scalar_diss + (dc1dz(i, j)**2)*2 >> enddo >> enddo >> write(6,*) scalar_diss >> >> I looks pretty much the same like in >> https://lists.mcs.anl.gov/mailman/htdig/nek5000-users/2010-November/001106.html. >> The fields have all the same size (lx1=lx2=lx3 and so on). My testcase is >> a >> small rectangular grid initialized with the passive scalar set to one in >> the >> lower half and zero in the upper half. Thus, I would expect some value >> about >> 2*number of points. >> I hope you can help me. I'm struggling with this for some time now, but >> could >> not find my mistake. Maybe it is just my lack of experience with Fortran. >> >> Thank you >> Alex >> >> >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >

From nek5000-users at lists.mcs.anl.gov Fri Apr 15 12:21:45 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 15 Apr 2011 19:08:14... Date: Fri, 15 Apr 2011 19:21:45 +0200 Subject: [Nek5000-users] Scalar Derivatives / Scalar Dissipation In-Reply-To: References: Message-ID:

Oops, I forgot to post the magsqr subroutine:

      subroutine magSqr(a,b1,b2,b3,n)
      include 'SIZE'
      include 'TOTAL'       ! needed for the if3d logical
      real a(1)
      real b1(1),b2(1),b3(1)

      if(if3d) then
         do i=1,n
            a(i) = b1(i)*b1(i) + b2(i)*b2(i) + b3(i)*b3(i)
         enddo
      else
         do i=1,n
            a(i) = b1(i)*b1(i) + b2(i)*b2(i)
         enddo
      endif

      return
      end

On 4/15/11, S K wrote: > Hi Alex, > > Here a routine I coded up some time ago to compute the scalar dissipation > rate. > > Hth, > Stefan > > c----------------------------------------------------------------------- > subroutine scalDisp(chi,Z,D) > c > c compute scalar dissipation rate > c chi := D * |grad Z|^2 > c > include 'SIZE' > include 'TOTAL' > > real chi(lx1,ly1,lz1,1) > real Z (lx1,ly1,lz1,1) > real D (lx1,ly1,lz1,1) > > common /scrns/ w1(lx1,ly1,lz1,lelt) > $ ,w2(lx1,ly1,lz1,lelt) > $ ,w3(lx1,ly1,lz1,lelt) > > ntot = nx1*ny1*nz1*nelv > > call opgrad (w1,w2,w3,Z) > call opdssum(w1,w2,w3) > call opcolv (w1,w2,w3,binvm1) > > call magsqr (chi,w1,w2,w3,ntot) > call col2 (chi,D,ntot) > > return > end > > > On 4/15/11, nek5000-users at lists.mcs.anl.gov > wrote: >> >> Hi Alex, >> >> I'm assuming your call is in userchk? >> >> There are several items of which you should be aware (mostly related >> to parallel processing, since the code fragment is typically >> being executed on multiple processors). >> >> First - you should use t(....,2) as the argument to gradm1, if >> you want dissipation of PS1. (I'm assuming that you're also >> tracking temperature -- otherwise you can store your passive >> scalar in the temperature array. The data layout is as follows: >> >> >> variable pointer >> ---------------------------- >> temperature: t(1,1,1,1,1) >> pass. scal 1: t(1,1,1,1,2) >> pass. scal 2: t(1,1,1,1,3) >> etc. >> >> You should make certain that ldimt in SIZE is large enough >> to support the desired number of passive scalars.
>> >> Assuming you want to compute dissipation rate for passive >> scalar 1, I would modify your code to read: >> >> call gradm1(dc1dx, dc1dy, dc1dz, t(1,1,1,1,2) ) >> >> n = nx1*ny1*nz1*nelv >> scalar_diss = ( glsc3(dc1dx,bm1,dc1dx,n) >> $ + glsc3(dc1dy,bm1,dc1dy,n) >> $ + glsc3(dc1dz,bm1,dc1dz,n) ) /volvm1 >> >> if (nid.eq.0) write(6,1) istep,time,scalar_diss >> 1 format(i9,1p2e14.6,' scalar diss') >> >> >> OK --- What does this do? >> >> The first part is as you had it, save that you take the >> gradient of t(....,2), which is where PS1 resides. >> >> The second part computes the weighted inner product, >> with each term having the form: >> >> (u,u) = u^T B u / volume >> >> where volume := 1^T B 1, with "1" the vector of all 1s, >> and B being the diagonal mass matrix. >> >> In general, in nek, one has >> >> >> / >> | f g dV = glsc3(f,bm1,g,n) , >> / >> >> assuming that f() and g() are arrays living on mesh 1. The >> array bm1() is the mass matrix B, on mesh 1 (the velocity/ >> temperature mesh - but not the pressure mesh, which is >> mesh 2). >> >> The function glsc3(a,b,c,n) takes the triple inner product: >> >> glsc3 = sum_i a_i b_i c_i , i=1,...,n >> >> Note that glsc3 works both in serial and parallel --- the >> sum is taken across all processors so that you get the >> correct result on all processors. If for some reason you >> desire a local (to a processor) inner product, you would >> use vlsc3(a,b,c,n) instead of glsc3, which stands for >> global scalar product, 3 arguments. We also have glsum, >> glsc2, glmax, glmin, etc. and of course their local >> counterparts vlsum, etc. >> >> Finally, the third change is to ensure that only node 0 >> writes the result, and not all of the mpi ranks. Otherwise >> you have P copies of the results written when running on P >> processors. >> >> Hope this helps. >> >> Paul >> >> >> >> >> >> >> >> >> >> >> >> >> On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote: >> >>> Hi Neks, >>> >>> I would like to investigate the mixing of passive scalars in turbulent >>> flows. >>> Therefore, I tried to write a function to compute the scalar dissipation >>> rate. I found someone on the list who did it for temperature and >>> pressure >>> so >>> I tried the same way. Unfortunately, my gradients seem to be zero all >>> the >>> time. The code i wrote looks like: >>> >>> include 'SIZE' >>> include 'TOTAL' >>> include 'ZPER' >>> include 'NEKUSE' >>> >>> common /scalar/ dc1dx(lx1*ly1,lz1*lelt) >>> & , dc1dy(lx1*ly1,lz1*lelt) >>> & , dc1dz(lx1*ly1,lz1*lelt) >>> real scalar_diss >>> scalar_diss = 0.0 >>> >>> call gradm1(dc1dx, dc1dy, dc1dz, ps(1)) >>> do i=1,lx1*ly1 >>> do j=1,lz1*lelt >>> scalar_diss = scalar_diss + (dc1dx(i, j)**2)*2 >>> scalar_diss = scalar_diss + (dc1dy(i, j)**2)*2 >>> scalar_diss = scalar_diss + (dc1dz(i, j)**2)*2 >>> enddo >>> enddo >>> write(6,*) scalar_diss >>> >>> I looks pretty much the same like in >>> https://lists.mcs.anl.gov/mailman/htdig/nek5000-users/2010-November/001106.html. >>> The fields have all the same size (lx1=lx2=lx3 and so on). My testcase >>> is >>> a >>> small rectangular grid initialized with the passive scalar set to one in >>> the >>> lower half and zero in the upper half. Thus, I would expect some value >>> about >>> 2*number of points. >> >>> I hope you can help me. I'm struggling with this for some time now, but >>> could >>> not find my mistake. Maybe it is just my lack of experience with >>> Fortran. 
>>> >>> Thank you >>> Alex >>> >>> >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > From nek5000-users at lists.mcs.anl.gov Tue Apr 19 12:34:39 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 19 Apr 2011 19:34:39 +0200 Subject: [Nek5000-users] Robin boundary conditions for the velocity Message-ID: Hi Nek's. I would like to use the following boundary condition as my outflow condition for the adjoint perturbation mode: p = du/dz = dv/dz = 0 dw/dz = Re * Uz * uz where Uz is the z-component of the base flow. I would be tempted to do something like: subroutine usrchk [...] common /mygrad/ gradwx(lx1,ly1,lz1,lelt) $ gradwy(lx1,ly1,lz1,lelt) $ gradw(zlx1,ly1,lz1,lelt) call gradm1(gradwx,gradwy,gradz,vz) [...] and then in userbc subroutine userbc (ix,iy,iz,iside,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /mygrad/ gradwx(lx1,ly1,lz1,lelt) $ , gradwy(lx1,ly1,lz1,lelt) $ , gradwz(lx1,ly1,lz1,lelt) integer e,eg e = gllel(eg) ! global element number to processor-local el. # c Assuming param(2)<0 in .rea, Re = 1./param(2) since in connect2 or subs1 I do not remember there is first param(2) = -1/param(2) uz = (1./param(2)) * vz(ix,iy,iz,ie) * gradwz(ix,iy,iz,e) However I am not sure how to handle the other boundary conditions on u, v and p. Regards, JC -- Jean-Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Apr 20 09:03:03 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 20 Apr 2011 16:03:03 +0200 Subject: [Nek5000-users] Robin boundary conditions for the velocity In-Reply-To: References: Message-ID: I actually miswrote my boundary conditions. What I want to impose is: (U.n) u + (p * IdentityMatrix + inv(Re) * grad(u)) . n = 0 that is, if n = (0,0,1)^T = ez: p = 0 dux/dz = Re * W * ux duy/dz = Re * W * uy duz/dz = Re * W * uz Concerning the velocity bc, I assume once again that the following code would do the trick: subroutine usrchk [...] common /mygrad/ gradux(lx1,ly1,lz1,lelt), ... , gradwz(lx1,ly1,lz1,lelt) call gradm1(graduv,graduy,graduz,vx) call gradm1(gradvx,gradvy,gradvz,vy) call gradm1(gradwx,gradwy,gradz,vz) [...] and then in userbc subroutine userbc (ix,iy,iz,iside,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /mygrad/ ... integer e,eg e = gllel(eg) ! global element number to processor-local el. # c Assuming param(2)<0 in .rea, Re = 1./param(2) since in connect2 or subs1 I do not remember there is first param(2) = -1/param(2) ux = (1./param(2)) * vz(ix,iy,iz,ie) * graduz(ix,iy,iz,ie) uy = (1./param(2)) * vz(ix,iy,iz,ie) * gradvz(ix,iy,iz,ie) uz = (1./param(2)) * vz(ix,iy,iz,ie) * gradwz(ix,iy,iz,ie) However, how do I make the code take this into account along with keeping the p = 0 imposed at the outflow? I presume I should use the 'o' bc character instead of 'O' in my .rea boundary conditions definition section. Is that correct? Best regards, JC On 19 April 2011 19:34, Jean-Christophe Loiseau wrote: > Hi Nek's. > > I would like to use the following boundary condition as my outflow > condition for the adjoint perturbation mode: > > p = du/dz = dv/dz = 0 > dw/dz = Re * Uz * uz > > where Uz is the z-component of the base flow. I would be tempted to do > something like: > > subroutine usrchk > > [...] 
> common /mygrad/ gradwx(lx1,ly1,lz1,lelt) > $ gradwy(lx1,ly1,lz1,lelt) > $ gradw(zlx1,ly1,lz1,lelt) > > call gradm1(gradwx,gradwy,gradz,vz) > > [...] > > and then in userbc > > subroutine userbc (ix,iy,iz,iside,eg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > common /mygrad/ gradwx(lx1,ly1,lz1,lelt) > $ , gradwy(lx1,ly1,lz1,lelt) > $ , gradwz(lx1,ly1,lz1,lelt) > > integer e,eg > > e = gllel(eg) ! global element number to processor-local el. # > > c Assuming param(2)<0 in .rea, Re = 1./param(2) since in connect2 or subs1 > I do not remember there is first param(2) = -1/param(2) > > uz = (1./param(2)) * vz(ix,iy,iz,ie) * gradwz(ix,iy,iz,e) > > However I am not sure how to handle the other boundary conditions on u, v > and p. > > Regards, > JC > > -- > Jean-Christophe > -- Jean-Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Apr 21 13:36:39 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 21 Apr 2011 13:36:39 -0500 (CDT) Subject: [Nek5000-users] Robin boundary conditions for the velocity In-Reply-To: Message-ID: <167218127.129151.1303410999538.JavaMail.root@zimbra.anl.gov> Hi Jean-Christophe, Sorry for the delay with the reply to your emails -- we are looking into them and will give a detailed answer next week. Thanks, Aleks ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: "Nek 5000" Sent: Wednesday, April 20, 2011 9:03:03 AM Subject: Re: [Nek5000-users] Robin boundary conditions for the velocity I actually miswrote my boundary conditions. What I want to impose is: (U.n) u + (p * IdentityMatrix + inv(Re) * grad(u)) . n = 0 that is, if n = (0,0,1)^T = ez: p = 0 dux/dz = Re * W * ux duy/dz = Re * W * uy duz/dz = Re * W * uz Concerning the velocity bc, I assume once again that the following code would do the trick: subroutine usrchk [...] common /mygrad/ gradux(lx1,ly1,lz1,lelt), ... , gradwz(lx1,ly1,lz1,lelt) call gradm1(graduv,graduy,graduz,vx) call gradm1(gradvx,gradvy,gradvz,vy) call gradm1(gradwx,gradwy,gradz,vz) [...] and then in userbc subroutine userbc (ix,iy,iz,iside,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /mygrad/ ... integer e,eg e = gllel(eg) ! global element number to processor-local el. # c Assuming param(2)<0 in .rea, Re = 1./param(2) since in connect2 or subs1 I do not remember there is first param(2) = -1/param(2) ux = (1./param(2)) * vz(ix,iy,iz,ie) * graduz(ix,iy,iz,ie) uy = (1./param(2)) * vz(ix,iy,iz,ie) * gradvz(ix,iy,iz,ie) uz = (1./param(2)) * vz(ix,iy,iz,ie) * gradwz(ix,iy,iz,ie) However, how do I make the code take this into account along with keeping the p = 0 imposed at the outflow? I presume I should use the 'o' bc character instead of 'O' in my .rea boundary conditions definition section. Is that correct? Best regards, JC On 19 April 2011 19:34, Jean-Christophe Loiseau < loiseau.jc at gmail.com > wrote: Hi Nek's. I would like to use the following boundary condition as my outflow condition for the adjoint perturbation mode: p = du/dz = dv/dz = 0 dw/dz = Re * Uz * uz where Uz is the z-component of the base flow. I would be tempted to do something like: subroutine usrchk [...] common /mygrad/ gradwx(lx1,ly1,lz1,lelt) $ gradwy(lx1,ly1,lz1,lelt) $ gradw(zlx1,ly1,lz1,lelt) call gradm1(gradwx,gradwy,gradz,vz) [...] 
and then in userbc subroutine userbc (ix,iy,iz,iside,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /mygrad/ gradwx(lx1,ly1,lz1,lelt) $ , gradwy(lx1,ly1,lz1,lelt) $ , gradwz(lx1,ly1,lz1,lelt) integer e,eg e = gllel(eg) ! global element number to processor-local el. # c Assuming param(2)<0 in .rea, Re = 1./param(2) since in connect2 or subs1 I do not remember there is first param(2) = -1/param(2) uz = (1./param(2)) * vz(ix,iy,iz,ie) * gradwz(ix,iy,iz,e) However I am not sure how to handle the other boundary conditions on u, v and p. Regards, JC -- Jean-Christophe -- Jean-Christophe _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Tue Apr 26 03:57:07 2011 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 26 Apr 2011 10:57:07 +0200 Subject: [Nek5000-users] Scalar Derivatives / Scalar Dissipation Message-ID: Thanks! This helped a lot. Alex -------------- next part -------------- An HTML attachment was scrubbed... URL:
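Coming back to the sethlm / h1 / h2 question raised in the first message of this digest, which went unanswered above: in nek the Helmholtz matrix H is never assembled. It is applied matrix-free, with h1 multiplying the stiffness (weak Laplacian) part and h2 the mass-matrix part, i.e. H u = h1*(A u) + h2*(B u); for the transient velocity solve sethlm fills (roughly) h1 with the viscosity field and h2 with the density times the leading BDF coefficient over dt, and ophinv then inverts the operator defined by that pair. The following is only an illustrative sketch: the wrapper name apply_H is made up, while axhelm is the routine in the source that actually applies the operator.

      subroutine apply_H (au,u,h1,h2)  ! hypothetical wrapper
c     apply H u = h1*(A u) + h2*(B u) on mesh 1 via axhelm,
c     without ever forming the matrix H
      include 'SIZE'
      include 'TOTAL'
      real au(lx1*ly1*lz1*lelt), u (lx1*ly1*lz1*lelt)
      real h1(lx1*ly1*lz1*lelt), h2(lx1*ly1*lz1*lelt)
      imesh = 1                        ! velocity/temperature mesh
      isd   = 1
      call axhelm (au,u,h1,h2,imesh,isd)
      return
      end

This is why sethlm returns two coefficient fields rather than one matrix: h1 carries the viscous contribution and h2 the implicit time-derivative (mass) contribution, and together the pair fully defines the operator that the Helmholtz solver inverts.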