[Nek5000-users] Scalar Derivatives / Scalar Dissipation
nek5000-users at lists.mcs.anl.gov
Fri Apr 15 12:21:45 CDT 2011
Oops, I forgot to post the magSqr subroutine:
      subroutine magSqr(a,b1,b2,b3,n)
c
c     compute the pointwise squared magnitude:
c     a := b1^2 + b2^2 + b3^2   (b3 ignored in 2D)
c
      include 'SIZE'
      include 'INPUT'          ! for if3d

      real a(1)
      real b1(1),b2(1),b3(1)

      if (if3d) then
         do i=1,n
            a(i) = b1(i)*b1(i) + b2(i)*b2(i) + b3(i)*b3(i)
         enddo
      else
         do i=1,n
            a(i) = b1(i)*b1(i) + b2(i)*b2(i)
         enddo
      endif

      return
      end
On 4/15/11, S K <stgeke at googlemail.com> wrote:
> Hi Alex,
>
> Here is a routine I coded up some time ago to compute the scalar
> dissipation rate.
>
> Hth,
> Stefan
>
> c-----------------------------------------------------------------------
>       subroutine scalDisp(chi,Z,D)
> c
> c     compute scalar dissipation rate
> c     chi := D * |grad Z|^2
> c
>       include 'SIZE'
>       include 'TOTAL'
>
>       real chi(lx1,ly1,lz1,1)
>       real Z  (lx1,ly1,lz1,1)
>       real D  (lx1,ly1,lz1,1)
>
>       common /scrns/ w1(lx1,ly1,lz1,lelt)
>      $             , w2(lx1,ly1,lz1,lelt)
>      $             , w3(lx1,ly1,lz1,lelt)
>
>       ntot = nx1*ny1*nz1*nelv
>
>       call opgrad (w1,w2,w3,Z)       ! grad Z, weak form
>       call opdssum(w1,w2,w3)         ! assemble across element faces
>       call opcolv (w1,w2,w3,binvm1)  ! multiply by inverse mass matrix
>
>       call magSqr (chi,w1,w2,w3,ntot)
>       call col2   (chi,D,ntot)       ! chi := chi * D (pointwise)
>
>       return
>       end
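>
> A typical call from userchk might look like this (a sketch only:
> PS1 is assumed to sit in slot 2 of the t array, chi and Dco are
> hypothetical work arrays, and D is simply filled with an assumed
> constant diffusivity):
>
>       real chi(lx1,ly1,lz1,lelt), Dco(lx1,ly1,lz1,lelt)
>       ntot = nx1*ny1*nz1*nelv
>       call cfill   (Dco,1.e-3,ntot)   ! constant D (assumed value)
>       call scalDisp(chi,t(1,1,1,1,2),Dco)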
>
>
> On 4/15/11, nek5000-users at lists.mcs.anl.gov
> <nek5000-users at lists.mcs.anl.gov> wrote:
>>
>> Hi Alex,
>>
>> I'm assuming your call is in userchk?
>>
>> There are several items of which you should be aware (mostly related
>> to parallel processing, since the code fragment is typically
>> being executed on multiple processors).
>>
>> First - you should use t(....,2) as the argument to gradm1 if
>> you want the dissipation of PS1. (I'm assuming that you're also
>> tracking temperature -- otherwise you could store your passive
>> scalar in the temperature array.) The data layout is as follows:
>>
>>
>> variable        pointer
>> ------------------------------
>> temperature:    t(1,1,1,1,1)
>> pass. scal 1:   t(1,1,1,1,2)
>> pass. scal 2:   t(1,1,1,1,3)
>> etc.
>>
>> You should make certain that ldimt in SIZE is large enough
>> to support the desired number of passive scalars.
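>>
>> For example, with temperature plus one passive scalar, the
>> relevant line in SIZE would read (a sketch; the other
>> parameters in SIZE are unchanged):
>>
>>       parameter (ldimt=2)   ! temperature + 1 passive scalar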
>>
>> Assuming you want to compute dissipation rate for passive
>> scalar 1, I would modify your code to read:
>>
>>       call gradm1(dc1dx, dc1dy, dc1dz, t(1,1,1,1,2))
>>
>>       n = nx1*ny1*nz1*nelv
>>       scalar_diss = ( glsc3(dc1dx,bm1,dc1dx,n)
>>      $              + glsc3(dc1dy,bm1,dc1dy,n)
>>      $              + glsc3(dc1dz,bm1,dc1dz,n) ) / volvm1
>>
>>       if (nid.eq.0) write(6,1) istep,time,scalar_diss
>>     1 format(i9,1p2e14.6,' scalar diss')
>>
>>
>> OK --- What does this do?
>>
>> The first part is as you had it, save that you take the
>> gradient of t(....,2), which is where PS1 resides.
>>
>> The second part computes the weighted inner product,
>> with each term having the form:
>>
>> (u,u) = u^T B u / volume
>>
>> where volume := 1^T B 1, with "1" the vector of all 1s,
>> and B being the diagonal mass matrix.
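>>
>> In code, that volume is just glsum(bm1,n); for mesh 1 it is
>> already stored in volvm1, so e.g. (a sketch):
>>
>>       vol = glsum(bm1,n)   ! equals volvm1 on the velocity mesh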
>>
>> In general, in nek, one has
>>
>>
>>      /
>>      | f g dV  =  glsc3(f,bm1,g,n) ,
>>      /
>>
>> assuming that f() and g() are arrays living on mesh 1. The
>> array bm1() is the mass matrix B, on mesh 1 (the velocity/
>> temperature mesh - but not the pressure mesh, which is
>> mesh 2).
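>>
>> For example, the volume average of a field f on mesh 1 can be
>> computed as (a sketch; "one" is a hypothetical work array
>> dimensioned like f):
>>
>>       call rone(one,n)                  ! fill with 1.0
>>       fbar = glsc3(f,bm1,one,n)/volvm1  ! (1/V) * integral of f dV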
>>
>> The function glsc3(a,b,c,n) takes the triple inner product:
>>
>> glsc3 = sum_i a_i b_i c_i , i=1,...,n
>>
>> Note that glsc3 works both in serial and parallel --- the
>> sum is taken across all processors so that you get the
>> correct result on all processors. (The name stands for
>> global scalar product, 3 arguments.) If for some reason you
>> desire an inner product local to a processor, you would
>> use vlsc3(a,b,c,n) instead of glsc3. We also have glsum,
>> glsc2, glmax, glmin, etc., and of course their local
>> counterparts vlsum, etc.
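>>
>> For example, to report the global extrema of PS1 (a sketch):
>>
>>       tmax = glmax(t(1,1,1,1,2),n)
>>       tmin = glmin(t(1,1,1,1,2),n)
>>       if (nid.eq.0) write(6,*) istep,tmin,tmax,' PS1 min/max'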
>>
>> Finally, the third change is to ensure that only node 0
>> writes the result, and not all of the MPI ranks. Otherwise
>> you get P copies of the result when running on P
>> processors.
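>>
>> One last practical note: the fragment above assumes that dc1dx,
>> dc1dy, and dc1dz are declared as in your original code. A
>> self-contained set of declarations for userchk might look like
>> (a sketch):
>>
>>       common /scalar/ dc1dx(lx1,ly1,lz1,lelt)
>>      $              , dc1dy(lx1,ly1,lz1,lelt)
>>      $              , dc1dz(lx1,ly1,lz1,lelt)
>>       real scalar_diss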
>>
>> Hope this helps.
>>
>> Paul
>>
>> On Fri, 15 Apr 2011, nek5000-users at lists.mcs.anl.gov wrote:
>>
>>> Hi Neks,
>>>
>>> I would like to investigate the mixing of passive scalars in
>>> turbulent flows, so I tried to write a function to compute the
>>> scalar dissipation rate. I found someone on the list who did it
>>> for temperature and pressure, so I tried the same approach.
>>> Unfortunately, my gradients seem to be zero all the time. The
>>> code I wrote looks like:
>>>
>>>       include 'SIZE'
>>>       include 'TOTAL'
>>>       include 'ZPER'
>>>       include 'NEKUSE'
>>>
>>>       common /scalar/ dc1dx(lx1*ly1,lz1*lelt)
>>>      &              , dc1dy(lx1*ly1,lz1*lelt)
>>>      &              , dc1dz(lx1*ly1,lz1*lelt)
>>>       real scalar_diss
>>>       scalar_diss = 0.0
>>>
>>>       call gradm1(dc1dx, dc1dy, dc1dz, ps(1))
>>>       do i=1,lx1*ly1
>>>          do j=1,lz1*lelt
>>>             scalar_diss = scalar_diss + (dc1dx(i, j)**2)*2
>>>             scalar_diss = scalar_diss + (dc1dy(i, j)**2)*2
>>>             scalar_diss = scalar_diss + (dc1dz(i, j)**2)*2
>>>          enddo
>>>       enddo
>>>       write(6,*) scalar_diss
>>>
>>> It looks pretty much the same as in
>>> https://lists.mcs.anl.gov/mailman/htdig/nek5000-users/2010-November/001106.html.
>>> The fields all have the same size (lx1=lx2=lx3 and so on). My test
>>> case is a small rectangular grid initialized with the passive scalar
>>> set to one in the lower half and zero in the upper half. Thus, I
>>> would expect a value of about 2 * (number of points).
>>>
>>> I hope you can help me. I've been struggling with this for some
>>> time now but could not find my mistake. Maybe it is just my lack
>>> of experience with Fortran.
>>>
>>> Thank you
>>> Alex
>>>
>>>
>>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
>