From nek5000-users at lists.mcs.anl.gov  Sun Apr  1 03:18:37 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 1 Apr 2018 08:18:37 +0000
Subject: [Nek5000-users] 'comp_vort3' in 2D

Hi Saikat,

Although I don't normally advocate modifying the source, the best way in
this case is probably to modify subroutine advab in navier1.f in /core.
Something like the following should work, where you supply your constant
through a common block.

hth, Paul

      common /myscale_factor/ scale_nl

      NTOT1 = lx1*ly1*lz1*NELV
      CALL CONVOP  (TA1,VX)
      call cmult   (ta1,scale_nl,ntot1)
      CALL CONVOP  (TA2,VY)
      call cmult   (ta2,scale_nl,ntot1)
      CALL SUBCOL3 (BFX,BM1,TA1,NTOT1)
      CALL SUBCOL3 (BFY,BM1,TA2,NTOT1)
      IF (ldim.EQ.2) THEN
         CALL RZERO (TA3,NTOT1)
      ELSE
         CALL CONVOP  (TA3,VZ)
         call cmult   (ta3,scale_nl,ntot1)
         CALL SUBCOL3 (BFZ,BM1,TA3,NTOT1)
      ENDIF

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Sunday, March 25, 2018 8:47:39 AM
To: nek5000-users
Subject: Re: [Nek5000-users] 'comp_vort3' in 2D

Thanks for the clarification.

Saikat

Saikat Mukherjee, PhD Student, Paul Research Group - http://www.me.vt.edu/mpaul/
Engineering Science and Mechanics, Virginia Tech.

On Sun, Mar 25, 2018 at 4:40 AM, > wrote:

Hi,

if you look into the code you see that for 2D cases the result is stored
in vort(:,1), even though it is of course the z-vorticity.

      if (if3d) then
c        work1=dw/dy ; work2=dv/dz
         call dudxyz(work1,w,rym1,sym1,tym1,jacm1,1,2)
         call dudxyz(work2,v,rzm1,szm1,tzm1,jacm1,1,3)
         call sub3(vort(1,1),work1,work2,ntot)
c        work1=du/dz ; work2=dw/dx
         call dudxyz(work1,u,rzm1,szm1,tzm1,jacm1,1,3)
         call dudxyz(work2,w,rxm1,sxm1,txm1,jacm1,1,1)
         call sub3(vort(1,2),work1,work2,ntot)
c        work1=dv/dx ; work2=du/dy
         call dudxyz(work1,v,rxm1,sxm1,txm1,jacm1,1,1)
         call dudxyz(work2,u,rym1,sym1,tym1,jacm1,1,2)
         call sub3(vort(1,3),work1,work2,ntot)
      else
c        work1=dv/dx ; work2=du/dy
         call dudxyz(work1,v,rxm1,sxm1,txm1,jacm1,1,1)
         call dudxyz(work2,u,rym1,sym1,tym1,jacm1,1,2)
         call sub3(vort,work1,work2,ntot)
      endif

Philipp

On 2018-03-25 00:49, nek5000-users at lists.mcs.anl.gov wrote:

Hello Neks,

I am using 'comp_vort3' for a 2D flow with a forcing in the x direction:

      call comp_vort3 (vort, w1, w2, vx, vy, vz)
      call outpost    (vort(1,1), vort(1,2), vort(1,3), pr, t, 'vrt')

In this case the x and y components of vorticity should be trivially zero,
and only the z component should remain. However, I am getting a non-zero
x-component, vort(1,1), while vort(1,2) and vort(1,3) are zero. If I add a
z-direction, comp_vort3 returns a non-zero vort(1,3) and the x and y
components are zero, which should be the case. So my question is: in 2D,
does comp_vort3 return only one value, which is in vort(1,1)? The vorticity
fields in both cases look rather similar.

Thanks,
Saikat

Saikat Mukherjee, PhD Student, Paul Research Group - http://www.me.vt.edu/mpaul/
Engineering Science and Mechanics, Virginia Tech.
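For reference, the common block above still has to be populated on the user
side; a minimal sketch (assuming the modified advab shown above, and with a
purely illustrative value of 0.5) is to set it once in usrdat in the case
.usr file, which runs before time stepping starts:

      subroutine usrdat
      include 'SIZE'
      include 'TOTAL'

      real scale_nl
      common /myscale_factor/ scale_nl   ! must match the block in advab

      scale_nl = 0.5                     ! illustrative: scales u.grad(u) by 1/2

      return
      end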
From nek5000-users at lists.mcs.anl.gov  Sun Apr  1 10:46:02 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 1 Apr 2018 15:46:02 +0000
Subject: [Nek5000-users] 'comp_vort3' in 2D

Hi Saikat,

In retrospect, you can also do this directly by changing param(1) -- the
density -- and changing dt (if you desire to keep rho*(u^n - u^{n-1})/dt
constant).

So, Nek5000 solves:

   rho ( du/dt + u.grad u ) = remaining terms

You can see that modifying rho is all you need to do. Note that "dt" is an
independent parameter -- it is always set to satisfy the governing
stability criteria on the explicit treatment of the advection term. Give
this some careful thought...

hth Paul

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Sunday, April 1, 2018 3:18:37 AM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] 'comp_vort3' in 2D

Hi Saikat,

Although I don't normally advocate modifying the source, the best way in
this case is probably to modify subroutine advab in navier1.f in /core.
[...]
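To spell out the scaling Paul is describing (a gloss added for clarity, not
text from the thread): in the discretized momentum equation the time
derivative enters as rho*(u^n - u^{n-1})/dt and the advection term as
rho*u.grad u, while the pressure, viscous, and forcing terms carry no rho.
Scaling rho by a factor alpha and dt by the same factor leaves rho/dt --
and hence the time-derivative term -- unchanged, but multiplies the
advection term by alpha, so the equation actually advanced is

   du/dt + alpha * u.grad u = -grad p + mu*lap u + f ,

which is the same rescaling of the nonlinear term as the advab modification
above. The careful thought Paul asks for: dt must still respect the CFL
limit of the explicitly treated advection term, so alpha and dt cannot be
chosen independently of stability.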
From nek5000-users at lists.mcs.anl.gov  Sat Apr  7 22:46:46 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sat, 7 Apr 2018 23:46:46 -0400
Subject: [Nek5000-users] Post-process in Matlab

Hi Neks,

I have a 3D geometry that I want to post-process in Matlab. I saw the post
"Read Nek output in Matlab", but I'm not sure whether it can handle 3D
data. Any help will be appreciated. The mesh consists of 22,000 elements
with N=7.

Thanks,
Sula

From nek5000-users at lists.mcs.anl.gov  Mon Apr  9 14:24:46 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 9 Apr 2018 19:24:46 +0000
Subject: [Nek5000-users] Post-process in Matlab

Hi Sula,

What kind of post-processing are you trying to do? Usually it is best to
do the post-processing within Nek itself in userchk, and then write the
data to the logfile so that you can plot it later. The advantage of this
is that you have access to high-order tools such as findpts etc., which
are already tested and scale well (this will also help you in the future
if you move to bigger meshes). For visual post-processing, we have tools
like VisIt and postnek which can read Nek's field files (*f000) and give
you a variety of options for slicing and plotting different quantities.

Ketan
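One concrete way to follow this advice, sketched on the assumption that the
standard history-point utility fits the need (check it against your Nek
version): list the probe coordinates in a file hpts.in (first line the
number of points, then one x y z triple per line) and call hpts() from
userchk. The interpolated values are appended to the case .his file, which
loads easily into Matlab:

      subroutine userchk
      include 'SIZE'
      include 'TOTAL'

c     Interpolate the solution at the points listed in hpts.in and
c     append one row per point to the <casename>.his file.
      call hpts()

      return
      end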
From nek5000-users at lists.mcs.anl.gov  Wed Apr 11 07:08:12 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 11 Apr 2018 12:08:12 +0000
Subject: [Nek5000-users] Change boundary condition type with time?

Hi Neks,

I wonder if it is possible to have boundary conditions that change type
with time. I want to simulate a flow where I make a 'puff' of fluid
through a hole in a wall. After the puff is made, it continues to evolve
without being forced.

Mathematically, this is what I want:

1. Dirichlet boundary conditions (ux,uy,uz) = (0,0,1) for the velocity at
   a part B of the boundary up to a time T.
2. Open boundary condition for the velocity at B for time > T.

Is this possible to implement?

Best,
Johan

From nek5000-users at lists.mcs.anl.gov  Thu Apr 12 02:58:21 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Apr 2018 09:58:21 +0200
Subject: [Nek5000-users] Change boundary condition type with time?

Hi Johan,

Here is a piece of code adapted from one of my simulations that you could
try to include in your userchk.

      include 'TSTEP'   ! time, ifield
      include 'INPUT'   ! cbc

      integer iel, ifc
      integer icalld
      save    icalld
      data    icalld /0/

      if (icalld .eq. 0 .and. ifield .eq. 1 .and. time .gt. T) then
         icalld = 1
         nface  = 2*ndim
         do iel=1,nelv
         do ifc=1,nface
            if ("face belongs to part B") then   ! Identify boundary B here
               cbc(ifc,iel,ifield) = 'O  '       ! or 'o  ' or 'on ' or 'ON ', depending on what is needed
            endif
         enddo
         enddo
c        Reset BC flags and masks
         call setlog
         call bcmask
      endif

Best,
Nicolas

P.S.: please ignore the previous message; that was a wrong manipulation.

On Wed, Apr 11, 2018 at 2:08 PM, wrote:
> Hi Neks,
>
> I wonder if it is possible to have boundary conditions that change type
> with time? [...]

From nek5000-users at lists.mcs.anl.gov  Thu Apr 12 02:48:23 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Apr 2018 09:48:23 +0200
Subject: [Nek5000-users] Change boundary condition type with time?

      include 'TSTEP'   ! time, ifield
      include 'INPUT'   ! cbc

      if (ifield .eq. 1 .and. time .gt. T) then
         nface = 2*ndim
         do iel=1,nelv
         do ifc=1,nface
            if ("face belongs to part B") then   ! You need to identify your boundary B here
               cbc(ifc,iel,ifield) = 'O  '       ! or 'o  ' or 'on ' or 'ON ', depending on what you need there
            endif
         enddo
         enddo
      endif

On Wed, Apr 11, 2018 at 2:08 PM, wrote:
> Hi Neks,
>
> I wonder if it is possible to have boundary conditions that change type
> with time? [...]
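To make the "face belongs to part B" placeholder concrete -- a sketch only,
which assumes part B was originally marked with the Dirichlet string 'v  '
when the case was set up -- the test can key off the existing cbc entry for
the velocity field:

c     Inside the iel/ifc loops above; 'v  ' is the assumed tag of part B.
            if (cbc(ifc,iel,1) .eq. 'v  ') then
               cbc(ifc,iel,ifield) = 'O  '   ! switch this face to an open BC
            endif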
From nek5000-users at lists.mcs.anl.gov  Thu Apr 12 06:15:06 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Apr 2018 11:15:06 +0000
Subject: [Nek5000-users] Change boundary condition type with time?

Dear Johan,

I guess I'm confused about the "open boundary" part of your request. Do
you anticipate flow leaving through this part? Or no flow coming in? The
reason I ask is that it's generally unstable to have any flow coming into
the domain through an open boundary.

Aside from that, there are some technical issues to deal with. You need to
change the boundary conditions, as Nicolas pointed out, but you also need
to recompute the preconditioners that are set up at the beginning of the
computation, because they depend on the type of BC that is employed.

I think the easiest and best approach would be to evolve your initial
condition to the point where you want to change the BC, then stop the
calculation and use the second BC option from that point (i.e., by using a
different rea file for the second part of the computation, or by applying
Nicolas's suggestion in subroutine usrdat2, which is called before the
preconditioners are established).

Note that you can have your inflow BC be time dependent -- flow turned on,
then flow turned off -- which is all Dirichlet, and then you don't need to
start/stop the calculation.

hth Paul

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Wednesday, April 11, 2018 7:08:12 AM
To: nek5000-users at lists.mcs.anl.gov
Subject: [Nek5000-users] Change boundary condition type with time?

Hi Neks,

I wonder if it is possible to have boundary conditions that change type
with time? [...]

From nek5000-users at lists.mcs.anl.gov  Fri Apr 13 01:29:39 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Apr 2018 06:29:39 +0000
Subject: [Nek5000-users] Change boundary condition type with time?

Thank you Nicolas and Paul,

I think I understand your suggestion, Paul, about changing the BCs in the
way Nicolas suggests in usrdat2. But in practice I don't know how to
implement the Boolean "face belongs to part B". Is this done using
cbc(ifc,iel,ifld) == 'v '?

Paul: yes, I was planning to have an inflow through an open boundary. I
was not aware that this may imply instabilities. I want to simulate a flow
resembling a "smoke ring" (the vortex ring puffed out by a cigarette
smoker). What BCs would you suggest? I could imagine that Dirichlet BCs
that change with time could work ...

Best,
Johan

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Thursday, April 12, 2018 1:15:06 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Change boundary condition type with time?

Dear Johan,

I guess I'm confused about the "open boundary" part of your request. [...]
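A sketch of the all-Dirichlet route mentioned at the end of Paul's reply
(the pulse time T0, ramp width eps, and unit puff speed are assumed values,
not from the thread): keep part B a 'v  ' boundary for the whole run and
let userbc taper the puff velocity to zero around t = T0, so no BC type
ever changes:

      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      real T0, eps
      parameter (T0 = 1.0, eps = 0.05)   ! pulse length and ramp width (assumed)

c     (ux,uy,uz) ~ (0,0,1) for time < T0, decaying smoothly to zero
c     afterwards; the tanh ramp avoids an impulsive change in the data.
      ux = 0.0
      uy = 0.0
      uz = 0.5*(1.0 - tanh((time - T0)/eps))

      return
      end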
From nek5000-users at lists.mcs.anl.gov  Fri Apr 13 07:55:21 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Apr 2018 13:55:21 +0100
Subject: [Nek5000-users] pipe flow with pressure gradient fixed

Hi Nek,

Is there a way to impose a fixed pressure gradient from the usr file,
analogous to the way param 54 and param 55 hold a constant flow rate, for
single-phase pipe flow? Take single-phase pipe flow at Re_tau = 360 as an
example.

Best wishes,
Zhai

From nek5000-users at lists.mcs.anl.gov  Fri Apr 13 07:56:47 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Apr 2018 13:56:47 +0100
Subject: [Nek5000-users] pipe flow with pressure gradient fixed

Hi Nek,

Further to my last post: I tried the approach used for channel flow, but
it does not seem to be working, since the velocity comes out lower than
expected.

Best wishes,
Zhai
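For context: the usual fixed-pressure-gradient device is to drive the flow
with the equivalent constant body force in userf rather than an actual
pressure boundary condition. A sketch, assuming a pipe of radius R = 1
aligned with z and a target friction velocity u_tau = 1, for which the mean
momentum balance gives dp/dz = 2*u_tau^2/R:

      subroutine userf (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      real utau, rad
      parameter (utau = 1.0, rad = 1.0)   ! assumed nondimensionalization

c     Constant streamwise body force standing in for the mean pressure
c     gradient; for Re_tau = 360 this pairs with nu = utau*rad/360.
      ffx = 0.0
      ffy = 0.0
      ffz = 2.0*utau*utau/rad

      return
      end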
From nek5000-users at lists.mcs.anl.gov  Mon Apr 16 16:45:41 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 16 Apr 2018 18:45:41 -0300
Subject: [Nek5000-users] Problem on cluster

Dear Nek users,

I'm having a problem when I use the latest version of Nek on an HPC
cluster. I can compile, but when I run my simulations they stop almost
immediately. The logfile of a generic case looks like the following:

===========================================================
/----------------------------------------------------------\
|      _   __ ______ __ __  ______  ____   ____   ____     |
|     / | / // ____// //_/ / ____/ / __ \/ __ \/ __ \      |
|    /  |/ // __/  / ,<   /___ \  / / / // / / // / / /    |
|   / /| // /___  / /| | ____/ / / /_/ // /_/ // /_/ /     |
|  /_/ |_//_____//_/ |_|/_____/  \___/  \___/  \___/       |
|                                                          |
|----------------------------------------------------------|
|                                                          |
| NEK5000:  Open Source Spectral Element Solver            |
| COPYRIGHT (c) 2008-2017 UCHICAGO ARGONNE, LLC            |
| Version:  17.0-rc1                                       |
| Web:      http://nek5000.mcs.anl.gov                     |
|                                                          |
\----------------------------------------------------------/

 Number of processors:          80
 REAL    wdsize      :           8
 INTEGER wdsize      :           4
 Timer accuracy      : 0.00E+00

 Reading /home/jrobinson/casos/Placa_6/Placa_6.rea
 Reading /home/jrobinson/casos/Placa_6/Placa_6.re2
 mapping elements to processors
 Reading /home/jrobinson/casos/Placa_6/Placa_6.ma2

 RANK     0 IEG   1754 1755 1756 1757 1758 1759 1760 1774 1775 1776 1777
   1778 1779 1780 1794 1795 1796 1797 1798 1799 1800 1814 1815 1816 1817
   1818 1819 1820 1834 1835 1836 1837 1838 1839 1840 1853 1854 1855 1856
   1857 1858 1859 1860 1873 1874 1875 1876 1877 1878 1879 1880 1893 1894
   1895 1896 1897 1898 1899 1913 1914 1915 1916 1917 1918 1919 1933 1934
   1935 1936 1937 1938 1939 1953 1954 1955 1956 1957 1958 1974 1975 1976
   1977 1978 1994 1995 1996 1997 1998 1999 2000 2014 2015 2016 2017 2018
   2019 2020 9783 9784 9785 9786 9787 9788 9789 9790 9791 9792 9793 9794
   9795 9796 9797 9798 9799 9800 9801 9802 9803 9804 9805 9806 9807 9808
   9809 9810 9811 9812 9813 9814 9815 9816 9817 9818 9819 9820 9821 9822
   9823 9824 9825 9826 9827 9828 9829 9830 9849 9855 9856 9861 9862

 element load imbalance:            1         150         151
 done :: mapping   0.32155     sec

 preading mesh
=============================================================

So the last line is "preading mesh". This doesn't give much information,
but the cluster generates a file with the following errors (at the end of
this text). When I use an old version of Nek on this same cluster, I have
no problem running my cases. The problem is that I need to use the latest
version because I'm using the exo2nek routine for my meshes generated with
Trelis (Cubit).

Any idea of what I could do? Thank you all.

Juan Pablo.

=========================================================
This requires fcntl(2) to be implemented. As of 8/25/2011 it is not.
Generic MPICH Message: File locking failed in ADIOI_Set_lock(fd 4A,cmd
F_SETLKW/7,type F_RDLCK/0,whence 0) with return value FFFFFFFF and errno 26.
- If the file system is NFS, you need to use NFS version 3, ensure that
  the lockd daemon is running on all the machines, and mount the directory
  with the 'noac' option (no attribute caching).
- If the file system is LUSTRE, ensure that the directory is mounted with
  the 'flock' option.
ADIOI_Set_lock:: Function not implemented
ADIOI_Set_lock:offset 84, length 10800
[The same fcntl(2)/ADIOI_Set_lock failure and MPI_Abort message repeat for
each of the other MPI ranks (processes 0-39), differing only in file
descriptor, offset, and process number; the repetitions are elided.]
slurmstepd: error: *** STEP 11073373.0 ON leftraru1 CANCELLED AT 2018-04-16T18:38:42 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: leftraru1: tasks 0-19: Killed
srun: Terminating job step 11073373.0
srun: error: leftraru4: tasks 60-79: Killed
srun: error: leftraru3: tasks 40-59: Killed
srun: error: leftraru2: tasks 20-39: Killed

From nek5000-users at lists.mcs.anl.gov  Mon Apr 16 18:38:21 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 01:38:21 +0200
Subject: [Nek5000-users] Problem on cluster

Please use the release tarball instead of the GitHub master!

The error message suggests that your MPI installation is outdated -> some
necessary MPI-IO features are missing. I think updating MPI will do the
trick.

Stefan

On 16 Apr 2018, at 17:46, nek5000-users at lists.mcs.anl.gov wrote:
> Dear Nek users,
>
> I'm having a problem when I use the latest version of Nek on an HPC
> cluster. I can compile, but when I run my simulations they stop almost
> immediately. [...]
From nek5000-users at lists.mcs.anl.gov  Mon Apr 16 18:48:55 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 01:48:55 +0200
Subject: [Nek5000-users] Problem on cluster

Before updating, can you please check if the following advice helps:

   If the file system is LUSTRE, ensure that the directory is mounted
   with the 'flock' option.

Stefan

On 16 Apr 2018, at 19:39, nek5000-users at lists.mcs.anl.gov wrote:
> Please use the release tarball instead of the GitHub master!
>
> The error message suggests that your MPI installation is outdated -> some
> necessary MPI-IO features are missing. I think updating MPI will do the
> trick. [...]
? :? ? ? ? ? ?4 ?Timer accuracy? ? ? : 0.00E+00 ???Reading /home/jrobinson/casos/Placa_6/Placa_6.rea? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Reading /home/jrobinson/casos/Placa_6/Placa_6.re2? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?mapping elements to processors ?Reading /home/jrobinson/casos/Placa_6/Placa_6.ma2? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?RANK? ? ?0 IEG? ? 1754? ? 1755? ? 1756? ? 1757? ? 1758? ? 1759? ? 1760? ? 1774 ? ? ? ? ? ? ? ? ? ?1775? ? 1776? ? 1777? ? 1778? ? 1779? ? 1780? ? 1794? ? 1795 ? ? ? ? ? ? ? ? ? ?1796? ? 1797? ? 1798? ? 1799? ? 1800? ? 1814? ? 1815? ? 1816 ? ? ? ? ? ? ? ? ? ?1817? ? 1818? ? 1819? ? 1820? ? 1834? ? 1835? ? 1836? ? 1837 ? ? ? ? ? ? ? ? ? ?1838? ? 1839? ? 1840? ? 1853? ? 1854? ? 1855? ? 1856? ? 1857 ? ? ? ? ? ? ? ? ? ?1858? ? 1859? ? 1860? ? 1873? ? 1874? ? 1875? ? 1876? ? 1877 ? ? ? ? ? ? ? ? ? ?1878? ? 1879? ? 1880? ? 1893? ? 1894? ? 1895? ? 1896? ? 1897 ? ? ? ? ? ? ? ? ? ?1898? ? 1899? ? 1913? ? 1914? ? 1915? ? 1916? ? 1917? ? 1918 ? ? ? ? ? ? ? ? ? ?1919? ? 1933? ? 1934? ? 1935? ? 1936? ? 1937? ? 1938? ? 1939 ? ? ? ? ? ? ? ? ? ?1953? ? 1954? ? 1955? ? 1956? ? 1957? ? 1958? ? 1974? ? 1975 ? ? ? ? ? ? ? ? ? ?1976? ? 1977? ? 1978? ? 1994? ? 1995? ? 1996? ? 1997? ? 1998 ? ? ? ? ? ? ? ? ? ?1999? ? 2000? ? 2014? ? 2015? ? 2016? ? 2017? ? 2018? ? 2019 ? ? ? ? ? ? ? ? ? ?2020? ? 9783? ? 9784? ? 9785? ? 9786? ? 9787? ? 9788? ? 9789 ? ? ? ? ? ? ? ? ? ?9790? ? 9791? ? 9792? ? 9793? ? 9794? ? 9795? ? 9796? ? 9797 ? ? ? ? ? ? ? ? ? ?9798? ? 9799? ? 9800? ? 9801? ? 9802? ? 9803? ? 9804? ? 9805 ? ? ? ? ? ? ? ? ? ?9806? ? 9807? ? 9808? ? 9809? ? 9810? ? 9811? ? 9812? ? 9813 ? ? ? ? ? ? ? ? ? ?9814? ? 9815? ? 9816? ? 9817? ? 9818? ? 9819? ? 9820? ? 9821 ? ? ? ? ? ? ? ? ? ?9822? ? 9823? ? 9824? ? 9825? ? 9826? ? 9827? ? 9828? ? 9829 ? ? ? ? ? ? ? ? ? ?9830? ? 9849? ? 9855? ? 9856? ? 9861? ? 9862 ???element load imbalance:? ? ? ? ? ? 1? ? ? ? ?150? ? ? ? ?151 ?done :: mapping? ?0.32155? ? ?sec ??? preading mesh? ============================================================= So the last line is "preading mesh". This doesn't give too much information, but the cluster generates a file with the following errors (at the end of this text). When I use an old version of Nek on this same cluster, I have no problem running my cases. The problem is that I need to use the latest version because I'm using exo2nek routine for my meshes generated with Trelis (Cubit). Any idea of what could I do? Thank you all. Juan Pablo. ========================================================= This requires fcntl(2) to be implemented. As of 8/25/2011 it is not. Generic MPICH Message: File locking failed in ADIOI_Set_lock(fd 4A,cmd F_SETLKW/7,type F_RDLCK/0,whence 0) with return value FFFFFFFF and errno 26. - If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the 'noac' option (no attribute caching). - If the file system is LUSTRE, ensure that the directory is mounted with the 'flock' option. ADIOI_Set_lock:: Function not implemented _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... 
From nek5000-users at lists.mcs.anl.gov Mon Apr 16 20:17:51 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 16 Apr 2018 22:17:51 -0300
Subject: [Nek5000-users] Problem on cluster
Message-ID:

Using either the release or the master I have that problem.

I'm guessing it is something I should tell the administrators of the cluster, but I'm not sure what to say exactly.

Juan Pablo.

From nek5000-users at lists.mcs.anl.gov Mon Apr 16 20:22:40 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 01:22:40 +0000
Subject: [Nek5000-users] Problem on cluster
In-Reply-To: References: Message-ID:

Hi Juan Pablo,

Are you using the genmap command before compiling the case?

On Mon, Apr 16, 2018 at 9:18 PM wrote:

> Using either the release or the master I have that problem.
>
> I'm guessing it is something I should tell the administrators of the
> cluster, but I'm not sure what to say exactly.
>
> Juan Pablo.

--
Amitvikram Dutta
Graduate Research Assistant
Fluid Mechanics Research Lab
Multi-Physics Interaction Lab
University of Waterloo

From nek5000-users at lists.mcs.anl.gov Mon Apr 16 18:12:51 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 01:12:51 +0200
Subject: [Nek5000-users] Problem on cluster
In-Reply-To: References: Message-ID:

Looks like you are using an outdated MPI implementation. Can you try again with a more recent version?

On 16 Apr 2018, at 17:46, "nek5000-users at lists.mcs.anl.gov" wrote:

> Dear Nek users,
>
> I'm having a problem when I use the latest version of Nek on an HPC
> cluster. I can compile, but when I run my simulations they terminate
> prematurely. The last line of the logfile is "preading mesh", and the
> cluster generates a file with the following errors:
This requires fcntl(2) to be implemented. As of 8/25/2011 it is not.
Generic MPICH Message: File locking failed in ADIOI_Set_lock(fd 4A,cmd F_SETLKW/7,type F_RDLCK/0,whence 0) with return value FFFFFFFF and errno 26.
- If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the 'noac' option (no attribute caching).
- If the file system is LUSTRE, ensure that the directory is mounted with the 'flock' option.
ADIOI_Set_lock:: Function not implemented
ADIOI_Set_lock:offset 84, length 10800
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
In: PMI_Abort(1, application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3)

[the same ADIOI_Set_lock / MPI_Abort block repeats for each of the remaining ranks]

slurmstepd: error: *** STEP 11073373.0 ON leftraru1 CANCELLED AT 2018-04-16T18:38:42 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: leftraru1: tasks 0-19: Killed
srun: Terminating job step 11073373.0
srun: error: leftraru4: tasks 60-79: Killed
srun: error: leftraru3: tasks 40-59: Killed
srun: error: leftraru2: tasks 20-39: Killed

From nek5000-users at lists.mcs.anl.gov Mon Apr 16 20:03:00 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 01:03:00 +0000
Subject: [Nek5000-users] Problem on cluster
In-Reply-To: References: Message-ID:

Stefan,

Is there a way to run this without the flock option?  For a variety of reasons we don't support it on our system and I have been trying to run nek but having the same issue.  Are there any compile flags that I can use or a different version of mpi perhaps?  Recommendations welcome.

Thanks,
   Julie

From: Nek5000-users on behalf of "nek5000-users at lists.mcs.anl.gov"
Reply-To: "nek5000-users at lists.mcs.anl.gov"
Date: Monday, April 16, 2018 at 7:49 PM
To: "nek5000-users at lists.mcs.anl.gov"
Subject: Re: [Nek5000-users] Problem on cluster

> Before updating, can you please check if the following advice helps:
>
> If the file system is LUSTRE, ensure that the directory is mounted with
> the 'flock' option
>
> Stefan
From nek5000-users at lists.mcs.anl.gov Mon Apr 16 20:57:04 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 03:57:04 +0200
Subject: [Nek5000-users] Problem on cluster
In-Reply-To: References: Message-ID:

There is a NOMPIIO option.

On 16 Apr 2018, at 21:24, "nek5000-users at lists.mcs.anl.gov" wrote:

> Stefan,
>
> Is there a way to run this without the flock option?  For a variety of
> reasons we don't support it on our system and I have been trying to run
> nek but having the same issue.  Are there any compile flags that I can
> use or a different version of mpi perhaps?  Recommendations welcome.
>
> Thanks,
>    Julie
From nek5000-users at lists.mcs.anl.gov Tue Apr 17 10:33:27 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 12:33:27 -0300
Subject: [Nek5000-users] Problem on cluster
Message-ID:

Amitvikram, yes, I'm using genmap correctly.

I was able to solve the problem by writing to the cluster administrators. They told me that I should send the batch from a directory where the flock flag is active:

*/mnt/flock*/mydirectory.......

Thanks for the answers!

Juan Pablo.

From nek5000-users at lists.mcs.anl.gov Tue Apr 17 11:15:19 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 17:15:19 +0100
Subject: [Nek5000-users] dissipation rate
Message-ID:

Hi Nek,

Is there a subroutine in Nek with which I can get the turbulent dissipation rate at a position (x,y,z), with an interpolation similar to the interp_v routine below?
      subroutine interp_v(uvw,xyz,n)
c
c     evaluate velocity for list of points xyz
c
      include 'SIZE'
      include 'TOTAL'

      real uvw(ldim,n),xyz(ldim,n)

      parameter(nmax=lpart,nfldmx=ldim)

      common /rwk_intp/
     $       fwrk(lx1*ly1*lz1*lelt,nfldmx),
     $       rwk(nmax,ldim+1),
     $       fpts(ldim*nmax),
     $       pts(ldim*nmax)
      common /iwk_intp/
     $       iwk(nmax,3)

      integer icalld,e
      save    icalld
      data    icalld /0/

      nxyz  = nx1*ny1*nz1
      ntot  = nxyz*nelt

      if (n.gt.nmax) call exitti ('n > nmax in interp_v!$',n)

      if (nelgt.ne.nelgv) call exitti
     $   ('nelgt.ne.nelgv not yet supported in interp_v!$',nelgv)

      do i=1,n                          ! points not moving -> save once
         pts(i)     = xyz(1,i)
         pts(i + n) = xyz(2,i)
         if (if3d) pts(i + n*2) = xyz(3,i)
      enddo

      if (icalld.eq.0) then             ! interpolation setup
        icalld = 1
        tolin  = 1.e-8
        call intp_setup(tolin)
      endif

c     pack working array
      call opcopy(fwrk(1,1),fwrk(1,2),fwrk(1,3),vx,vy,vz)

c     interpolate
      call intp_do(fpts,fwrk,ndim,
     $             pts(1),pts(1+n),pts(2*n+1),n,
     $             iwk,rwk,nmax,.true.)

      do i=1,n
         uvw(1,i) = fpts(i)
         uvw(2,i) = fpts(i + n)
         if(if3d) uvw(3,i) = fpts(i + n*2)
      enddo

      return
      end

Kind regards,
Jian
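Jian's question comes up again below; one plausible route, sketched here rather than taken from the Nek5000 source, is to build the dissipation field on the mesh first and then push it through the same intp_setup/intp_do machinery that interp_v uses. The sketch computes the pseudo-dissipation eps = nu*(du_i/dx_j)(du_i/dx_j) with gradm1; the routine name interp_eps, the scratch commons, and the assumption that param(2) holds the kinematic viscosity are all illustrative, not part of the repository.

      subroutine interp_eps(epsl,xyz,n)
c     Sketch only (not in the Nek5000 source): pseudo-dissipation
c     eps = nu * (du_i/dx_j)(du_i/dx_j), evaluated at a list of points
c     with the same intp_setup/intp_do calls as interp_v above.
      include 'SIZE'
      include 'TOTAL'
      real epsl(n),xyz(ldim,n)
      parameter(nmax=lpart)
      common /rwk_ieps/ diss(lx1*ly1*lz1*lelt),
     $                  dfx (lx1*ly1*lz1*lelt),
     $                  dfy (lx1*ly1*lz1*lelt),
     $                  dfz (lx1*ly1*lz1*lelt),
     $                  rwk (nmax,ldim+1),
     $                  pts (ldim*nmax)
      common /iwk_ieps/ iwk(nmax,3)
      integer icalld
      save    icalld
      data    icalld /0/

      ntot = lx1*ly1*lz1*nelv
      visc = param(2)              ! assumption: nu is stored in param(2)

      call rzero(diss,ntot)        ! accumulate (du_i/dx_j)^2 over i
      do i=1,ldim
         if (i.eq.1) call gradm1(dfx,dfy,dfz,vx)
         if (i.eq.2) call gradm1(dfx,dfy,dfz,vy)
         if (i.eq.3) call gradm1(dfx,dfy,dfz,vz)
         do k=1,ntot
            diss(k) = diss(k) + dfx(k)**2 + dfy(k)**2
            if (if3d) diss(k) = diss(k) + dfz(k)**2
         enddo
      enddo
      call cmult(diss,visc,ntot)   ! eps = nu * sum of squares

      if (n.gt.nmax) call exitti ('n > nmax in interp_eps!$',n)
      do i=1,n                     ! pack the point list as interp_v does
         pts(i)     = xyz(1,i)
         pts(i + n) = xyz(2,i)
         if (if3d) pts(i + n*2) = xyz(3,i)
      enddo

      if (icalld.eq.0) then        ! one-time interpolation setup
         icalld = 1
         call intp_setup(1.e-8)
      endif
      call intp_do(epsl,diss,1,    ! a single scalar field
     $             pts(1),pts(1+n),pts(2*n+1),n,
     $             iwk,rwk,nmax,.true.)

      return
      end

Pseudo-dissipation keeps the sketch short; the full rate 2*nu*s_ij*s_ij would need the symmetric part of the velocity gradient instead, although the volume averages of the two coincide for homogeneous turbulence.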
From nek5000-users at lists.mcs.anl.gov Tue Apr 17 13:37:18 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 17 Apr 2018 14:37:18 -0400
Subject: [Nek5000-users] Post-process in Matlab
Message-ID:

Hi Neks,

I have a 3D geometry that I want to post-process in Matlab. I saw the post "Read Nek output in Matlab", but I'm not sure if it can post-process 3D data. Any help will be appreciated.

The mesh consists of 22,000 elements, N=7.

Thanks,
Sula.

From nek5000-users at lists.mcs.anl.gov Wed Apr 18 03:32:26 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 18 Apr 2018 09:32:26 +0100
Subject: [Nek5000-users] dissipation rate
Message-ID:

Hi Nek,

Does anyone know how to interpolate the dissipation rate in Nek? I think it should be similar to subroutine interp_v(uvw,xyz,n), but I don't know how to do it in detail.

Kind regards,
Jian

From nek5000-users at lists.mcs.anl.gov Wed Apr 18 03:34:05 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 18 Apr 2018 09:34:05 +0100
Subject: [Nek5000-users] pipe flow
Message-ID:

Dear Nekers,

I posted last time regarding the pressure gradient in pipe flow. Does anyone know how to run a single-phase pipe flow keeping the pressure gradient fixed rather than the flow rate?

Best wishes,
Zhai

From nek5000-users at lists.mcs.anl.gov Wed Apr 18 09:08:21 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 18 Apr 2018 11:08:21 -0300
Subject: [Nek5000-users] Postpro, averages and fluctuations
Message-ID:

Hi everyone,

I've been collecting information in the mailing list about these topics, but I still have some doubts. After I finish my 3D simulations I'm interested in calculating the average fields of u, v, w and T, and also the instantaneous fluctuation fields, everything in a specific plane.

So what I should do is to incorporate avg_all in userchk, specifying a frequency as param(68). Then, when the simulation ends, I should run a case in post-processing mode (that is, setting nsteps=0), create a file with the averages' filenames called case.list, and in userchk call auto_averager. Is that going to create different files with the final averages, one for each field? Or is it going to load the results into Nek, so that I should outpost them?

Then, for the instantaneous fluctuations, I should load all my simulation results calling load_fld, and subtract from them the average I just obtained, for each field. But I'm not sure how to do this. I read that when load_fld is called, the variables are set in vx, vy, vz, T, right? But since I have several case0.f000X files, should I load them one by one, do the subtraction, outpost the fluctuation, and so on?

Thank you very much in advance!

Juan Pablo.
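For the fluctuation part of the question, a minimal sketch of the load/subtract/outpost loop is given below. It is not an official recipe: the average-file name avgcase0.f00001, the snapshot naming case0.fNNNNN, and the 'flu' prefix are assumptions to be adapted, and load_fld is taken to overwrite vx, vy, vz, pr and t as the post above suggests.

      subroutine write_fluct(nfiles)
c     Sketch only: subtract a stored time-average from each snapshot
c     and outpost the fluctuations with prefix 'flu'.  All file names
c     are assumptions; adjust them to your case.
      include 'SIZE'
      include 'TOTAL'
      integer nfiles
      common /avgfld/ ua(lx1*ly1*lz1*lelt),va(lx1*ly1*lz1*lelt),
     $                wa(lx1*ly1*lz1*lelt),ta(lx1*ly1*lz1*lelt)
      character*132 fname

      ntot = lx1*ly1*lz1*nelt

      call load_fld('avgcase0.f00001')     ! assumed average file name
      call opcopy(ua,va,wa,vx,vy,vz)       ! stash the averages
      call copy  (ta,t,ntot)

      do i=1,nfiles                        ! loop over snapshots
         write(fname,1) i
    1    format('case0.f',i5.5)            ! assumed snapshot naming
         call load_fld(fname)              ! fills vx,vy,vz,pr,t
         call opsub2(vx,vy,vz,ua,va,wa)    ! u' = u - <u>
         call sub2  (t,ta,ntot)            ! T' = T - <T>
         call outpost(vx,vy,vz,pr,t,'flu')
      enddo

      return
      end

Called once from userchk in a post-processing run (nsteps=0), this writes one fluctuation file per snapshot; outpost prepends the prefix to the case name, the same mechanism avg_all uses for its avg/rms/rm2 files.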
From nek5000-users at lists.mcs.anl.gov  Sat Apr 21 22:37:09 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 22 Apr 2018 03:37:09 +0000
Subject: [Nek5000-users] MAXNEL change and compiling
Message-ID:

Hi all,

I'm looking to simulate a problem with 287,000 cells. On trying to run
genmap, the following error keeps showing up:

   using default value
   reading mesh data ...
   Abort: number of elements too large      287000
   change MAXNEL and recompile

I changed the MAXNEL flag (in both places) in the maketools file to 300000
and ran ./maketools all. The problem however still crops up as before. Are
there some additional steps I need to take?

Sincerely,
--
Amitvikram Dutta
Graduate Research Assistant
Fluid Mechanics Research Lab
Multi-Physics Interaction Lab
University of Waterloo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Sun Apr 22 06:04:21 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 22 Apr 2018 13:04:21 +0200
Subject: [Nek5000-users] MAXNEL change and compiling
In-Reply-To:
References:
Message-ID:

Did you clean and recompile?

On 22 Apr 2018, at 05:37, "nek5000-users at lists.mcs.anl.gov" wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Sun Apr 22 07:12:54 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 22 Apr 2018 12:12:54 +0000
Subject: [Nek5000-users] MAXNEL change and compiling
In-Reply-To:
References:
Message-ID:

No. Should I run ./maketools clean and then ./maketools all?

On Sun, Apr 22, 2018, 7:04 AM wrote:
> [...]

--
Amitvikram Dutta
Graduate Research Assistant
Fluid Mechanics Research Lab
Multi-Physics Interaction Lab
University of Waterloo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nek5000-users at lists.mcs.anl.gov  Wed Apr 25 10:12:48 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 25 Apr 2018 17:12:48 +0200
Subject: [Nek5000-users] Twitter #nek5000
Message-ID:

Dear NekCommunity,

In Aug 2017 we launched @Nek5000 on Twitter! Just give it a shot and see
what's happening: https://twitter.com/Nek5000

Please feel free to share your Nek related work (projects, simulations,
developments, publications, events, ...) using the hashtag #nek5000.

Cheers,
Stefan

From nek5000-users at lists.mcs.anl.gov  Thu Apr 26 15:19:25 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Apr 2018 21:19:25 +0100
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

Hi all,

This is my second day with Nek5000. Day one was spent going through the
process of setting up simulations, with the example eddy case and the
periodic hill case running fine on a single core of a local workstation. I
was hoping to run the 3D Taylor-Green vortex case (described in
https://www.grc.nasa.gov/hiocfd/wp-content/uploads/sites/22/C3.3_Twente.pdf)
very quickly to compare the CPU wall-clock time with OpenFOAM. Ideally I'd
like to run a 16^3-element cubical box [-pi,pi]^3 with polynomial order 4,
giving 64^3 degrees of freedom. I'm targeting 28-84 cores. The boundary
conditions of the case are triply periodic and the initial conditions are
defined as trigonometric expressions of space. I didn't know how to write
expressions such as sin(x)cos(y)sin(z) in Nek5000, so I tried the initial
condition uy=uz=0 and ux=1 instead. I keep getting the following error with
my set up:

"""
makenek - automatic build tool for Nek5000
generating makefile ...
build 3rd-party software ...
'tgv.usr' -> 'tgv.f'
mpif77 -c -O2 -fdefault-real-8 -fdefault-double-8 -cpp -DMPI -DUNDERSCORE -DTIMER -I/gpfs/home/lboro/ttvs3/Nek5000/run/tgv -I/gpfs/home/lboro/ttvs3/Nek5000/core -I./ /gpfs/home/lboro/ttvs3/Nek5000/run/tgv/tgv.f -o obj/tgv.o
mpif77 -O2 -fdefault-real-8 -fdefault-double-8 -cpp -DMPI -DUNDERSCORE -DTIMER -I/gpfs/home/lboro/ttvs3/Nek5000/run/tgv -I/gpfs/home/lboro/ttvs3/Nek5000/core -I./ -o nek5000 obj/tgv.o obj/drive.o obj/drive1.o obj/drive2.o obj/plan5.o obj/plan4.o obj/bdry.o obj/coef.o obj/conduct.o obj/connect1.o obj/connect2.o obj/dssum.o obj/edgec.o obj/eigsolv.o obj/gauss.o obj/genxyz.o obj/navier1.o obj/makeq.o obj/navier0.o obj/navier2.o obj/navier3.o obj/navier4.o obj/prepost.o obj/speclib.o obj/map2.o obj/mvmesh.o obj/ic.o obj/gfldr.o obj/ssolv.o obj/planx.o obj/math.o obj/mxm_wrapper.o obj/hmholtz.o obj/gfdm_par.o obj/gfdm_op.o obj/gfdm_solve.o obj/subs1.o obj/subs2.o obj/genbox.o obj/gmres.o obj/hsmg.o obj/convect.o obj/induct.o obj/perturb.o obj/navier5.o obj/navier6.o obj/navier7.o obj/navier8.o obj/fast3d.o obj/fasts.o obj/calcz.o obj/byte.o obj/chelpers.o obj/byte_mpi.o obj/postpro.o obj/intp_usr.o obj/cvode_driver.o obj/nek_comm.o obj/vprops.o obj/makeq_aux.o obj/papi.o obj/nek_in_situ.o obj/reader_rea.o obj/reader_par.o obj/reader_re2.o obj/finiparser.o obj/iniparser.o obj/dictionary.o obj/hpf.o obj/mxm_std.o obj/comm_mpi.o obj/singlmesh.o obj/blas.o obj/dsygv.o -L/gpfs/home/lboro/ttvs3/Nek5000/3rd_party/gslib/lib -lgs -L/trinity/shared/apps/local/bzip2/1.0.6/1/lib -I/trinity/shared/apps/local/tk/8.5.13/1/lib -I/trinity/shared/apps/local/tcl/8.5.13/1/lib -L/trinity/shared/apps/local/zlib/1.2.11/1/lib
obj/drive2.o: In function `dofcnt_':
drive2.f:(.text+0x8d50): relocation truncated to fit: R_X86_64_PC32 against symbol `scrns_' defined in COMMON section in obj/convect.o
obj/plan5.o: In function `midstep_':
plan5.f:(.text+0x186): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan5.f:(.text+0x379): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan5.f:(.text+0x391): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan5.f:(.text+0x3c9): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
obj/plan4.o: In function `crespsp_':
plan4.f:(.text+0x9db): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan4.f:(.text+0xb1a): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan4.f:(.text+0xc79): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan4.f:(.text+0xd7a): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan4.f:(.text+0xd90): relocation truncated to fit: R_X86_64_32S against symbol `scrns_' defined in COMMON section in obj/convect.o
plan4.f:(.text+0xda3): additional relocation overflows omitted from the output
collect2: error: ld returned 1 exit status
make: *** [nek5000] Error 1
"""

In your experience, is the error related to my compiler/MPI or to the
set-up files? If the latter, could anyone be kind enough to check my set-up
files? If you want to save time and have done this case before, you could
share those files. Any input is highly appreciated.

PS - Thanks Prof. Fischer for your reply. Hope this one is clearer.

Cheers,
Vishal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
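[On the side question of writing sin(x)cos(y)sin(z)-type expressions raised
above: initial conditions are set pointwise in useric, where the
coordinates x, y, z and the components ux, uy, uz are available through
NEKUSE. A sketch with the standard Taylor-Green field follows; the field
itself is an assumption based on the cited test case, not on this thread.]

      subroutine useric(ix,iy,iz,ieg)
      include 'SIZE'
      include 'NEKUSE'
      integer ix,iy,iz,ieg

c     Taylor-Green vortex initial condition on the [-pi,pi]^3 box.
      ux   =  sin(x)*cos(y)*cos(z)
      uy   = -cos(x)*sin(y)*cos(z)
      uz   =  0.0
      temp =  0.0

      return
      end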
From nek5000-users at lists.mcs.anl.gov  Thu Apr 26 17:11:55 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Apr 2018 22:11:55 +0000
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

Vishal,

This is answered in the FAQ: https://nek5000.github.io/NekDoc/faq.html

-Dillon

From: Nek5000-users [mailto:nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov
Sent: Thursday, April 26, 2018 3:19 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nek5000-users at lists.mcs.anl.gov  Thu Apr 26 17:13:20 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Apr 2018 22:13:20 +0000
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

Hi Vishal,

You could try reducing lelt in the SIZE file. If you are planning to use 32
MPI ranks, lelt can be as small as 16^3/32 = 128.

Regards,
Li

________________________________
From: Nek5000-users [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Thursday, April 26, 2018 15:19
To: nek5000-users at lists.mcs.anl.gov
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
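[In the SIZE file that corresponds to lines like the following; the values
follow Li's 16^3-element, 32-rank example, and the exact layout of SIZE
varies between Nek5000 versions.]

      parameter (lelg=4096)    ! max total number of elements (16^3)
      parameter (lelt=128)     ! max elements per MPI rank (>= lelg/np)
      parameter (lpmin=32)     ! smallest allowed MPI rank count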
From nek5000-users at lists.mcs.anl.gov  Thu Apr 26 23:07:02 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 04:07:02 +0000
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
Message-ID:

Dear Nek users,

I ran into a problem when I tried to realize a 2D finite-amplitude
disturbance in plane Poiseuille flow (based on the os7000 example). The
code compiled successfully and the initial conditions (base flow +
disturbance) were all good (I verified the profiles against the
literature). Both the base flow (u = 1-y^2) and the perturbation are set in
the useric subroutine. Periodic boundary conditions are employed and one
wavelength is simulated.

However, after the simulation starts, the perturbations do not evolve
correctly; the wave shape along the streamwise direction is distorted,
especially during the first few time steps (in both ux' and uy'). After
that, it recovers somewhat but never returns to the clean shape reported in
the literature at later times. This yields a significant discrepancy with
the available literature. I wonder if this could be due to nonlinear
effects. Could anyone help me with this, please?

Regards,
Emily
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 02:00:18 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 09:00:18 +0200
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

See NekDoc FAQs

On 26 Apr 2018, at 23:37, "nek5000-users at lists.mcs.anl.gov" wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 03:11:24 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 10:11:24 +0200 (CEST)
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
Message-ID:

Dear Emily,

How large is the amplitude of your initial perturbation? If it is smaller
than 10^-3, it is unlikely to come from nonlinear effects. I don't know how
you solve the Orr-Sommerfeld (OS) equation, but the Reynolds number in the
OS equation is generally based on the centerline velocity, while the
Reynolds number in Nek for channel flow is generally based on the mean flow
velocity. If the wave is distorted, you should perhaps check the shape of
the disturbance and the streamwise wave number alpha (which determines the
length of the computational domain). Another reminder: you need to run this
case in the temporal mode of the OS equation if you use periodic boundary
conditions.

Kind regards,

Zhenrong JING
Doctorant (PhD candidate)
LHEEA
Ecole Centrale de Nantes
1 Rue de la Noë, 44321 Nantes, France
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
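[To make the wave-number check above concrete: with one wavelength in a
streamwise-periodic box, the domain length and alpha are tied by

   Lx = 2*pi/alpha

so, for example, the commonly used alpha = 1 requires Lx = 2*pi ~ 6.2832 in
the same (half-channel-height) units; a mismatch here imposes a different
alpha than intended.]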
From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 06:50:55 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 11:50:55 +0000
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
In-Reply-To:
References:
Message-ID:

Dear Emily,

There could be several reasons for the discrepancy. It could be a question
of spatial and/or temporal resolution, or it could be because these
amplitudes are finite, whereas the literature values are for the
infinitesimal case.

This is a challenging problem on the Nek side because there is a balance
between having a very small perturbation - to be like linear theory - and
having enough signal to overcome noise, which is at 1.e-16 plus a bit more
because of the iterative solver tolerances. I tried to pick a perturbation
amplitude that would allow us to be as close as possible to linear theory,
as measured in the growth rate. It is certainly true that Nek will evolve
the SEM solution, and not the analytical eigenfunction.

Hope this helps, and I will be curious about what you find out.

Thanks! Paul

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Thursday, April 26, 2018 11:07:02 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 07:04:57 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 12:04:57 +0000
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
In-Reply-To:
References:
Message-ID:

As a follow-up, I recompiled with lx1=18, with a dt of 1/2 the size, and
IFCHAR=F, for which you will find much closer agreement with linear theory.
I have not explored all the possible parameter options - e.g., changing
just dt, changing just lx1, changing just IFCHAR - but it's clear that the
default settings are not at the limit of what we can realize with the code.

hth,
Paul

________________________________
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
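[Schematically, the settings Paul mentions map to the following; the SIZE
line is the compile-time one, while dt and IFCHAR are runtime settings in
the .rea/.par file and are therefore shown only as comments.]

      parameter (lx1=18)   ! velocity polynomial order N = lx1-1 = 17
c     In the .rea/.par file: halve DT and set IFCHAR = F
c     (characteristics time stepping turned off).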
From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 10:34:51 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 27 Apr 2018 15:34:51 +0000
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
Message-ID:

Dear all,

First of all, thank you for your help and feedback.

The amplitude of ux' in my case is 1.82E-1, superimposed on the base flow
u = 1 - y**2 along with uy' (this setting is identical to the literature
paper). BDF3 time integration is employed. The polynomial order is 8, with
4 elements in the streamwise direction and 12 elements in the wall-normal
direction; one wavelength is simulated in the channel. The default setting
of os7000 is what I started from. I have also tried increasing the spatial
resolution and reducing the time step (dt = 0.05, 0.01, 0.005), but that
did not really change anything. The distortion is less obvious if the
magnitude of the perturbation is decreased, and it disappears entirely once
the amplitude is as small as 1.82E-3.

Again, thank you for the help; I will further explore the parameters and
see what I can find.

Regards,
Emily
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Fri Apr 27 20:20:18 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sat, 28 Apr 2018 01:20:18 +0000
Subject: [Nek5000-users] Finite-amplitude disturbance based on os7000
In-Reply-To:
References:
Message-ID:

Dear Emily,

At that amplitude you are seeing nonlinear effects - this is as it should
be. If you wish to compare against linear theory, then the amplitude needs
to be small enough that the u.grad u term is quite small.

I'm not certain of your ultimate objective - if you're trying to measure
convergence, then a small amplitude is important. If you are interested in
large-amplitude effects, then you won't be able to compare with linear
theory results.

hth,
Paul

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Friday, April 27, 2018 10:34:51 AM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Finite-amplitude disturbance based on os7000
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
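[A rough scaling consistent with these observations, as a back-of-the-
envelope check: for a perturbation of amplitude eps riding on an O(1) base
flow, the nonlinear term u'.grad u' scales like eps^2 while the linear
terms scale like eps, so the relative departure from linear theory is
O(eps):

   eps = 1.82E-1  ->  nonlinear/linear ~ 18%   (visible distortion)
   eps = 1.82E-3  ->  nonlinear/linear ~ 0.2%  (distortion gone)

which matches the amplitudes quoted in the thread.]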
From nek5000-users at lists.mcs.anl.gov  Mon Apr 30 06:40:59 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Apr 2018 12:40:59 +0100
Subject: [Nek5000-users] reading rm2 files
Message-ID:

Hi Nek,

What can I do about the following warning?

   Open file: rm2.list
   1Reading: rm2pp0.f00001
   Reading checkpoint data
   WARNING: restart file has a NSPCAL > LDIMT
   read only part of the fld-data!
   WARNING: NPSCAL read from restart file differs from currently used NPSCAL!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov  Mon Apr 30 07:00:54 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Apr 2018 12:00:54 +0000
Subject: [Nek5000-users] reading rm2 files
In-Reply-To:
References:
Message-ID:

It looks like you need to change the parameter ldimt in SIZE and recompile
using makenek. Increase ldimt from 2 to 5, say...

________________________________
From: Nek5000-users on behalf of nek5000-users at lists.mcs.anl.gov
Sent: Monday, April 30, 2018 6:40:59 AM
To: nek5000-users at lists.mcs.anl.gov
Subject: [Nek5000-users] reading rm2 files
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
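[In SIZE that is a single line; the value 5 follows the suggestion above,
and ldimt bounds the number of T-like fields, i.e. temperature plus passive
scalars.]

      parameter (ldimt=5)   ! max number of scalar (T + passive scalar) fields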
From nek5000-users at lists.mcs.anl.gov  Mon Apr 30 11:49:09 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Apr 2018 17:49:09 +0100
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

Hi again,

Thank you for your guidance! Increasing lpmin in SIZE fixed the problem. I
have now set up my case on a local workstation; it runs well and the
preliminary results look fine.

On a different note, when I migrate to the computing cluster, the program
stops during/after "prereading mesh" (see the attached log file). The
compilation is, however, reported successful. Any ideas about this
behaviour?

Cheers,
Vishal

---
Vishal SAINI
Master of Research, University of Cambridge.
Master in Turbulence, EC Lille, ENSIP and ENSMA, France.
Contact: vs434 at cam.ac.uk, vishal.saini.nitj at gmail.com
(+44) 7 459 186 139 (UK)
(+33) 7 58 24 84 02 (France)

On Fri, Apr 27, 2018 at 8:00 AM, wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: log-163511
Type: application/octet-stream
Size: 5562 bytes
Desc: not available
URL:
From nek5000-users at lists.mcs.anl.gov  Mon Apr 30 11:56:55 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Apr 2018 18:56:55 +0200
Subject: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
In-Reply-To:
References:
Message-ID:

Hmm ... this looks odd. Can you please send me your case? You'll find my
email address on the Nek5000 webpage.

Cheers,
Steffan

-----Original message-----
> From: nek5000-users at lists.mcs.anl.gov
> Sent: Monday 30th April 2018 18:53
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Fwd: 3D Taylor-Green vortex - error
> [...]
From nek5000-users at lists.mcs.anl.gov  Mon Apr 30 12:45:39 2018
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Apr 2018 18:45:39 +0100
Subject: [Nek5000-users] reading rm2 files
Message-ID:

Hi,

I don't think increasing ldimt can solve the problem; it is currently set
to 3.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: