Hi,

I've tried to use the shared version of mpich2, which I installed separately (due to the above problem), with PETSc. The command is

./config/configure.py --with-fc=/lsftmp/g0306332/inter/fc/bin/ifort --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/ --with-mpi-dir=/lsftmp/g0306332/mpich2-l32 --with-x=0 --with-shared
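
As an aside, I believe a quick way to confirm that this separately installed mpich2 really provides shared libraries (which I assume --with-shared needs) would be something like the following; the library name is just my guess at the usual MPICH2 one:

# list any shared objects in the MPICH2 lib directory, e.g. libmpich.so*
ls -l /lsftmp/g0306332/mpich2-l32/lib/*.so*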

During the test, this error message was shown:

<div>gcc -c -fPIC -Wall -Wwrite-strings -g3 -I/nas/lsftmp/g0306332/petsc-2.3.2-p8 -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/bmake/linux-mpich2 -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include -I/lsftmp/g0306332/mpich2-l32/include -D__SDIR__=&quot;src/snes/examples/tutorials/&quot; 
ex19.c<br>gcc -fPIC -Wall -Wwrite-strings -g3&nbsp; -o ex19&nbsp; ex19.o -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2 -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2 -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc&nbsp;&nbsp; -Wl,-rpath,/lsftmp/g0306332/mpich2-l32/lib -L/lsftmp/g0306332/mpich2-l32/lib -lmpich -lnsl -laio -lrt -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32 -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_def -lguide -lvml -lpthread -lm -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -Wl,-rpath,&quot;/usr/lib/gcc-lib/i386-pc-linux/3.2.3&quot; -Wl,-rpath,&quot;/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..&quot; -Wl,-rpath,/lsftmp/g0306332/inter/fc/lib -L/lsftmp/g0306332/inter/fc/lib -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/ -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/ -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -lifport -lifcore -limf -lm -lipgo -lirc -lgcc_s -lirc_s -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -lm&nbsp; -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -ldl 
<br>/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2/libpetsc.so: undefined reference to `mpi_conversion_fn_null_&#39;<br>collect2: ld returned 1 exit status<br>&nbsp;</div>
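
I am guessing the missing symbol should come from the mpich2 Fortran bindings, so perhaps a check like the one below would show which (if any) of the mpich2 shared libraries defines it. This is only a sketch; I am not sure which library is actually supposed to provide the symbol:

# search the dynamic symbol tables of the MPICH2 shared libraries
# for the unresolved Fortran symbol
nm -D /lsftmp/g0306332/mpich2-l32/lib/lib*.so | grep -i conversion_fn_null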

Then I tried my own Fortran example.

I realised that if I compile with ifort (using a command similar to "make ex1f"), there is no problem, but the result shows that the 4 processors are each running an *individual* copy of the code rather than one parallel job.

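Could the 4 individual runs simply mean that I launched the job with a launcher that does not match the mpich2 the executable was linked against? If so, I suppose the run should look roughly like the lines below (the mpdboot step is only my guess at what this version of mpich2 needs before mpiexec will work):

# start the MPD daemon(s) for this MPICH2 install, if they are not already running
/lsftmp/g0306332/mpich2-l32/bin/mpdboot -n 1
# launch 4 processes of the example with the matching mpiexec
/lsftmp/g0306332/mpich2-l32/bin/mpiexec -n 4 ./ex1f
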
I then tried to use the mpif90 in the mpich2 directory. Compiling was OK, but during linking the errors were:

<div>&nbsp;/lsftmp/g0306332/mpich2-l32/bin/mpif90&nbsp; -fPIC -g&nbsp;&nbsp; -o ex2f ex2f.o&nbsp; -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2 -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2 -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc&nbsp;&nbsp;&nbsp; -Wl,-rpath,/lsftmp/g0306332/mpich2-l32/lib -L/lsftmp/g0306332/mpich2-l32/lib -lmpich -lnsl -laio -lrt -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32 -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_def -lguide -lvml -lpthread -lm -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -Wl,-rpath,&quot;/usr/lib/gcc-lib/i386-pc-linux/3.2.3&quot; -Wl,-rpath,&quot;/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..&quot; -Wl,-rpath,/lsftmp/g0306332/inter/fc/lib -L/lsftmp/g0306332/inter/fc/lib -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/ -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/ -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -lifport -lifcore -limf -lm -lipgo -lirc -lgcc_s -lirc_s -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -lm&nbsp; -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3 -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -ldl

/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__clog10q'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__cexp10q'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__csqrtq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__ccoshq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__ctanhq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__ccosq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__clogq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__csinhq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__ctanq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__cpowq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__exp10q'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__cexpq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__cabsq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__fabsq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__csinq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__ldexpq'
/lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to `__frexpq'

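Could the problem be that this mpif90 wrapper was built around a different Fortran compiler than the ifort used to configure PETSc? If I understand correctly, the wrapper can print the underlying compile/link command it expands to, which should reveal which compiler and runtime libraries it actually uses:

# show the real command that the MPICH2 Fortran wrapper invokes
/lsftmp/g0306332/mpich2-l32/bin/mpif90 -show
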
So what is wrong? I am a novice in MPI, so I hope someone can give some advice. Thank you.

PS: Has the "make clean" on MPICH been updated in BuildSystem/config/packages/MPI.py?

<div><span class="gmail_quote">On 1/13/07, <b class="gmail_sendername">Ben Tay</b> &lt;<a href="mailto:zonexo@gmail.com">zonexo@gmail.com</a>&gt; wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div>Thanks Barry &amp; Aron.</div>
<div>&nbsp;</div>
<div>I&#39;ve tried to install mpich2 on a scratch directory and it finished in a short while.<br><br>&nbsp;</div>
<div><span class="e" id="q_1101c055f9a1d8db_1">
<div><span class="gmail_quote">On 1/13/07, <b class="gmail_sendername">Barry Smith</b> &lt;<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>
&gt; wrote:</span> 
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br>Ben,<br><br>This is partially our fault. We never run &quot;make clean&quot; on MPICH after the<br>install so there are lots of .o and .a files lying around. I&#39;ve updated 
<br>BuildSystem/config/packages/MPI.py to run make clean after the install.<br><br>&nbsp;&nbsp;Barry<br><br>On Sat, 13 Jan 2007, Aron Ahmadia wrote:<br><br>&gt; Hi Ben,<br>&gt;<br>&gt; My PETSc install on an OS X machine requires about 343 MB of space, 
<br>&gt; about 209 MB of which is MPICH.&nbsp;&nbsp;Unfortunately this has the potential<br>&gt; of exceeding 500 MB temporarily I believe as the make process<br>&gt; generates a lot of object files during the software build.<br>&gt; 
<br>&gt; I think what you want to do is compile and install your own copy of<br>&gt; MPICH (using a scratch directory or whatever tools you have at your<br>&gt; disposal), then use the --with-mpi-dir=/location/to/mpich/install 
<br>&gt; argument into configure.<br>&gt;<br>&gt; I&#39;ve never staged a PETSc build on a machine with an extremely limited<br>&gt; quota, the developers might have some suggestions on how to do this.<br>&gt;<br>&gt; ~A<br>
&gt;<br>&gt; On 1/13/07, Ben Tay &lt;<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>&gt; wrote:<br>&gt; &gt; Hi,<br>&gt; &gt;<br>&gt; &gt; I am trying to compile PETSc with mpi using --download-mpich=1 in linux. The 
<br>&gt; &gt; command is<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt; ./config/configure.py<br>&gt; &gt; --with-fc=/lsftmp/g0306332/inter/fc/bin/ifort<br>&gt; &gt; --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/ 
<br>&gt; &gt; --download-mpich=1 --with-x=0 --with-shared<br>&gt; &gt;<br>&gt; &gt; It displays:<br>&gt; &gt;<br>&gt; &gt; =================================================================================<br>&gt; &gt; Running configure on MPICH; this may take several 
<br>&gt; &gt; minutes<br>&gt; &gt; =================================================================================<br>&gt; &gt; =================================================================================<br>&gt; &gt; Running make on MPICH; this may take several 
<br>&gt; &gt; minutes<br>&gt; &gt; =================================================================================<br>&gt; &gt;<br>&gt; &gt; then it says disk quota exceeded. I&#39;ve about 450mb free space, which is all 
<br>&gt; &gt; filled up when the error shows. May I know how much disk space is required?<br>&gt; &gt;<br>&gt; &gt; Also can I compile just mpich on a scratch directory and then moved it to<br>&gt; &gt; the PETSc externalpackages directory? Or do I have to compile everything 
<br>&gt; &gt; (including PETSc) on a scratch directory and moved it my my directory?<br>&gt; &gt;<br>&gt; &gt; thank you.<br>&gt;<br>&gt;<br><br></blockquote></div><br></span></div></blockquote></div><br>