<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Jun 26, 2015 at 10:46 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Barry,<br><br></div>Thank you that kind of did the trick, although I have run into a couple issues:<br><br></div>1) Is it expected for quad precision to have a significantly worse performance than double precision in terms of wall-clock time? What took 60 seconds to solve with double precision now takes approximately 1500 seconds to solve. I am guessing this has a lot to do with the fact that scalars are now 16 bytes large as opposed to 8?<br><br></div>2) The program crashes at DMPlexDistribute() when I used two or more processors. Here's the error log:<br></div></blockquote><div><br></div><div>I am probably not careful to insulate ParMetis from PetscReal. I have to look at it.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Input Error: Incorrect sum of 0.000000 for tpwgts for constraint 0.<br>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------<br>[0]PETSC ERROR: Error in external library<br>[0]PETSC ERROR: Error in METIS_PartGraphKway()<br>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.<br>[0]PETSC ERROR: Petsc Development GIT revision: v3.6-815-g2d9afd9 GIT Date: 2015-06-26 18:48:28 -0500<br>[0]PETSC ERROR: ./main on a arch-linux2-quad-opt named pacotaco-xps by justin Fri Jun 26 22:29:36 2015<br>[0]PETSC ERROR: Configure options --download-f2cblaslapack --download-metis --download-mpich --download-parmetis --download-triangle --with-cc=gcc --with-cmake=cmake --with-cxx=g++ --with-debugging=0 --with-fc=gfortran --with-valgrind=1 PETSC_ARCH=arch-linux2-quad-opt --with-precision=__float128<br>[0]PETSC ERROR: #1 PetscPartitionerPartition_ParMetis() line 1181 in /home/justin/Software/petsc-dev/src/dm/impls/plex/plexpartition.c<br>[0]PETSC ERROR: #2 PetscPartitionerPartition() line 653 in /home/justin/Software/petsc-dev/src/dm/impls/plex/plexpartition.c<br>[0]PETSC ERROR: #3 DMPlexDistribute() line 1505 in /home/justin/Software/petsc-dev/src/dm/impls/plex/plexdistribute.c<br>[0]PETSC ERROR: #4 CreateMesh() line 762 in /home/justin/Dropbox/DMPlex-nonneg/main.c<br>[0]PETSC ERROR: #5 main() line 993 in /home/justin/Dropbox/DMPlex-nonneg/main.c<br>[0]PETSC ERROR: PETSc Option Table entries:<br>[0]PETSC ERROR: -al 1<br>[0]PETSC ERROR: -am 0<br>[0]PETSC ERROR: -at 0.001<br>[0]PETSC ERROR: -bcloc 0,1,0,1,0,0,0,1,0,1,1,1,0,0,0,1,0,1,1,1,0,1,0,1,0,1,0,0,0,1,0,1,1,1,0,1,0.45,0.55,0.45,0.55,0.45,0.55<br>[0]PETSC ERROR: -bcnum 7<br>[0]PETSC ERROR: -bcval 0,0,0,0,0,0,1<br>[0]PETSC ERROR: -binary_read_double<br>[0]PETSC ERROR: -dim 3<br>[0]PETSC ERROR: -dm_refine 1<br>[0]PETSC ERROR: -dt 0.001<br>[0]PETSC ERROR: -edges 3,3<br>[0]PETSC ERROR: -floc 0.25,0.75,0.25,0.75,0.25,0.75<br>[0]PETSC ERROR: -fnum 0<br>[0]PETSC ERROR: -ftime 0,99<br>[0]PETSC ERROR: -fval 1<br>[0]PETSC ERROR: -ksp_max_it 50000<br>[0]PETSC ERROR: -ksp_rtol 1.0e-10<br>[0]PETSC ERROR: -ksp_type cg<br>[0]PETSC ERROR: -log_summary<br>[0]PETSC ERROR: -lower 
0,0<br>[0]PETSC ERROR: -mat_petscspace_order 0<br>[0]PETSC ERROR: -mesh datafiles/cube_with_hole4_mesh.dat<br>[0]PETSC ERROR: -mu 1<br>[0]PETSC ERROR: -nonneg 1<br>[0]PETSC ERROR: -numsteps 0<br>[0]PETSC ERROR: -options_left 0<br>[0]PETSC ERROR: -pc_type jacobi<br>[0]PETSC ERROR: -petscpartitioner_type parmetis<br>[0]PETSC ERROR: -progress 1<br>[0]PETSC ERROR: -simplex 1<br>[0]PETSC ERROR: -solution_petscspace_order 1<br>[0]PETSC ERROR: -tao_fatol 1e-8<br>[0]PETSC ERROR: -tao_frtol 1e-8<br>[0]PETSC ERROR: -tao_max_it 50000<br>[0]PETSC ERROR: -tao_monitor<br>[0]PETSC ERROR: -tao_type blmvm<br>[0]PETSC ERROR: -tao_view<br>[0]PETSC ERROR: -trans datafiles/cube_with_hole4_trans.dat<br>[0]PETSC ERROR: -upper 1,1<br>[0]PETSC ERROR: -vtuname figures/cube_with_hole_4<br>[0]PETSC ERROR: -vtuprint 1<br>[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint@mcs.anl.gov----------<br>application called MPI_Abort(MPI_COMM_WORLD, 76) - process 0<br>[cli_0]: aborting job:<br>application called MPI_Abort(MPI_COMM_WORLD, 76) - process 0<br><br>===================================================================================<br>= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES<br>= PID 12626 RUNNING AT pacotaco-xps<br>= EXIT CODE: 76<br>= CLEANING UP REMAINING PROCESSES<br>= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES<br>===================================================================================<br><div class="gmail_extra"><br></div><div class="gmail_extra">Know what's going on here?<br><br></div><div class="gmail_extra">Thanks,<br></div><div class="gmail_extra">Justin<br></div><div><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 26, 2015 at 11:24 AM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
We have an undocumented option, -binary_read_double, that reads the double precision values from the file and places them into a quad precision array. This is exactly what you need.<br>
<br>
Barry<br>
<br>
Yes, we should fix up our binary viewers to allow reading and writing in any precision, but we need a volunteer to do it.<br>
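For reference, what the option does is conceptually just "read the 8-byte doubles as stored in the file, then widen each one into the 16-byte quad array". Here is a minimal plain-C sketch of that idea; the file name and count are hypothetical, it skips the byte swapping PETSc's big-endian binary format requires on little-endian machines, and it is not the actual PetscBinaryRead() implementation:<br>
<br>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  const size_t n    = 100;                              /* hypothetical number of values stored in the file */
  double      *tmp  = malloc(n * sizeof(double));       /* 8-byte values exactly as they sit in the file    */
  __float128  *vals = malloc(n * sizeof(__float128));   /* 16-byte values the quad-precision build works on */
  FILE        *fp   = fopen("data.bin", "rb");          /* hypothetical file name                           */
  size_t       i;

  if (!fp || !tmp || !vals || fread(tmp, sizeof(double), n, fp) != n) {
    fprintf(stderr, "read failed\n");
    return 1;
  }
  for (i = 0; i < n; ++i) vals[i] = (__float128)tmp[i]; /* widen each double into the quad array */

  fclose(fp);
  free(tmp);
  free(vals);
  return 0;
}

In practice it is just a runtime option (it already shows up in the option table of the error message above); per Barry's description, the existing PetscBinaryRead() calls should not need to change.<br>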
<div><div><br>
<br>
<br>
> On Jun 26, 2015, at 1:45 AM, Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br>
><br>
> Hi all,<br>
><br>
> I need to run simulations that rely on several of my custom binary data files (written as 32-bit ints and 64-bit doubles). These data files were generated from MATLAB. In my PETSc code I invoke PetscBinaryRead(...) on these binary files, which gives me mesh data, auxiliaries, etc.<br>
><br>
> However, when I now configure with quad precision (--with-precision=__float128 and --download-f2cblaslapack), my PetscBinaryRead() calls give me segmentation violation errors. I am guessing this is because the binary files are written in double precision while PETSc now reads them in quad precision, so I read past the end of the files due to the larger strides (the size arithmetic sketched below makes this concrete).<br>
><br>
> So my question is: is there a way to circumvent this issue, that is, to read double-precision binary data into a PETSc program configured with quad precision? Otherwise I would have to regenerate all of my data files, which I would prefer not to do if possible.<br>
><br>
> Thanks,<br>
> Justin<br>
><br>
<br>
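To make the stride mismatch described in the quoted message concrete: a file holding n doubles is 8n bytes, but a quad-precision build that reads n scalars requests 16n bytes, so the last 8n requested bytes lie past the end of the file. A tiny sketch of that arithmetic (the count n is hypothetical):<br>
<br>
#include <stdio.h>

int main(void)
{
  const long n = 1000;                             /* hypothetical number of values stored in the file  */
  long file_bytes = n * (long)sizeof(double);      /* what MATLAB wrote: 8 bytes per value              */
  long read_bytes = n * (long)sizeof(__float128);  /* what a quad-precision read requests: 16 per value */

  printf("file holds %ld bytes, a quad-precision read requests %ld bytes\n", file_bytes, read_bytes);
  /* prints 8000 vs 16000: reading past the end of an 8000-byte file */
  return 0;
}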
</div></div></blockquote></div><br></div></div></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>