<div dir="ltr"><div>Hi,<br><br></div>You're absolutely right. It turns out I was not saving my data properly when running the program in parallel. Thanks for checking!<br></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Regards,<br></div><div><span style="color:rgb(12,52,61)"><font size="4"><i>Adam Denchfield</i></font></span><br></div><b>Peer Career Coach - Career Services</b><br>Illinois Institute of Technology<br><b>Bachelor of Science in Applied Physics (2018)</b><br>Email: <a href="mailto:adenchfi@hawk.iit.edu" target="_blank">adenchfi@hawk.iit.edu</a><br><a href="http://www.linkedin.com/in/adamrobertdenchfield" target="_blank">My LinkedIn</a> <a href="https://www.researchgate.net/profile/Adam_Denchfield" target="_blank">My ResearchGate Profile</a><br></div></div></div>
<br><div class="gmail_quote">On Wed, Jul 12, 2017 at 3:04 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Are you talking about the output below? If so, I suspect that the code is actually generating a square matrix, not a rectangular one. Hypre definitely cannot handle rectangular matrices.<br>
<br>
Barry<br>
<br>
<br>
<br>
PC Object: 2 MPI processes<br>
type: hypre<br>
HYPRE BoomerAMG preconditioning<br>
HYPRE BoomerAMG: Cycle type V<br>
HYPRE BoomerAMG: Maximum number of levels 25<br>
HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1<br>
HYPRE BoomerAMG: Convergence tolerance PER hypre call 0.<br>
HYPRE BoomerAMG: Threshold for strong coupling 0.25<br>
HYPRE BoomerAMG: Interpolation truncation factor 0.<br>
HYPRE BoomerAMG: Interpolation: max elements per row 0<br>
HYPRE BoomerAMG: Number of levels of aggressive coarsening 0<br>
HYPRE BoomerAMG: Number of paths for aggressive coarsening 1<br>
HYPRE BoomerAMG: Maximum row sums 0.9<br>
HYPRE BoomerAMG: Sweeps down 1<br>
HYPRE BoomerAMG: Sweeps up 1<br>
HYPRE BoomerAMG: Sweeps on coarse 1<br>
HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi<br>
HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi<br>
HYPRE BoomerAMG: Relax on coarse Gaussian-elimination<br>
HYPRE BoomerAMG: Relax weight (all) 1.<br>
HYPRE BoomerAMG: Outer relax weight (all) 1.<br>
HYPRE BoomerAMG: Using CF-relaxation<br>
HYPRE BoomerAMG: Not using more complex smoothers.<br>
HYPRE BoomerAMG: Measure type local<br>
HYPRE BoomerAMG: Coarsen type Falgout<br>
HYPRE BoomerAMG: Interpolation type classical<br>
linear system matrix = precond matrix:<br>
Mat Object: 2 MPI processes<br>
type: mpiaij<br>
rows=2112, cols=2112, bs=2<br>
total: nonzeros=29056, allocated nonzeros=29056<br>
total number of mallocs used during MatSetValues calls =0<br>
using I-node (on process 0) routines: found 531 nodes, limit used is 5<br>
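The Mat object above does report rows=2112, cols=2112, i.e. a square operator. As a shape-only illustration (plain NumPy rather than PETSc, with the sizes borrowed from this thread): an algebraic preconditioner such as BoomerAMG needs a square matrix, and a rectangular operator only becomes square once something like the normal-equations operator is formed.

```python
import numpy as np

# Hypothetical sizes taken from the discussion: a 1062 x 2112 rectangular
# operator versus the 2112 x 2112 square one reported by -ksp_view.
rng = np.random.default_rng(0)
A = rng.standard_normal((1062, 2112))  # rectangular: rows != cols

print(A.shape)   # (1062, 2112) -- no AMG-type preconditioner applies directly

# Forming the normal-equations operator A^T A gives a square matrix whose
# dimension matches the column count of A.
AtA = A.T @ A
print(AtA.shape)  # (2112, 2112) -- square, cols x cols
```

So if a 2112x2112 operator shows up where a 1062x2112 one was expected, it is worth checking the dimensions of the matrix actually handed to the solver rather than assuming PETSc converted it.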
<div><div class="h5"><br>
> On Jul 12, 2017, at 2:55 PM, Adam Denchfield <<a href="mailto:adenchfi@hawk.iit.edu">adenchfi@hawk.iit.edu</a>> wrote:<br>
><br>
> Hi,<br>
> Below I've attached the text file with my ksp_view output. It's a time-dependent problem, so I included ten time steps. The information doesn't differ much between the time steps.<br>
><br>
> ______________<br>
><br>
><br>
><br>
><br>
> On Wed, Jul 12, 2017 at 2:48 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> Adam,<br>
><br>
> Please send the output from KSP view. We don't "change" the matrix, but we might have some intermediate matrix of that other size, or it could be a bug in our output.<br>
><br>
> Barry<br>
><br>
> > On Jul 12, 2017, at 2:45 PM, Adam Denchfield <<a href="mailto:adenchfi@hawk.iit.edu">adenchfi@hawk.iit.edu</a>> wrote:<br>
> ><br>
> > Hi,<br>
> ><br>
> > I'm using the FEniCS package, which uses DOLFIN/petsc4py (and thus PETSc). If I'm understanding the libraries right, I'm assembling rectangular matrices and sending them to be solved in a KrylovSolver. However, with -ksp_view, I'm seeing 2112x2112 matrices, not 1062x2112 matrices. Does PETSc do conversions like this automatically during preconditioning? I'm trying to identify whether this is behaviour on PETSc's part.<br>
> ><br>
><br>
><br>
</div></div>> <output_ksp.txt><br>
<br>
</blockquote></div><br></div>