<div dir="ltr"><div dir="ltr"><div dir="ltr">On Wed, May 8, 2019 at 9:00 PM Zhang, Hong <<a href="mailto:hzhang@mcs.anl.gov">hzhang@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">Justin:<br>
</div>
<div>Great, the issue is resolved.</div>
<div>Why does MatSetOption(J,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) not raise an error?</div></div></div></div></blockquote><div><br></div><div>Because it is set to PETSC_FALSE, which turns that error off.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div dir="ltr"><div dir="ltr">
<div>Matt,</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>We usually prevent this with a structured SetValues API. For example, DMDA uses MatSetValuesStencil(), which cannot write outside the stencil you set, and DMPlex uses MatSetValuesClosure(), which is guaranteed to be allocated. We should write one for DMNetwork.</div>
<div>The allocation is just like Plex (I believe), where you allocate closure(star(p)): setting values for a vertex gets the neighboring edges and their vertices, and setting values for an edge gets the covering vertices.</div>
<div>Is that right for DMNetwork?</div>
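<div><br></div>
<div>(For concreteness, a minimal sketch of this kind of guarded insertion; in PETSc the Plex routine is actually DMPlexMatSetClosure(), as clarified just below. The helper name, the element-matrix layout, and passing NULL for the sections are assumptions, not code from this thread.)</div>
<pre>
/* Sketch: insert all Jacobian entries associated with the closure of one mesh point.
   Passing NULL for the sections means "use the DM's default local/global sections". */
PetscErrorCode SetPointJacobian(DM dm, Mat J, PetscInt point, const PetscScalar elemMat[])
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* elemMat is a dense (closure dofs) x (closure dofs) block for 'point'; the routine
     can only touch locations that were preallocated for that closure. */
  ierr = DMPlexMatSetClosure(dm, NULL, NULL, J, point, elemMat, ADD_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
</pre>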
</div>
</div>
</blockquote>
</div>
</blockquote>
<div>Yes, DMNetwork behaves in this fashion. </div>
<div>I cannot find MatSetValuesClosure() in petsc-master. </div></div></div></div></div></blockquote><div><br></div><div>I mean <a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexMatSetClosure.html">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexMatSetClosure.html</a></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div dir="ltr"><div dir="ltr"><div class="gmail_quote">
<div>Can you provide detailed instructions on how to implement MatSetValuesClosure() for DMNetwork?</div></div></div></div></blockquote><div><br></div><div>It will just work as is for edges, but not for vertices, since you want to set the star, not the closure. You would just need to reverse exactly what is in that function.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div dir="ltr"><div dir="ltr"><div class="gmail_quote">
<div>Note, dmnetwork is a subclass of DMPlex.</div>
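<div><br></div>
<div>(A rough sketch of the star-based variant Matt describes above, under stated assumptions: the helper itself is hypothetical, not an existing PETSc function, and the DMNetwork query routines used for sizes/offsets, DMNetworkGetNumVariables() and DMNetworkGetVariableGlobalOffset(), should be checked against the manual pages.)</div>
<pre>
/* Hypothetical helper: set Jacobian values coupling a DMNetwork vertex v to every
   point in its star (v itself plus its incident edges).  Coupling to the vertices at
   the far end of those edges would additionally require walking each edge's cone. */
PetscErrorCode DMNetworkMatSetValuesStar(DM dm, Mat J, PetscInt v,
                                         const PetscScalar vals[], InsertMode mode)
{
  PetscInt      *star = NULL, nstar, i, nrows = 0;
  PetscInt       rows[64];   /* assumes a small star, as is typical for a network vertex */
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* useCone = PETSC_FALSE requests the star of the point rather than its closure */
  ierr = DMPlexGetTransitiveClosure(dm, v, PETSC_FALSE, &nstar, &star);CHKERRQ(ierr);
  for (i = 0; i < nstar; i++) {
    PetscInt p = star[2*i], goff, ndof, d;   /* entries come as (point, orientation) pairs */
    ierr = DMNetworkGetNumVariables(dm, p, &ndof);CHKERRQ(ierr);
    ierr = DMNetworkGetVariableGlobalOffset(dm, p, &goff);CHKERRQ(ierr);
    for (d = 0; d < ndof; d++) rows[nrows++] = goff + d;
  }
  /* vals is assumed to be a dense nrows x nrows block ordered the same way as rows[] */
  ierr = MatSetValues(J, nrows, rows, nrows, rows, vals, mode);CHKERRQ(ierr);
  ierr = DMPlexRestoreTransitiveClosure(dm, v, PETSC_FALSE, &nstar, &star);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
</pre>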
<div><br>
</div>
<div>Hong</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, May 8, 2019 at 4:00 PM Dave May <<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr"><br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, 8 May 2019 at 20:34, Justin Chang via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">So here's the branch/repo to the working example I have:
<div><br>
</div>
<div><a href="https://github.com/jychang48/petsc-dss/tree/single-bus-vertex" target="_blank">https://github.com/jychang48/petsc-dss/tree/single-bus-vertex</a><br>
</div>
<div><br>
</div>
<div>Type 'make' to compile the dss; it should work with the latest petsc-dev.</div>
<div><br>
</div>
<div>To test the performance, I've taken an existing IEEE 13-bus and duplicated it N times to create a long radial-like network. I have three sizes where N = 100, 500, and 1000. Those test files are listed as:</div>
<div><br>
</div>
<div>input/test_100.m</div>
<div>input/test_500.m</div>
<div>input/test_1000.m</div>
<div><br>
</div>
<div>I also created another set of examples where the IEEE 13-bus is fully balanced (but the program will crash at the solve step because I used some unrealistic parameters for the Y-bus matrices and probably have some zeros somewhere). They are listed as:</div>
<div><br>
</div>
<div>input/test2_100.m</div>
<div>input/test2_500.m</div>
<div>input/test2_1000.m</div>
<div><br>
</div>
<div>The dof counts and matrices for the test2_*.m files are slightly larger than those of their respective test_*.m files, but they have bs = 6.</div>
<div><br>
</div>
<div>To run these tests, type the following:</div>
<div><br>
</div>
<div>./dpflow -input input/test_100.m </div>
<div><br>
</div>
<div>I have a timer that shows how long it takes to compute the Jacobian. Attached are the log outputs I have for each of the six cases.</div>
<div><br>
</div>
<div>It turns out that only the first call to SNESComputeJacobian() is slow; all the subsequent calls are as fast as I expect. This makes me think it still has something to do with matrix allocation.</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>I think it is a preallocation issue.</div>
<div>Looking at some of the output files (test_1000.out, test_100.out), under Mat Object I see this in the KSPView:</div>
<div><br>
</div>
<div>
<div> total number of mallocs used during MatSetValues calls =10000</div>
</div>
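<div><br></div>
<div>(That counter can also be checked from code rather than from the log; a small sketch using MatGetInfo(), with J standing in for whatever the Jacobian is called:)</div>
<pre>
MatInfo        info;
PetscErrorCode ierr;

/* A nonzero malloc count means MatSetValues() had to grow the matrix beyond what
   was preallocated, which is exactly the slow path. */
ierr = MatGetInfo(J, MAT_LOCAL, &info);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_SELF, "mallocs during MatSetValues: %g\n", info.mallocs);CHKERRQ(ierr);
</pre>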
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div><br>
</div>
<div>Thanks for the help everyone,</div>
<div><br>
</div>
<div>Justin</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, May 8, 2019 at 12:36 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">On Wed, May 8, 2019 at 2:30 PM Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">Hi everyone,
<div><br>
</div>
Yes, I have these lines in my code:<br>
<br>
ierr = DMCreateMatrix(networkdm,&J);CHKERRQ(ierr);<br>
ierr = MatSetOption(J,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE);CHKERRQ(ierr);<br>
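<br>
(With PETSC_FALSE this option silently allows insertions outside the preallocated pattern, which is why no error is raised. A one-line sketch of the opposite setting, which makes any unpreallocated insertion fail immediately and is a quick way to locate such entries:)<br>
<br>
ierr = MatSetOption(J,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr);<br>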
</div>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Okay, it's not allocation. So maybe Hong is right that it's setting great big element matrices. We will see with the example.</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr"></div>
<div>I tried -info and here's my output:</div>
<div><br>
</div>
<div>[0] PetscInitialize(): PETSc successfully started: number of processors = 1</div>
<div>[0] PetscInitialize(): Running on machine: jchang31606s.domain</div>
<div>[0] PetscCommDuplicate(): Duplicating a communicator 4436504608 140550815662944 max tags = 2147483647</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436504608 140550815662944</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436504608 140550815662944</div>
<div>Base power = 0.166667, numbus = 115000, numgen = 5000, numyl = 75000, numdl = 5000, numlbr = 109999, numtbr = 5000</div>
<div><br>
</div>
<div>**** Power flow dist case ****</div>
<div><br>
</div>
<div>Base power = 0.166667, nbus = 115000, ngen = 5000, nwye = 75000, ndelta = 5000, nbranch = 114999</div>
<div>[0] PetscCommDuplicate(): Duplicating a communicator 4436505120 140550815683104 max tags = 2147483647<br>
</div>
<div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 620000 X 620000; storage space: 0 unneeded,10799928 used</div>
<div>[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0</div>
<div>[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 28</div>
<div>[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 620000) < 0.6. Do not use CompressedRow routines.</div>
<div>[0] MatSeqAIJCheckInode(): Found 205000 nodes of 620000. Limit used: 5. Using Inode routines</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436504608 140550815662944</div>
<div>[0] DMGetDMSNES(): Creating new DMSNES</div>
<div>[0] DMGetDMKSP(): Creating new DMKSP</div>
<div>[0] PetscCommDuplicate(): Using internal PETSc communicator 4436505120 140550815683104</div>
<div> 0 SNES Function norm 1155.45 </div>
</div>
<div><br>
</div>
<div>nothing else -info related shows up as I'm iterating through the vertex loop.</div>
<div><br>
</div>
<div>I'll have a MWE for you guys to play with shortly.</div>
<div><br>
</div>
<div>Thanks,</div>
<div>Justin</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, May 8, 2019 at 12:10 PM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Justin,<br>
<br>
Are you providing matrix entries that directly connect one vertex to another vertex ACROSS an edge? I don't think that is supported by the DMNetwork model. The assumption is that edges are only connected to vertices and vertices are only connected to
neighboring edges.<br>
<br>
Everyone,<br>
<br>
I second Matt's reply. <br>
<br>
How is DMNetwork preallocating for the Jacobian? Does it take into account coupling between neighboring vertices/edges, or does it assume no coupling, or full coupling? If it assumes no coupling and the user has a good amount of coupling, it will
be very slow. <br>
<br>
There would need to be a way for the user to provide the coupling information between neighboring vertices/edges if it assumes no coupling.<br>
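<br>
(A minimal sketch of one way a user can provide that information by hand right now, namely building the Jacobian with an explicit per-row nonzero count instead of taking what DMCreateMatrix() returns; nlocal and nnz[] are assumptions that would have to be filled from the network connectivity, including any vertex-to-neighboring-vertex coupling:)<br>
<pre>
Mat            J;
PetscInt       nlocal = 0;   /* local number of rows, assumed known */
PetscInt      *nnz;          /* nnz[i] = expected nonzeros in local row i */
PetscErrorCode ierr;

ierr = PetscCalloc1(nlocal, &nnz);CHKERRQ(ierr);
/* ... fill nnz[] by walking the DMNetwork vertices/edges ... */
ierr = MatCreate(PETSC_COMM_WORLD, &J);CHKERRQ(ierr);
ierr = MatSetSizes(J, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
ierr = MatSetType(J, MATAIJ);CHKERRQ(ierr);
ierr = MatSeqAIJSetPreallocation(J, 0, nnz);CHKERRQ(ierr);  /* MatMPIAIJSetPreallocation() is the parallel analogue */
ierr = PetscFree(nnz);CHKERRQ(ierr);
/* then hand J to SNESSetJacobian() instead of the DM-created matrix */
</pre>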
<br>
Barry<br>
<br>
<br>
> On May 8, 2019, at 7:44 AM, Matthew Knepley via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> <br>
> On Wed, May 8, 2019 at 4:45 AM Justin Chang via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> Hi guys,<br>
> <br>
> I have a fully working distribution system solver written using DMNetwork. The idea is that each electrical bus can have up to three phase nodes, and each phase node has two unknowns: voltage magnitude and angle. In a completely balanced system, each bus
has three nodes, but in an unbalanced system some of the buses can be either single phase or two-phase.<br>
> <br>
> The working DMNetwork code I developed, loosely based on the SNES network/power.c example, essentially represents each vertex as a bus. The DMNetworkAddNumVariables() function adds either 2, 4, or 6 unknowns to each vertex. If every single bus had the same number
of variables, the mat block size = 2, 4, or 6, and my code is both fast and scalable. However, if the number of unknowns per DMNetwork vertex is not the same across vertices, then my SNESFormJacobian function becomes extremely slow. Specifically, the slow part is the MatSetValues()
calls where the col/row global indices contain an offset that points to a neighboring bus vertex.
<br>
> <br>
> I have never seen MatSetValues() be slow unless it is allocating. Did you confirm that you are not allocating, with -info?<br>
> <br>
> Thanks,<br>
> <br>
> Matt<br>
> <br>
> Why is that? Is it because I no longer have a uniform block structure and lose the speed/optimization benefits of iterating through an AIJ matrix? I see three potential workarounds:<br>
> <br>
> 1) Treat every vertex as a three-phase bus, "zero out" all the unused phase node dofs, and put a 1 on the diagonal. The problem I see with this is that I will have unnecessary degrees of freedom (aka non-zeros in the matrix). From the distribution systems
I've seen, it's possible that anywhere from 1/3 to 1/2 of the buses will be two-phase or less, meaning I may have nearly twice as many dofs as necessary if I wanted to preserve the block size = 6 for the AU mat.<br>
> <br>
> 2) Treat every phase node as a vertex, i.e., solve a single-phase power flow problem. That way I am guaranteed a block size = 2; this is what Domenico's former student did in his thesis work. The problem I see with this is that I have a larger graph, which
can take more time to set up and parallelize.<br>
> <br>
> 3) Create a "fieldsplit" where I essentially have three "blocks" - one for buses with all three phases, another for buses with only two phases, one for single-phase buses. This way each block/fieldsplit will have a consistent block size. I am not sure if
this will solve the MatSetValues() issues, but it's, but can anyone give pointers on how to go about achieving this?<br>
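<br>
(A minimal sketch of one way to set up such a split, using PCFieldSplitSetIS(); the SNES object snes, the split names, and the index sets isThree/isTwo/isOne are assumptions and would have to be built from the DMNetwork dof layout. This organizes the solver blocks; it does not by itself change the MatSetValues() cost.)<br>
<pre>
KSP            ksp;
PC             pc;
PetscErrorCode ierr;

/* Attach three user-defined splits to the SNES's preconditioner, one per bus type. */
ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "threephase", isThree);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "twophase",   isTwo);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "onephase",   isOne);CHKERRQ(ierr);
</pre>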
> <br>
> Thanks,<br>
> Justin<br>
> <br>
> <br>
> -- <br>
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> -- Norbert Wiener<br>
> <br>
> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">
https://www.cse.buffalo.edu/~knepley/</a><br>
<br>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail-m_6141205071388303378gmail-m_-2426182854758167114gmail-m_2015061060332466011gmail-m_-960321644039551053gmail-m_-2358588997161181854gmail-m_77808791709372678gmail-m_3639311099960849162gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail-m_6141205071388303378gmail-m_-2426182854758167114gmail-m_2015061060332466011gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div></div>