<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div>What Ellen wants to do seems to be exactly the same use case as required by dynamic AMR. </div><div dir="auto"><br></div><div>Some thoughts:</div><div dir="auto">* If the target problem is nonlinear, then you will need to evaluate the Jacobian more than once (with the same nonzero pattern) per time step. You would also have to solve a linear problem at each Newton iterate. Collectively, I think both tasks will consume much more time than that required to create a new matrix and determine/set the nonzero pattern (which is only required once per time step). </div><div dir="auto"><br></div><div dir="auto">* If you are using an incompressible SPH method (e.g. you use a kernel with a constant compact support) then you will have code which allows you to efficiently determine which particles interact, e.g. via a background cell structure, thus you have a means to infer the nonzero structure. However, computing the off-diagonal counts can be a pain...</div><div dir="auto"><br></div><div dir="auto">* Going further, provided you have a unique id assigned to each particle, you can use MatPreallocator (<a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocatorPreallocate.html">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocatorPreallocate.html</a>) to easily define the exact nonzero pattern. <br></div></div><div dir="auto"><br></div><div dir="auto"><div dir="auto"><div dir="auto">Given all the above, I don’t see why you shouldn’t try your idea of creating a new matrix at each step and assembling the Jacobian. 
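A minimal sketch of the MatPreallocator approach, rebuilt each time step. The neighbour-list arrays (nlocal, nneigh, neigh) and the global id array gid are hypothetical stand-ins for whatever your SPH background cell structure provides; error checking is omitted for brevity.

```c
/* Sketch: use a MATPREALLOCATOR matrix to record the nonzero pattern,
 * then transfer that pattern to the actual Jacobian. Assumes each rank
 * owns nlocal particles with unique global ids gid[0..nlocal). */
#include <petscmat.h>

Mat BuildJacobianSkeleton(MPI_Comm comm, PetscInt nlocal, PetscInt N,
                          const PetscInt *nneigh, const PetscInt *const *neigh,
                          const PetscInt *gid)
{
  Mat         preallocator, J;
  PetscInt    i, j;
  PetscScalar zero = 0.0;

  MatCreate(comm, &preallocator);
  MatSetType(preallocator, MATPREALLOCATOR);
  MatSetSizes(preallocator, nlocal, nlocal, N, N);
  MatSetUp(preallocator);

  /* Record where nonzeros will appear: each particle couples to itself
     and to its current neighbours. */
  for (i = 0; i < nlocal; i++) {
    MatSetValue(preallocator, gid[i], gid[i], zero, INSERT_VALUES);
    for (j = 0; j < nneigh[i]; j++)
      MatSetValue(preallocator, gid[i], neigh[i][j], zero, INSERT_VALUES);
  }
  MatAssemblyBegin(preallocator, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(preallocator, MAT_FINAL_ASSEMBLY);

  /* Transfer the pattern (and exact preallocation) to the Jacobian. */
  MatCreate(comm, &J);
  MatSetType(J, MATAIJ);
  MatSetSizes(J, nlocal, nlocal, N, N);
  MatPreallocatorPreallocate(preallocator, PETSC_TRUE, J);
  MatDestroy(&preallocator);
  return J;
}
```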
</div><div dir="auto"></div></div><div dir="auto"></div></div><div dir="auto">Why not just try using MatPreallocator and profile the time required to define the nonzero structure?<br></div><div dir="auto"><br></div><div dir="auto">I like Barry’s idea of defining the preconditioner for the Jacobian using an operator defined via kernels with smaller compact support. I’d be interested to know how effective that is as a preconditioner.</div><div dir="auto"><br></div><div>There is potentially a larger issue to consider (if your application runs in parallel). Whilst the number of particles is constant in time, the number of particles per MPI rank will likely change as particles advect (I'm assuming you decomposed the problem using the background search cell grid and do not load balance the particles, which is commonly done in incompressible SPH implementations). As a result, the local size of the Vec object which stores the solution will change between time steps. Vec cannot be re-sized, hence you will not only need to change the nonzero structure of the Jacobian, but you will also need to destroy/create all Vec objects and all objects associated with the nonlinear solve. Given this, I'm not even sure you can use TS for your use case (hopefully a TS expert will comment on this). </div><div><br></div><div>My experience has been that creating new objects (Vec, Mat, KSP, SNES) in PETSc is fast (compared to a linear solve). You might have to give up on using TS, and instead roll your own time integrator. By doing this you will have control of only a SNES object (plus a Mat for J and Vecs for the residual and solution) which you can create and destroy within each time step. 
To use FD coloring you would pass SNESComputeJacobianDefaultColor to SNESSetJacobian(), along with a preallocated matrix (which you can define using MatPreallocator).</div><div><br></div><div><br></div><div dir="auto">Thanks</div><div>Dave</div></div></div></div></div></div></div></div></div></div></div><div><br><div class="gmail_quote"><div dir="ltr">On Wed 16. Oct 2019 at 13:25, Matthew Knepley via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Tue, Oct 15, 2019 at 4:56 PM Smith, Barry F. via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Because of the difficulty of maintaining the nonzero matrix structure for such problems, aren't these problems often done "matrix-free"?<br>
<br>
So you provide a routine that computes the action of the operator but doesn't form the operator explicitly (and you hope that you don't need a preconditioner). There are simple (but less optimal) ways to do this, as well as more sophisticated ones (such as multipole methods). <br>
<br>
If the convergence of the linear solver is too slow (due to lack of preconditioner) you might consider continuing with matrix-free but, at each new Newton solve (or every several Newton solves), constructing a very sparse matrix that captures just the very local coupling in the problem. Once particles have moved around a bit you would throw away the old matrix and construct a new one. For example, the matrix might capture just the interactions between particles that are within some radius of each other. You could use a direct solver or iterative solver to solve this very sparse system.<br></blockquote><div><br></div><div>I tried to do this with Dan Negrut many years ago and had the same problems. That is part of why incompressible SPH never works,</div><div>since you need global modes.</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div></div></div><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Barry<br>
<br>
This is why KSPSetOperators, SNESSetJacobian, and TSSetRHSJacobian/TSSetIJacobian take two Jacobian matrices: the first holds the matrix-free Jacobian and the second holds the approximation used inside the preconditioner.<br>
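A minimal sketch of this two-matrix setup, assuming hypothetical user routines ApplyJacobian (the matrix-free action y = J*x) and FormJacobian (which assembles only the sparse preconditioning matrix P):

```c
/* Sketch: matrix-free Jacobian via a MATSHELL, paired with an assembled
 * short-range matrix P used only to build the preconditioner. */
#include <petscsnes.h>

extern PetscErrorCode ApplyJacobian(Mat J, Vec x, Vec y);            /* hypothetical: y = J x */
extern PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P,
                                   void *ctx);                       /* hypothetical: fills P */

PetscErrorCode SetupSolver(SNES snes, PetscInt nlocal, PetscInt N,
                           Mat P, void *ctx)
{
  Mat J;

  /* Matrix-free Jacobian: only the action of J is implemented. */
  MatCreateShell(PETSC_COMM_WORLD, nlocal, nlocal, N, N, ctx, &J);
  MatShellSetOperation(J, MATOP_MULT, (void (*)(void))ApplyJacobian);

  /* First Mat is applied by the Krylov method; second builds the PC. */
  SNESSetJacobian(snes, J, P, FormJacobian, ctx);
  return 0;
}
```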
<br>
<br>
<br>
> On Oct 15, 2019, at 3:29 PM, Ellen M. Price <<a href="mailto:ellen.price@cfa.harvard.edu" target="_blank">ellen.price@cfa.harvard.edu</a>> wrote:<br>
> <br>
> Thanks for the reply, Barry! Unfortunately, this is a particle code<br>
> (SPH, specifically), so the particle neighbors, which influence the<br>
> properties, change over time; the number of degrees of freedom is constant, as<br>
> is the particle number. Any thoughts, given the new info? Or should I<br>
> stick with explicit integration? I've seen explicit used most commonly,<br>
> but, as I mentioned before, the optimal timestep that gives the error<br>
> bounds I need is just too small for my application, and the only other<br>
> thing I can think to try is to throw a lot more cores at the problem and<br>
> wait.<br>
> <br>
> Ellen<br>
> <br>
> <br>
> On 10/15/19 4:14 PM, Smith, Barry F. wrote:<br>
>> <br>
>>   So you have a fixed "mesh" and fixed number of degrees of freedom for the entire time, but the dependency of the function value at each particular point on the neighboring points changes over time?<br>
>> <br>
>>   For example, if you are doing upwinding and the direction changes: where you used to use values from the right, you now use values from the left?<br>
>> <br>
>> Independent of the coloring, just changing the locations in the matrix where the entries are nonzero is expensive and painful. So what I would do is build the initial Jacobian nonzero structure to contain all possible connections, color that and then use that for the entire computation. At each time step you will be working with some zero entries in the Jacobian but that is ok, it is simpler and ultimately probably faster than trying to keep changing the nonzero structure to "optimize" and only treat truly nonzero values. <br>
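A minimal sketch of this superset-pattern approach, assuming J has already been preallocated with all possible couplings (e.g. via MatPreallocator) and FormFunction is a hypothetical residual routine:

```c
/* Sketch: keep one superset nonzero pattern for the whole run and let
 * SNES fill the values by colored finite differences. With a NULL
 * context, PETSc colors J's pattern internally and reuses the
 * coloring on subsequent Jacobian evaluations. */
#include <petscsnes.h>

extern PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx); /* hypothetical residual */

PetscErrorCode UseColoredFD(SNES snes, Mat J, Vec r, void *ctx)
{
  SNESSetFunction(snes, r, FormFunction, ctx);
  /* J serves as both operator and preconditioning matrix; its fixed
     (superset) pattern determines the coloring. */
  SNESSetJacobian(snes, J, J, SNESComputeJacobianDefaultColor, NULL);
  return 0;
}
```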
>> <br>
>>    For "stencil" type discretizations (finite difference, finite volume) what I describe should be fine. But if your problem is completely different (I can't imagine how) and the Jacobian truly changes dramatically in structure, then what I suggest may not be appropriate; you'll need to tell us your problem in that case so we can make suggestions.<br>
>> <br>
>> Barry<br>
>> <br>
>> <br>
>> <br>
>>> On Oct 15, 2019, at 2:56 PM, Ellen M. Price via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
>>> <br>
>>> Hi PETSc users!<br>
>>> <br>
>>> I have a problem that I am integrating with TS, and I think an implicit<br>
>>> method would let me take larger timesteps. The Jacobian is difficult to<br>
>>> compute, but, more importantly, the nonzero structure is changing with<br>
>>> time, so even if I use coloring and finite differences to compute the<br>
>>> actual values, I will have to update the coloring every time the<br>
>>> Jacobian recomputes.<br>
>>> <br>
>>> Has anyone done this before, and/or is there a correct way to tell TS to<br>
>>> re-compute the coloring of the matrix before SNES actually computes the<br>
>>> entries? Is this even a good idea, or is the coloring so expensive that<br>
>>> this is foolish (in general -- I know the answer depends on the problem,<br>
>>> but there may be a rule of thumb)?<br>
>>> <br>
>>> Thanks,<br>
>>> Ellen Price<br>
>> <br>
<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div></div>