<div dir="ltr">You may want to look at DMNetwork, since you are doing a multi-physics system: <a href="http://www.mcs.anl.gov/papers/P7065-0617.pdf">http://www.mcs.anl.gov/papers/P7065-0617.pdf</a>.<br></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div><div><div><div><div>Regards,<br></div><div><span style="color:rgb(12,52,61)"><font size="4"><i>Adam Denchfield</i></font></span><br></div></div><b>Peer Career Coach - Career Services</b><br></div>Illinois Institute of Technology<br></div><b>Bachelor of Science in Applied Physics (2018)<br></b></div>Email: <a href="mailto:adenchfi@hawk.iit.edu" target="_blank">adenchfi@hawk.iit.edu</a><br></div><a href="http://www.linkedin.com/in/adamrobertdenchfield" target="_blank">My LinkedIn</a> <a href="https://www.researchgate.net/profile/Adam_Denchfield" target="_blank">My ResearchGate Profile</a><br></div></div></div></div></div></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Thu, Jul 12, 2018 at 10:30 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><span class=""><div dir="ltr">On Thu, Jul 12, 2018 at 11:14 AM Ali Reza Khaz'ali <<a href="mailto:arkhazali@cc.iut.ac.ir" target="_blank">arkhazali@cc.iut.ac.ir</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<p>Thanks very much for your answer,</p>
<p>I am experimenting with SNES for now; however, my purpose is to
solve a multi-physics large system with unstructured grids
(Enhanced Oil Recovery simulation in fractured petroleum
reservoirs to be specific, phase equilibrium and three phase flow
equations must be solved simultaneously), where, to the extent of
my knowledge, no existing PETSc vector structure can be used.
Therefore, I started with solving simple ODEs with SNES without
using those structures.</p></div></blockquote></span><div>The intention is that you use DMPlex for this. <br></div><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF">
<p>An interesting fact about my code samples is that, as the number
of MPI workers increases, the number of SNES iterations (and hence
the precision of the solution) decreases. I'm sure there is a flaw
in my code (I do not have much MPI experience), but I can't find
it.<br></p></div></blockquote></span><div>We really do not have the resources to debug raw MPI code. That is why we have the examples.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt </div><div><div class="h5"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF"><p>
</p>
<p>Best wishes,</p>
<p>Ali<br>
</p>
<br>
<div class="m_-8503147034292866429m_-655653169572082924moz-cite-prefix">On 7/12/2018 3:58 PM, Matthew Knepley
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_quote">
<div dir="ltr">On Thu, Jul 12, 2018 at 6:30 AM Ali Reza
Khaz'ali <<a href="mailto:arkhazali@cc.iut.ac.ir" target="_blank">arkhazali@cc.iut.ac.ir</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I have to apologize if I'm asking a noob question. I am
trying to solve <br>
y''-(1-x/5)yy'=x with boundary conditions y(1)=2 and
y(3)=-1 with <br>
finite difference method. The discretized equations are to be
solved by <br>
SNES. I wrote the following code; however, it only works if the
number of <br>
MPI processes is set to one. For more workers, it does not
seem to give <br>
valid results. I'd be grateful if anybody can guide me to
debug the <br>
code, please.<br>
</blockquote>
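As a side note, the serial discretization can be checked independently of any MPI issues. Below is a minimal sketch (my own naming, not taken from the posted code) of the centered-difference residual for this BVP, with interior node i placed at x = 1 + (i+1)h so that the Dirichlet values y(1)=2 and y(3)=-1 sit exactly on the boundaries:

```c
#include <assert.h>
#include <math.h>

/* Centered-difference residual for  y'' - (1 - x/5) y y' = x  on [1,3],
 * with n subintervals, h = 2/n.  y[] holds the n-1 interior unknowns;
 * yL and yR are the Dirichlet boundary values y(1) and y(3).
 * Interior node i (0-based) sits at x = 1 + (i+1)h, so the boundary
 * values line up exactly with x = 1 and x = 3.
 * (Hypothetical serial reference routine, not the posted code.) */
double residual(const double *y, int i, int n, double yL, double yR)
{
    double h  = 2.0 / n;
    double x  = 1.0 + (i + 1) * h;
    double ym = (i == 0)     ? yL : y[i - 1];   /* left neighbour  */
    double yp = (i == n - 2) ? yR : y[i + 1];   /* right neighbour */
    double ypp    = (yp - 2.0 * y[i] + ym) / (h * h);  /* y'' approximation */
    double yprime = (yp - ym) / (2.0 * h);             /* y'  approximation */
    return ypp - (1.0 - x / 5.0) * y[i] * yprime - x;
}
```

Centered differences are exact on quadratics, so feeding in y(x) = x² with the matching boundary values 1 and 9 reproduces the analytic residual 2 - (1 - x/5)·x²·2x - x exactly, which makes a handy unit test before any parallelism is involved.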
<div><br>
</div>
<div>Why not just use DMDA? There are lots of examples (SNES
ex5 for instance) and it</div>
<div>handles the parallelism for you. This problem could be
done in a few hours I think.</div>
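For reference, the DMDA route might look roughly like the sketch below (untested outline; `FormFunctionLocal` is a hypothetical user routine, and exact call sequences vary a little between PETSc versions). The point is that the user callback only ever sees ghosted local data, so all the hand-written MPI_Send/MPI_Recv halo exchange in the posted code disappears:

```c
#include <petscsnes.h>
#include <petscdmda.h>

/* Hypothetical local residual: PETSc fills the ghost cells before the
   call, so x[i-1] and x[i+1] are valid on every rank. */
extern PetscErrorCode FormFunctionLocal(DMDALocalInfo*, PetscScalar*, PetscScalar*, void*);

int main(int argc, char **argv)
{
  SNES           snes;
  DM             da;
  Vec            x;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* 1D grid of nParts-1 = 29 interior unknowns, 1 dof, stencil width 1 */
  ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 29, 1, 1, NULL, &da); CHKERRQ(ierr);
  ierr = DMSetFromOptions(da); CHKERRQ(ierr);
  ierr = DMSetUp(da); CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &x); CHKERRQ(ierr);

  ierr = SNESCreate(PETSC_COMM_WORLD, &snes); CHKERRQ(ierr);
  ierr = SNESSetDM(snes, da); CHKERRQ(ierr);
  /* DMDA handles the ghost exchange; the callback is purely local */
  ierr = DMDASNESSetFunctionLocal(da, INSERT_VALUES,
           (DMDASNESFunction)FormFunctionLocal, NULL); CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes); CHKERRQ(ierr);
  ierr = SNESSolve(snes, NULL, x); CHKERRQ(ierr);

  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = SNESDestroy(&snes); CHKERRQ(ierr);
  ierr = DMDestroy(&da); CHKERRQ(ierr);
  return PetscFinalize();
}
```

SNES ex5 in the PETSc source tree is the canonical worked example of this pattern (in 2D, including an analytic Jacobian).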
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Regards,<br>
<br>
Ali<br>
<br>
<br>
<br>
static char help[] = "Newton's method for a non-linear ODE
system.\n\n";<br>
double h;<br>
int nParts;<br>
<br>
#include <petscsnes.h><br>
<br>
/*<br>
User-defined routines<br>
*/<br>
extern PetscErrorCode FormJacobian(SNES, Vec, Mat, Mat,
void*);<br>
extern PetscErrorCode FormFunction(SNES, Vec, Vec, void*);<br>
<br>
int main(int argc, char **argv)<br>
{<br>
SNES snes; /* nonlinear solver
context */<br>
KSP ksp; /* linear solver context
*/<br>
PC pc; /* preconditioner context
*/<br>
Vec x, r; /* solution, residual
vectors */<br>
Mat J; /* Jacobian matrix */<br>
PetscErrorCode ierr;<br>
PetscMPIInt size;<br>
PetscScalar pfive = 0, *xx;<br>
int rank;<br>
int i, j;<br>
Vec f;<br>
<br>
<br>
nParts = 30;<br>
h = (3.0 - 1.0) / nParts;<br>
<br>
ierr = PetscInitialize(&argc, &argv, (char*)0,
help); if (ierr) <br>
return ierr;<br>
ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);
CHKERRQ(ierr);<br>
ierr = SNESCreate(PETSC_COMM_WORLD, &snes);
CHKERRQ(ierr);<br>
ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE,
nParts-1, &x); <br>
CHKERRQ(ierr);<br>
ierr = VecDuplicate(x, &r); CHKERRQ(ierr);<br>
ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE,
PETSC_DECIDE, <br>
nParts - 1, nParts - 1, 3, NULL, 3, NULL, &J);
CHKERRQ(ierr);<br>
<br>
<br>
<br>
<br>
ierr = SNESSetFunction(snes, r, FormFunction, NULL);
CHKERRQ(ierr);<br>
ierr = SNESSetJacobian(snes, J, J, FormJacobian, NULL);
CHKERRQ(ierr);<br>
<br>
<br>
<br>
ierr = SNESGetKSP(snes, &ksp); CHKERRQ(ierr);<br>
ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);<br>
ierr = KSPSetTolerances(ksp, 1.e-8, 1e-16,
PETSC_DEFAULT, 200); <br>
CHKERRQ(ierr);<br>
ierr = SNESSetTolerances(snes, 1.e-8, 1e-16,
PETSC_DEFAULT, 200, <br>
200); CHKERRQ(ierr);<br>
ierr = SNESSetType(snes, SNESNEWTONLS); CHKERRQ(ierr);<br>
ierr = KSPSetType(ksp, KSPCG); CHKERRQ(ierr);<br>
<br>
ierr = SNESSetFromOptions(snes); CHKERRQ(ierr);<br>
ierr = VecSet(x, pfive); CHKERRQ(ierr);<br>
ierr = SNESSolve(snes, NULL, x); CHKERRQ(ierr);<br>
ierr = SNESGetFunction(snes, &f, 0, 0);
CHKERRQ(ierr);<br>
<br>
<br>
<br>
ierr = VecDestroy(&x); CHKERRQ(ierr); ierr =
VecDestroy(&r); <br>
CHKERRQ(ierr);<br>
ierr = MatDestroy(&J); CHKERRQ(ierr); ierr =
SNESDestroy(&snes); <br>
CHKERRQ(ierr);<br>
ierr = PetscFinalize();<br>
return ierr;<br>
}<br>
<br>
PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void
*ctx)<br>
{<br>
PetscErrorCode ierr;<br>
const PetscScalar *xx;<br>
PetscScalar *ff;<br>
PetscInt vstart, vend;<br>
int i, j, rank, size, xlength;<br>
double xp, xxstart, xxend, x1, x2;<br>
MPI_Status MyStat;<br>
<br>
xxstart = 0;<br>
xxend = 0;<br>
<br>
<br>
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);<br>
MPI_Comm_size(PETSC_COMM_WORLD, &size);<br>
<br>
ierr = VecGetOwnershipRange(x, &vstart, &vend);
CHKERRQ(ierr);<br>
xlength = vend - vstart;<br>
ierr = VecGetArrayRead(x, &xx); CHKERRQ(ierr);<br>
ierr = VecGetArray(f, &ff); CHKERRQ(ierr);<br>
ierr = VecView(x, PETSC_VIEWER_STDOUT_WORLD);<br>
if (size > 1) {<br>
if (rank == 0) {<br>
ierr = MPI_Send(&xx[xlength - 1], 1,
MPI_DOUBLE, 1, 0, <br>
PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxend, 1, MPI_DOUBLE, 1,
1, <br>
PETSC_COMM_WORLD, &MyStat);<br>
}<br>
else if (rank == (size - 1)) {<br>
ierr = MPI_Recv(&xxstart, 1, MPI_DOUBLE,
rank - 1, rank - <br>
1, PETSC_COMM_WORLD, &MyStat);<br>
ierr = MPI_Send(&xx[0], 1, MPI_DOUBLE, rank
- 1, rank, <br>
PETSC_COMM_WORLD);<br>
}<br>
else {<br>
ierr = MPI_Send(&xx[xlength - 1], 1,
MPI_DOUBLE, rank + 1, <br>
rank, PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxend, 1, MPI_DOUBLE, rank
+ 1, rank + 1, <br>
PETSC_COMM_WORLD, &MyStat);<br>
ierr = MPI_Send(&xx[0], 1, MPI_DOUBLE, rank
- 1, rank, <br>
PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxstart, 1, MPI_DOUBLE,
rank - 1, rank - <br>
1, PETSC_COMM_WORLD, &MyStat);<br>
}<br>
}<br>
<br>
<br>
<br>
<br>
for (i = vstart; i < vend; i++) {<br>
xp = 1 + i * h;<br>
j = i - vstart;<br>
<br>
<br>
if (i == vstart) x1 = xxstart;<br>
else x1 = xx[j - 1];<br>
if (i == (vend - 1)) x2 = xxend;<br>
else x2 = xx[j + 1];<br>
<br>
<br>
if (i==0) ff[j] = (x2 - 2 * xx[j] + 2) / (h*h) - (1
- xp / <br>
5)*xx[j] * (x2 - 2) / (2 * h) - xp;<br>
else if (i==(nParts-2)) ff[j]=(-1 - 2 * xx[j] + x1)
/ (h*h) - <br>
(1 - xp / 5)*xx[j] * (-1 - x1) / (2 * h) - xp;<br>
else ff[j] = (x2 - 2 * xx[j] + x1) / (h*h) - (1 -
xp / 5)*xx[j] <br>
* (x2 - x1) / (2 * h) - xp;<br>
}<br>
<br>
/* Restore vectors */<br>
ierr = VecRestoreArrayRead(x, &xx); CHKERRQ(ierr);<br>
ierr = VecRestoreArray(f, &ff); CHKERRQ(ierr);<br>
return 0;<br>
}<br>
<br>
PetscErrorCode FormJacobian(SNES snes, Vec x, Mat jac, Mat
B, void *dummy)<br>
{<br>
const PetscScalar *xx;<br>
PetscErrorCode ierr;<br>
<br>
PetscInt vstart, vend;<br>
double rrow[3];<br>
int col[3];<br>
int i, j, rank, size, xlength;<br>
double xp, xxstart, xxend, x1, x2;<br>
MPI_Status MyStat;<br>
<br>
ierr = MatZeroEntries(jac); CHKERRQ(ierr);<br>
<br>
ierr = VecGetOwnershipRange(x, &vstart, &vend);
CHKERRQ(ierr);<br>
xlength = vend - vstart;<br>
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);<br>
MPI_Comm_size(PETSC_COMM_WORLD, &size);<br>
<br>
xxstart = 0;<br>
xxend = 0;<br>
<br>
/*<br>
Get pointer to vector data<br>
*/<br>
ierr = VecGetArrayRead(x, &xx); CHKERRQ(ierr);<br>
if (size > 1) {<br>
if (rank == 0) {<br>
ierr = MPI_Send(&xx[xlength - 1], 1,
MPI_DOUBLE, 1, 0, <br>
PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxend, 1, MPI_DOUBLE, 1,
1, <br>
PETSC_COMM_WORLD, &MyStat);<br>
xxstart = 1e12;<br>
}<br>
else if (rank == (size - 1)) {<br>
ierr = MPI_Recv(&xxstart, 1, MPI_DOUBLE,
rank - 1, rank - <br>
1, PETSC_COMM_WORLD, &MyStat);<br>
ierr = MPI_Send(&xx[0], 1, MPI_DOUBLE, rank
- 1, rank, <br>
PETSC_COMM_WORLD);<br>
xxend = 1e12;<br>
}<br>
else {<br>
ierr = MPI_Send(&xx[xlength - 1], 1,
MPI_DOUBLE, rank + 1, <br>
rank, PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxend, 1, MPI_DOUBLE, rank
+ 1, rank + 1, <br>
PETSC_COMM_WORLD, &MyStat);<br>
ierr = MPI_Send(&xx[0], 1, MPI_DOUBLE, rank
- 1, rank, <br>
PETSC_COMM_WORLD);<br>
ierr = MPI_Recv(&xxstart, 1, MPI_DOUBLE,
rank - 1, rank - <br>
1, PETSC_COMM_WORLD, &MyStat);<br>
}<br>
}<br>
<br>
<br>
for (i = vstart; i < vend; i++) {<br>
xp = 1 + i * h;<br>
j = i - vstart;<br>
<br>
if (i == vstart) x1 = xxstart;<br>
else x1 = xx[j - 1];<br>
if (i == (vend - 1)) x2 = xxend;<br>
else x2 = xx[j + 1];<br>
<br>
if (i == 0) {<br>
rrow[0] = -2 / (h*h) - (1 - xp / 5)*(x2 - 2) /
(2 * h);<br>
rrow[1] = 1 / (h*h) - (1 + xp / 5)*xx[j] / (2 *
h);<br>
col[0] = i;<br>
col[1] = i + 1;<br>
ierr = MatSetValues(jac, 1, &i, 2, col,
rrow, <br>
INSERT_VALUES); CHKERRQ(ierr);<br>
}<br>
else if (i == (nParts-2)) {<br>
rrow[0] = 1 / (h*h) + (1 + xp / 5)*xx[j] / (2 *
h);<br>
rrow[1] = -2 / (h*h) - (1 - xp / 5)*(-1 - x1) /
(2 * h);<br>
col[0] = i - 1;<br>
col[1] = i;<br>
ierr = MatSetValues(jac, 1, &i, 2, col,
rrow, <br>
INSERT_VALUES); CHKERRQ(ierr);<br>
}<br>
else {<br>
rrow[0] = 1 / (h*h) + (1 + xp / 5)*xx[j] / (2 *
h);<br>
rrow[1] = -2 / (h*h) - (1 - xp / 5)*(x2 - x1) /
(2 * h);<br>
rrow[2] = 1 / (h*h) - (1 + xp / 5)*xx[j] / (2 *
h);<br>
col[0] = i-1;<br>
col[1] = i;<br>
col[2] = i + 1;<br>
ierr = MatSetValues(jac, 1, &i, 3, col,
rrow, <br>
INSERT_VALUES); CHKERRQ(ierr);<br>
}<br>
}<br>
<br>
<br>
<br>
/*<br>
Restore vector<br>
*/<br>
ierr = VecRestoreArrayRead(x, &xx); CHKERRQ(ierr);<br>
<br>
/*<br>
Assemble matrix<br>
*/<br>
B = jac;<br>
ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
CHKERRQ(ierr);<br>
ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
CHKERRQ(ierr);<br>
ierr = MatAssemblyBegin(jac, MAT_FINAL_ASSEMBLY);
CHKERRQ(ierr);<br>
ierr = MatAssemblyEnd(jac, MAT_FINAL_ASSEMBLY);
CHKERRQ(ierr);<br>
<br>
return 0;<br>
}<br>
<br>
</blockquote>
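It may also help to write out what the exact tridiagonal Jacobian of the residual in FormFunction above should be; the entries follow directly from differentiating F_i with respect to its three neighbouring unknowns:

```latex
% Residual at interior node i, as coded in FormFunction:
F_i = \frac{y_{i+1} - 2y_i + y_{i-1}}{h^2}
    - \Bigl(1 - \frac{x_i}{5}\Bigr)\, y_i\, \frac{y_{i+1} - y_{i-1}}{2h} - x_i

% Its exact partial derivatives (tridiagonal Jacobian):
\frac{\partial F_i}{\partial y_{i-1}} = \frac{1}{h^2}
    + \Bigl(1 - \frac{x_i}{5}\Bigr) \frac{y_i}{2h}, \qquad
\frac{\partial F_i}{\partial y_i} = -\frac{2}{h^2}
    - \Bigl(1 - \frac{x_i}{5}\Bigr) \frac{y_{i+1} - y_{i-1}}{2h}, \qquad
\frac{\partial F_i}{\partial y_{i+1}} = \frac{1}{h^2}
    - \Bigl(1 - \frac{x_i}{5}\Bigr) \frac{y_i}{2h}
```

Note that the off-diagonal terms carry the same factor (1 - x_i/5) as the residual, whereas FormJacobian above writes (1 + xp / 5) in its off-diagonal entries; that sign is worth double-checking (a quick way is `-snes_test_jacobian` or comparing against a finite-difference Jacobian).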
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="m_-8503147034292866429m_-655653169572082924gmail_signature" data-smartmail="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before
they begin their experiments is infinitely more
interesting than any results to which their
experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.caam.rice.edu/%7Emk51/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote></div></div></div><div><div class="h5"><br clear="all"><div><br></div>-- <br><div dir="ltr" class="m_-8503147034292866429gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br></div>