<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Thank you for this explanation, it makes sense. And after I updated
my code, the external CFD code runs without problems in parallel.<br>
<br>
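For reference, the multiply of the shell matrix now essentially boils down
to a single collective call (heavily simplified sketch; cfd_apply_jacobian
is just a placeholder for the actual interface to the external code):<br>
<pre>
/* Shell matrix multiply: every rank enters this routine once per MatMult,
   and the external CFD routine is called collectively on the communicator. */
static PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
{
  PetscErrorCode     ierr;
  const PetscScalar *xa;
  PetscScalar       *ya;
  PetscInt           nlocal;

  PetscFunctionBegin;
  ierr = VecGetLocalSize(x, &nlocal);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x, &xa);CHKERRQ(ierr);
  ierr = VecGetArray(y, &ya);CHKERRQ(ierr);
  cfd_apply_jacobian(PETSC_COMM_WORLD, nlocal, xa, ya); /* collective call into the CFD code */
  ierr = VecRestoreArray(y, &ya);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(x, &xa);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* setup: nlocal is the number of rows owned by this rank */
ierr = MatCreateShell(PETSC_COMM_WORLD, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, NULL, &A);CHKERRQ(ierr);
ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyShellMult);CHKERRQ(ierr);
</pre>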
However, now I'm back to the problem with the creation of the
vectors/domains. By using the application ordering, I can map the
points from PETSc to the corresponding points in my external code, at
least as long as both use the same subdomain sizes. But sometimes the
sizes differ, and then the KSP breaks down, because the solution
vector it receives has a different size than it expects.<br>
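<br>
The way I use the application ordering is roughly this (simplified; cfdIDs
is a placeholder for the array of global point IDs the external code owns
on this rank):<br>
<pre>
AO        ao;
PetscInt  nlocal;   /* number of points owned by this rank in the CFD decomposition */
PetscInt *idx;      /* copy of cfdIDs; gets permuted in place */

/* passing NULL for the PETSc ordering means the natural contiguous numbering */
ierr = AOCreateBasic(PETSC_COMM_WORLD, nlocal, cfdIDs, NULL, &ao);CHKERRQ(ierr);
/* translate the CFD (application) indices into PETSc's numbering */
ierr = AOApplicationToPetsc(ao, nlocal, idx);CHKERRQ(ierr);
</pre>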
<br>
An example:<br>
I have an unstructured grid with 800,000 data points.<br>
<br>
If I decompose this to run on 2 processes, PETSc assigns exactly
400,000 points to each process. However, the external code might
assign 400,100 points to the first process and 399,900 to the second.
As a result, PETSc expects a solution vector of size 400,000 on each
process, but receives one of size 400,100 and one of size 399,900, leading to a breakdown.
<br>
<br>
I suppose I could use VecScatterCreateToAll to collect all the values
from the solution vector of my external code on every process, and
then build from those a temporary vector that contains only the
expected 400,000 values per process to hand over to the KSP (a rough
sketch of what I mean is below). But this would create a lot of
communication between the processes and seems quite clunky.<br>
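<br>
Roughly, this is what I have in mind (cfdVec is the external code's
distributed solution, petscVec the vector with the layout the KSP expects;
I assume the gathered copy is already indexed in PETSc's global numbering,
i.e. after translating with the AO):<br>
<pre>
Vec                seq;      /* full copy of cfdVec on every process */
VecScatter         scatter;
PetscInt           rstart, rend, i;
const PetscScalar *full;
PetscScalar       *local;

ierr = VecScatterCreateToAll(cfdVec, &scatter, &seq);CHKERRQ(ierr);
ierr = VecScatterBegin(scatter, cfdVec, seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
ierr = VecScatterEnd(scatter, cfdVec, seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

/* copy only the entries that fall into PETSc's own row range */
ierr = VecGetOwnershipRange(petscVec, &rstart, &rend);CHKERRQ(ierr);
ierr = VecGetArrayRead(seq, &full);CHKERRQ(ierr);
ierr = VecGetArray(petscVec, &local);CHKERRQ(ierr);
for (i = rstart; i < rend; i++) local[i - rstart] = full[i];
ierr = VecRestoreArray(petscVec, &local);CHKERRQ(ierr);
ierr = VecRestoreArrayRead(seq, &full);CHKERRQ(ierr);
ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
ierr = VecDestroy(&seq);CHKERRQ(ierr);
</pre>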
<br>
Is there a more elegant way? Is it maybe possible to manually assign
the size of the PETSc subdomains?<br>
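Something along these lines is what I would hope for (sketch only;
nLocalCFD is a placeholder for the number of points the external code owns
on this rank):<br>
<pre>
Vec x;
Mat P;

/* give PETSc the local sizes explicitly, so that its layout
   matches the external code's decomposition */
ierr = VecCreateMPI(PETSC_COMM_WORLD, nLocalCFD, PETSC_DETERMINE, &x);CHKERRQ(ierr);

ierr = MatCreate(PETSC_COMM_WORLD, &P);CHKERRQ(ierr);
ierr = MatSetSizes(P, nLocalCFD, nLocalCFD, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
ierr = MatSetFromOptions(P);CHKERRQ(ierr);
ierr = MatSetUp(P);CHKERRQ(ierr);
</pre>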
<br>
Kind regards,<br>
Michael Werner<br>
<br>
<div class="moz-cite-prefix">Am 17.10.2017 um 12:31 schrieb Matthew
Knepley:<br>
</div>
<blockquote type="cite"
cite="mid:CAMYG4GkKYzpMG9esGgVvu+6UoUFLhKkvH=EOEu0DpEKDOVBMoQ@mail.gmail.com">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">On Tue, Oct 17, 2017 at 6:08 AM,
Michael Werner <span dir="ltr"><<a
href="mailto:michael.werner@dlr.de" target="_blank"
moz-do-not-send="true">michael.werner@dlr.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Because usally this
code is called just once. It runs one multiple
processes, but there it's still always processing the
whole domain. I can't run it on only one subdomain. As I
understand it now, when I call it from PETSc, this call
is issued once per process, so I would end up running
several contesting instances of the computation on the
whole domain.<br>
<br>
But maybe that's only because I haven't completly
understood how MPI really works in such cases...<br>
</div>
</blockquote>
<div><br>
</div>
<div>No, it makes one call in which all processes
participate. So you would call your external CFD routine
once from all processes, passing in the MPI communicator.</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Kind regards,<br>
Michael<br>
<br>
<div class="m_7608694679485040253moz-cite-prefix">Am
17.10.2017 um 11:50 schrieb Matthew Knepley:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">On Tue, Oct 17, 2017 at
5:46 AM, Michael Werner <span dir="ltr"><<a
href="mailto:michael.werner@dlr.de"
target="_blank" moz-do-not-send="true">michael.werner@dlr.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> I'm not sure what you
mean with this question?<br>
The external CFD code, if that was what you
referred to, can be run in parallel.<br>
</div>
</blockquote>
<div><br>
</div>
<div>Then why is it a problem that "in a parallel case, this call obviously gets called once per process"?</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="m_7608694679485040253gmail-m_-6501336469052660476moz-cite-prefix">Am
17.10.2017 um 11:11 schrieb Matthew
Knepley:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">On Tue, Oct
17, 2017 at 4:21 AM, Michael Werner
<span dir="ltr"><<a
href="mailto:michael.werner@dlr.de"
target="_blank"
moz-do-not-send="true">michael.werner@dlr.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> That's
something I'm still struggling
with. In the serial case, I can
simply extract the values from
the original grid, and since the
ordering of the Jacobian is the
same there is no problem. In the
parallel case this is still a
more or less open question.
That's why I thought about
reordering the Jacobian. As long
as the position of the
individual IDs is the same for
both, I don't have to care about
their absolute position.<br>
<br>
I also wanted to thank you for
your previous answer, it seems
that the application ordering
might be what I'm looking for.
However, in the meantime I
stumbled about another problem,
that I have to solve first. My
new problem is, that I call the
external code within the shell
matrix' multiply call. But in a
parallel case, this call
obviously gets called once per
process. So right now I'm trying
to circumvent this, so it might
take a while before I'm able to
come back to the original
problem...<br>
</div>
</blockquote>
<div><br>
</div>
<div>I am not understanding. Is your
original code parallel?</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> Kind
regards,<br>
Michael<br>
<br>
<div
class="m_7608694679485040253gmail-m_-6501336469052660476m_-5791166958946276202moz-cite-prefix">Am
16.10.2017 um 17:25 schrieb
Praveen C:<br>
</div>
<blockquote type="cite">
<div dir="ltr">I am interested
to learn more about how this
works. How are the vectors
created if the ids are not
contiguous in a partition ?
<div><br>
</div>
<div>Thanks</div>
<div>praveen</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Mon, Oct 16, 2017 at 2:02
PM, Stefano Zampini <span
dir="ltr"><<a
href="mailto:stefano.zampini@gmail.com"
target="_blank"
moz-do-not-send="true">stefano.zampini@gmail.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0px 0px
0px
0.8ex;border-left:1px
solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr"><br>
<div
class="gmail_extra"><br>
<div
class="gmail_quote">
<div>
<div
class="m_7608694679485040253gmail-m_-6501336469052660476m_-5791166958946276202h5">2017-10-16
10:26
GMT+03:00
Michael Werner
<span
dir="ltr"><<a
href="mailto:michael.werner@dlr.de" target="_blank"
moz-do-not-send="true">michael.werner@dlr.de</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello,<br>
<br>
I'm having trouble with parallelizing a matrix-free code with PETSc. In this code, I use an external CFD code to provide the matrix-vector product for an iterative solver in PETSc. To increase the convergence rate, I'm using an explicitly stored Jacobian matrix to precondition the solver. This works fine for serial runs. However, when I try to use multiple processes, I face the problem that PETSc decomposes the preconditioner matrix, and probably also the shell matrix, in a different way than the external CFD code decomposes the grid.<br>
<br>
The Jacobian matrix is built in such a way that its rows and columns correspond to the global IDs of the individual points in my CFD mesh.<br>
<br>
The CFD code decomposes the domain based on the proximity of points to each other, so that the resulting subgrids are spatially coherent. However, since it's an unstructured grid, those subgrids are not necessarily made up of points with successive global IDs. This is a problem, since PETSc seems to partition the matrix into contiguous slices of rows.<br>
<br>
I'm not sure what the best approach to this problem might be. Is it maybe possible to tell PETSc exactly which rows/columns it should assign to the individual processes?<br>
<br>
</blockquote>
<div><br>
</div>
</div>
</div>
<div>If you are explicitly setting the values in your Jacobians via MatSetValues(), you can create an ISLocalToGlobalMapping</div>
<div><br>
</div>
<div><a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISLocalToGlobalMappingCreate.html">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISLocalToGlobalMappingCreate.html</a></div>
<div><br>
</div>
<div>that maps the numbering you use for the Jacobians to their counterpart in the CFD ordering, then call MatSetLocalToGlobalMapping and then use MatSetValuesLocal with the same arguments you are passing to MatSetValues now.</div>
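<div>For instance, something along these lines (a sketch only; nLocal and cfdGlobalIDs stand for the local size and the CFD code's global IDs on this rank):</div>
<pre>
ISLocalToGlobalMapping l2g;

ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nLocal, cfdGlobalIDs,
                                    PETSC_COPY_VALUES, &l2g);CHKERRQ(ierr);
ierr = MatSetLocalToGlobalMapping(J, l2g, l2g);CHKERRQ(ierr);

/* then, instead of MatSetValues(J, m, rows, n, cols, vals, INSERT_VALUES): */
ierr = MatSetValuesLocal(J, m, rows, n, cols, vals, INSERT_VALUES);CHKERRQ(ierr);

ierr = ISLocalToGlobalMappingDestroy(&l2g);CHKERRQ(ierr);
</pre>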
<div><br>
</div>
<div>Otherwise, you can play with the application ordering: <a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/index.html">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/index.html</a></div>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div class="gmail_signature">Stefano</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
<pre class="m_7608694679485040253gmail-m_-6501336469052660476m_-5791166958946276202moz-signature" cols="72">--
______________________________<wbr>______________________
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
Institut für Aerodynamik und Strömungstechnik | <a href="https://maps.google.com/?q=Bunsenstr.+10+%7C+37073+G%C3%B6ttingen&entry=gmail&source=g" target="_blank" moz-do-not-send="true">Bunsenstr. 10 | 37073 Göttingen</a>
Michael Werner
Telefon 0551 709-2627 | Telefax 0551 709-2811 | <a class="m_7608694679485040253gmail-m_-6501336469052660476m_-5791166958946276202moz-txt-link-abbreviated" href="mailto:Michael.Werner@dlr.de" target="_blank" moz-do-not-send="true">Michael.Werner@dlr.de</a>
DLR.de
</pre>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div class="gmail_signature" data-smartmail="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before
they begin their experiments is infinitely more
interesting than any results to which their
experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.caam.rice.edu/%7Emk51/"
target="_blank" moz-do-not-send="true">https://www.cse.buffalo.edu/~knepley/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<pre class="moz-signature" cols="72">
</pre>
</body>
</html>