<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    That's something I'm still struggling with. In the serial case, I
    can simply extract the values from the original grid, and since the
    ordering of the Jacobian is the same, there is no problem. In the
    parallel case this is still more or less an open question. That's
    why I thought about reordering the Jacobian: as long as the
    position of the individual IDs is the same for both, I don't have
    to care about their absolute position.<br class="">
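    For reference, a minimal sketch of how the contiguous row range
    that PETSc assigns to each process can be queried (the matrix name
    J and the helper function are only placeholders):<br>
    <pre>
#include &lt;petscmat.h&gt;

/* Minimal sketch: PETSc assigns each process a contiguous block of rows of an
 * assembled matrix. J is a placeholder for the Jacobian/preconditioner matrix. */
static PetscErrorCode PrintOwnedRows(Mat J)
{
  PetscInt       rstart, rend;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatGetOwnershipRange(J, &rstart, &rend);CHKERRQ(ierr);
  /* This process owns rows rstart..rend-1; the CFD points it holds generally
     do not carry exactly these global IDs, which is the ordering mismatch.  */
  ierr = PetscPrintf(PETSC_COMM_SELF, "owned rows: %d to %d\n",
                     (int)rstart, (int)(rend - 1));CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
    </pre>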
    <br class="">
    I also wanted to thank you for your previous answer; it seems that
    the application ordering might be what I'm looking for. However, in
    the meantime I stumbled upon another problem that I have to solve
    first. My new problem is that I call the external code within the
    shell matrix's multiply callback. In a parallel run, however, this
    callback is invoked once on every process. Right now I'm trying to
    work around this, so it might take a while before I'm able to come
    back to the original problem...<br class="">
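    <br class="">
    For illustration, one possible workaround (only a rough sketch,
    assuming the external code is serial and exposes something like the
    hypothetical routine external_cfd_matvec below) would be to gather
    the distributed input vector on rank 0 inside the shell multiply,
    call the external code only there, and scatter the result back:<br>
    <pre>
#include &lt;petscmat.h&gt;

/* Hypothetical serial interface of the external CFD code (assumption). */
extern void external_cfd_matvec(const double *x, double *y, int n);

/* Rough sketch of a MATSHELL multiply: gather x on rank 0, let only rank 0
 * drive the external code, then scatter the result back into y.
 * Assumes a real, double-precision PETSc build (PetscScalar == double). */
static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)
{
  Vec                xseq, yseq;
  VecScatter         scat;
  const PetscScalar *xa;
  PetscScalar       *ya;
  PetscInt           N;
  PetscMPIInt        rank;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)A), &rank);CHKERRQ(ierr);
  ierr = VecGetSize(x, &N);CHKERRQ(ierr);

  /* Gather the whole distributed input vector onto rank 0 */
  ierr = VecScatterCreateToZero(x, &scat, &xseq);CHKERRQ(ierr);
  ierr = VecScatterBegin(scat, x, xseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scat, x, xseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = VecDuplicate(xseq, &yseq);CHKERRQ(ierr);
  if (!rank) { /* only rank 0 calls the external code */
    ierr = VecGetArrayRead(xseq, &xa);CHKERRQ(ierr);
    ierr = VecGetArray(yseq, &ya);CHKERRQ(ierr);
    external_cfd_matvec(xa, ya, (int)N);
    ierr = VecRestoreArray(yseq, &ya);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(xseq, &xa);CHKERRQ(ierr);
  }

  /* Reverse scatter: distribute rank 0's result into the parallel output y */
  ierr = VecScatterBegin(scat, yseq, y, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecScatterEnd(scat, yseq, y, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);

  ierr = VecDestroy(&xseq);CHKERRQ(ierr);
  ierr = VecDestroy(&yseq);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scat);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
    </pre>
    The obvious drawback is that the matrix-vector product is
    serialized on rank 0; the callback would be registered with
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void))ShellMult).<br class="">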
    <br class="">
    Kind regards,<br class="">
    Michael<br>
    <br>
    <div class="moz-cite-prefix">Am 16.10.2017 um 17:25 schrieb Praveen
      C:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAEvUdMLa1o0a+A2srd96-V6Bu+_7M+pxH-a20QuxbQoh2wXKiA@mail.gmail.com">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <div dir="ltr">I am interested to learn more about how this works.
        How are the vectors created if the ids are not contiguous in a
        partition ?
        <div><br>
        </div>
        <div>Thanks</div>
        <div>praveen</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Mon, Oct 16, 2017 at 2:02 PM,
          Stefano Zampini <span dir="ltr"><<a
              href="mailto:stefano.zampini@gmail.com" target="_blank"
              moz-do-not-send="true">stefano.zampini@gmail.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr"><br>
              <div class="gmail_extra"><br>
                <div class="gmail_quote">
                  <div>
                    <div class="h5">2017-10-16 10:26 GMT+03:00 Michael
                      Werner <span dir="ltr"><<a
                          href="mailto:michael.werner@dlr.de"
                          target="_blank" moz-do-not-send="true">michael.werner@dlr.de</a>></span>:<br>
                      <blockquote class="gmail_quote" style="margin:0px
                        0px 0px 0.8ex;border-left:1px solid
                        rgb(204,204,204);padding-left:1ex">Hello,<br>
                        <br>
                        I'm having trouble parallelizing a matrix-free
                        code with PETSc. In this code, I use an external
                        CFD code to provide the matrix-vector product
                        for an iterative solver in PETSc. To increase
                        the convergence rate, I'm using an explicitly
                        stored Jacobian matrix to precondition the
                        solver. This works fine for serial runs.
                        However, when I try to use multiple processes, I
                        face the problem that PETSc decomposes the
                        preconditioner matrix, and probably also the
                        shell matrix, in a different way than the
                        external CFD code decomposes the grid.<br>
                        <br>
                        The Jacobian matrix is built in such a way that
                        its rows and columns correspond to the global
                        IDs of the individual points in my CFD mesh.<br>
                        <br>
                        The CFD code decomposes the domain based on the
                        proximity of points to each other, so that the
                        resulting subgrids are spatially connected.
                        However, since it's an unstructured grid, those
                        subgrids are not necessarily made up of points
                        with successive global IDs. This is a problem,
                        since PETSc seems to partition the matrix into
                        contiguous slices of rows.<br>
                        <br>
                        I'm not sure what the best approach to this
                        problem might be. Is it perhaps possible to
                        tell PETSc exactly which rows/columns it should
                        assign to the individual processes?<br>
                        <br>
                      </blockquote>
                      <div><br>
                      </div>
                    </div>
                  </div>
                  <div>If you are explicitly setting the values in your
                    Jacobians via MatSetValues(), you can create an
                    ISLocalToGlobalMapping </div>
                  <div><br>
                  </div>
                  <div><a
href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISLocalToGlobalMappingCreate.html"
                      target="_blank" moz-do-not-send="true">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISLocalToGlobalMappingCreate.html</a></div>
                  <div><br>
                  </div>
                  <div>that maps the numbering you use for the Jacobians
                    to its counterpart in the CFD ordering, then call
                    MatSetLocalToGlobalMapping, and finally use
                    MatSetValuesLocal with the same arguments you are
                    currently passing to MatSetValues.</div>
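                  <div>A minimal sketch of that workflow (nlocal and
                    cfd_to_petsc[], the PETSc global index of each
                    locally held CFD point, are placeholder names):</div>
                  <pre>
#include &lt;petscmat.h&gt;

/* Sketch only: cfd_to_petsc[i] is the global (PETSc) index that local point i
 * of the CFD partition corresponds to; both names are placeholders. */
static PetscErrorCode SetupJacobianMapping(Mat J, PetscInt nlocal,
                                           const PetscInt cfd_to_petsc[])
{
  ISLocalToGlobalMapping l2g;
  PetscErrorCode         ierr;

  PetscFunctionBeginUser;
  ierr = ISLocalToGlobalMappingCreate(PetscObjectComm((PetscObject)J), 1, nlocal,
                                      cfd_to_petsc, PETSC_COPY_VALUES, &l2g);CHKERRQ(ierr);
  ierr = MatSetLocalToGlobalMapping(J, l2g, l2g);CHKERRQ(ierr);
  ierr = ISLocalToGlobalMappingDestroy(&l2g);CHKERRQ(ierr); /* J keeps a reference */

  /* Afterwards entries are inserted with *local* indices, same call pattern as
     MatSetValues:
       MatSetValuesLocal(J, 1, &ilocal, 1, &jlocal, &v, INSERT_VALUES);         */
  PetscFunctionReturn(0);
}
                  </pre>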
                  <div><br>
                  </div>
                  <div>Otherwise, you can play with the application
                    ordering <a
href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/index.html"
                      target="_blank" moz-do-not-send="true">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/index.html</a></div>
                  <span class="HOEnZb"><font color="#888888">
                      <div> </div>
                    </font></span></div>
                <span class="HOEnZb"><font color="#888888"><br>
                    <br clear="all">
                    <div><br>
                    </div>
                    -- <br>
                    <div class="m_-5153143099617110787gmail_signature">Stefano</div>
                  </font></span></div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 

____________________________________________________

Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
Institut für Aerodynamik und Strömungstechnik | Bunsenstr. 10 | 37073 Göttingen

Michael Werner 
Telefon 0551 709-2627 | Telefax 0551 709-2811 | <a class="moz-txt-link-abbreviated" href="mailto:Michael.Werner@dlr.de">Michael.Werner@dlr.de</a>
DLR.de
</pre>
  </body>
</html>