<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Oct 17, 2017 at 5:46 AM, Michael Werner <span dir="ltr"><<a href="mailto:michael.werner@dlr.de" target="_blank">michael.werner@dlr.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF">
    I'm not sure what you mean with this question?<br>
    The external CFD code, if that was what you referred to, can be run
    in parallel.<br></div></blockquote><div><br></div><div>Then why is it a problem that "in a parallel case, this call obviously gets called once per process"?</div><div><br></div><div>   Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF">
    <div class="gmail-m_-6501336469052660476moz-cite-prefix">Am 17.10.2017 um 11:11 schrieb Matthew
      Knepley:<br>
    </div>
    <blockquote type="cite">
      
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">On Tue, Oct 17, 2017 at 4:21 AM,
            Michael Werner <span dir="ltr"><<a href="mailto:michael.werner@dlr.de" target="_blank">michael.werner@dlr.de</a>></span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF"> That's something
                I'm still struggling with. In the serial case, I can
                simply extract the values from the original grid, and
                since the ordering of the Jacobian is the same there is
                no problem. In the parallel case this is still a more or
                less open question. That's why I thought about
                reordering the Jacobian. As long as the position of the
                individual IDs is the same for both, I don't have to
                care about their absolute position.<br>
                <br>
                I also wanted to thank you for your previous answer, it
                seems that the application ordering might be what I'm
                looking for. However, in the meantime I stumbled about
                another problem, that I have to solve first. My new
                problem is, that I call the external code within the
                shell matrix' multiply call. But in a parallel case,
                this call obviously gets called once per process. So
                right now I'm trying to circumvent this, so it might
                take a while before I'm able to come back to the
                original problem...<br>
>>
>> I am not understanding. Is your original code parallel?
>>
>>   Thanks,
>>
>>      Matt
>>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF"> Kind regards,<br>
                Michael<br>
                <br>
                <div class="gmail-m_-6501336469052660476m_-5791166958946276202moz-cite-prefix">Am
                  16.10.2017 um 17:25 schrieb Praveen C:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">I am interested to learn more about how
                    this works. How are the vectors created if the ids
                    are not contiguous in a partition ?
                    <div><br>
                    </div>
                    <div>Thanks</div>
                    <div>praveen</div>
                  </div>
                  <div class="gmail_extra"><br>
                    <div class="gmail_quote">On Mon, Oct 16, 2017 at
                      2:02 PM, Stefano Zampini <span dir="ltr"><<a href="mailto:stefano.zampini@gmail.com" target="_blank">stefano.zampini@gmail.com</a>></span>
                      wrote:<br>
>>>>> 2017-10-16 10:26 GMT+03:00 Michael Werner <michael.werner@dlr.de>:
>>>>>> Hello,
>>>>>>
>>>>>> I'm having trouble parallelizing a matrix-free code with PETSc.
>>>>>> In this code, I use an external CFD code to provide the
>>>>>> matrix-vector product for an iterative solver in PETSc. To
>>>>>> increase the convergence rate, I'm using an explicitly stored
>>>>>> Jacobian matrix to precondition the solver. This works fine for
>>>>>> serial runs. However, when I try to use multiple processes, I
>>>>>> face the problem that PETSc decomposes the preconditioner matrix,
>>>>>> and probably also the shell matrix, in a different way than the
>>>>>> external CFD code decomposes the grid.
>>>>>>
>>>>>> The Jacobian matrix is built in such a way that its rows and
>>>>>> columns correspond to the global IDs of the individual points in
>>>>>> my CFD mesh.
>>>>>>
>>>>>> The CFD code decomposes the domain based on the proximity of
>>>>>> points to each other, so that the resulting subgrids are
>>>>>> coherent. However, since it's an unstructured grid, those
>>>>>> subgrids are not necessarily made up of points with successive
>>>>>> global IDs. This is a problem, since PETSc seems to partition the
>>>>>> matrix into contiguous slices of rows.
>>>>>>
>>>>>> I'm not sure what the best approach to this problem might be. Is
>>>>>> it maybe possible to tell PETSc exactly which rows/columns it
>>>>>> should assign to the individual processes?
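>>>>>>
>>>>>> For reference, the relevant part of my setup looks roughly like
>>>>>> this (a simplified sketch, with A the shell matrix and P the
>>>>>> explicitly stored Jacobian):
>>>>>>
>>>>>>   KSP ksp;
>>>>>>   KSPCreate(PETSC_COMM_WORLD, &ksp);
>>>>>>   KSPSetOperators(ksp, A, P); /* A: MATSHELL, P: assembled AIJ */
>>>>>>   KSPSetFromOptions(ksp);
>>>>>>   KSPSolve(ksp, b, x);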
>>>>>
>>>>> If you are explicitly setting the values in your Jacobians via
>>>>> MatSetValues(), you can create an ISLocalToGlobalMapping
>>>>>
>>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISLocalToGlobalMappingCreate.html
>>>>>
>>>>> that maps the numbering you use for the Jacobians to its
>>>>> counterpart in the CFD ordering, then call
>>>>> MatSetLocalToGlobalMapping(), and then use MatSetValuesLocal()
>>>>> with the same arguments you are passing to MatSetValues() now.
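>>>>>
>>>>> Roughly like this (an untested sketch: assume cfd_to_petsc[i]
>>>>> gives the PETSc global index corresponding to index i of the
>>>>> numbering you currently pass to MatSetValues()):
>>>>>
>>>>>   ISLocalToGlobalMapping l2g;
>>>>>   ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nlocal,
>>>>>                                cfd_to_petsc, PETSC_COPY_VALUES,
>>>>>                                &l2g);
>>>>>   MatSetLocalToGlobalMapping(J, l2g, l2g);
>>>>>   /* then switch the insertion call, keeping the old indices: */
>>>>>   MatSetValuesLocal(J, nrows, rows, ncols, cols, vals,
>>>>>                     INSERT_VALUES);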
>>>>>
>>>>> Otherwise, you can play with the application ordering:
>>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/index.html
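>>>>>
>>>>> For the AO route, something along these lines (again only a
>>>>> sketch; app_ids[] holds the CFD global IDs owned by this process):
>>>>>
>>>>>   AO ao;
>>>>>   AOCreateBasic(PETSC_COMM_WORLD, nlocal, app_ids, NULL, &ao);
>>>>>   /* translate an array of CFD indices into PETSc indices: */
>>>>>   AOApplicationToPetsc(ao, nidx, idx);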
                              <span class="gmail-m_-6501336469052660476m_-5791166958946276202HOEnZb"><font color="#888888">
                                  <div> </div>
                                </font></span></div>
                            <span class="gmail-m_-6501336469052660476m_-5791166958946276202HOEnZb"><font color="#888888"><br>
                                <br clear="all">
                                <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888">
                                    <div><br>
                                    </div>
                                    -- <br>
                                    <div class="gmail-m_-6501336469052660476m_-5791166958946276202m_-5153143099617110787gmail_signature">Stefano</div>
                                  </font></span></font></span></div>
                          <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> </font></span></div>
                        <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> </font></span></blockquote>
                      <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> </font></span></div>
                    <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> <br>
                      </font></span></div>
                  <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> </font></span></blockquote>
                <span class="gmail-m_-6501336469052660476HOEnZb"><font color="#888888"> <br>
                    <pre class="gmail-m_-6501336469052660476m_-5791166958946276202moz-signature" cols="72">-- 

______________________________<wbr>______________________

Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
Institut für Aerodynamik und Strömungstechnik | <a href="https://maps.google.com/?q=Bunsenstr.+10+%7C+37073+G%C3%B6ttingen&entry=gmail&source=g" target="_blank">Bunsenstr. 10 | 37073 Göttingen</a>

Michael Werner 
Telefon 0551 709-2627 | Telefax 0551 709-2811 | <a class="gmail-m_-6501336469052660476m_-5791166958946276202moz-txt-link-abbreviated" href="mailto:Michael.Werner@dlr.de" target="_blank">Michael.Werner@dlr.de</a>
DLR.de









</pre>
>
> --
> ____________________________________________________
>
> Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
> Institut für Aerodynamik und Strömungstechnik | Bunsenstr. 10 | 37073 Göttingen
>
> Michael Werner
> Telefon 0551 709-2627 | Telefax 0551 709-2811 | Michael.Werner@dlr.de
> DLR.de

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/