<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    Sorry for my late response.<br>
    <br>
    I found out how to solve the issue: I had to renumber the DOFs so
    that each MPI rank owns a contiguous block of rows in the matrix
    system:
<a class="moz-txt-link-freetext" href="https://github.com/matnoel/EasyFEA/blob/dev/EasyFEA/Simulations/Solvers.py#L523-L618">https://github.com/matnoel/EasyFEA/blob/dev/EasyFEA/Simulations/Solvers.py#L523-L618</a> <br>
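    <p>For anyone hitting the same problem, the renumbering idea can be
      sketched without PETSc at all. This is a minimal illustration (the
      array and function names are hypothetical, not EasyFEA's actual
      code): given the rank that owns each DOF after partitioning, build
      a permutation so that each rank's DOFs become one contiguous range
      of global indices, matching PETSc's contiguous row-ownership
      layout.</p>

```python
import numpy as np

def contiguous_renumbering(dof_owner, n_ranks):
    """Return new_of_old: a permutation mapping old DOF indices to new
    ones, such that the new indices are grouped rank by rank.

    dof_owner[i] is the MPI rank owning old DOF i (hypothetical
    partitioning array, e.g. derived from the GMSH partition)."""
    n = len(dof_owner)
    new_of_old = np.empty(n, dtype=int)
    next_id = 0
    for rank in range(n_ranks):
        (mine,) = np.nonzero(dof_owner == rank)
        # This rank's DOFs get the next contiguous block of indices.
        new_of_old[mine] = np.arange(next_id, next_id + len(mine))
        next_id += len(mine)
    return new_of_old

# Example: 6 DOFs scattered between 2 ranks.
owner = np.array([1, 0, 0, 1, 0, 1])
perm = contiguous_renumbering(owner, 2)
# Rank 0's DOFs (old 1, 2, 4) become new 0, 1, 2;
# rank 1's DOFs (old 0, 3, 5) become new 3, 4, 5.
```

    <p>The assembled matrix and right-hand side are then permuted with
      this map before handing them to PETSc.</p>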
    <br>
    Best regards,<br>
    Matthieu Noel<br>
    <br>
    <div class="moz-cite-prefix">On 11/03/2026 at 21:56, Barry Smith
      wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:B8CE7908-3695-439A-9090-862766457636@petsc.dev">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div><br>
      </div>
         Does it compute the “correct” result with one MPI process? 
      <div>
        <div><br>
          <blockquote type="cite">
            <div>On Mar 10, 2026, at 4:29 AM, Matthieu Noel
              <a class="moz-txt-link-rfc2396E" href="mailto:matthieu.noel@inria.fr"><matthieu.noel@inria.fr></a> wrote:</div>
            <br class="Apple-interchange-newline">
            <div>
              <meta http-equiv="content-type"
                content="text/html; charset=UTF-8">
              <div>
                <p>Dear PETSc users,</p>
                <p>My name is Matthieu Noel, and I am a research
                  software engineer at INRIA.<br>
                  <br>
                  During my PhD thesis, I developed <strong>EasyFEA</strong>,
                  an open-source finite element analysis tool documented
                   here: <a class="moz-txt-link-freetext"
href="https://easyfea.readthedocs.io/en/stable/"
                     moz-do-not-send="true">https://easyfea.readthedocs.io/en/stable/</a>.</p>
                <p>I am currently working on <strong>Issue #26</strong>
                  (<a class="moz-txt-link-freetext"
href="https://github.com/matnoel/EasyFEA/issues/26"
                    moz-do-not-send="true">https://github.com/matnoel/EasyFEA/issues/26</a>),
                  where I aim to use <strong>petsc4py</strong> to solve
                  a simple steady-state thermal problem in parallel.<br>
                  <br>
                  The main challenge is <strong>constructing and
                    solving the parallel matrix system</strong>
                  (<code>Ax = b</code>).</p>
                <h3>Current Status:</h3>
                <ul>
                  <li><strong>Mesh partitioning</strong> has been
                    successfully performed using <strong>GMSH</strong>.</li>
                  <li>EasyFEA provides the <strong>assembled matrix
                      system</strong> (<code>Ax = b</code>) with
                    applied boundary conditions.</li>
                  <li>My current implementation using <strong>petsc4py</strong>
                    runs without errors, but the results obtained are
                    incorrect.</li>
                </ul>
                <p><span id="cid:part1.I6q4VcHx.fv30lC2y@inria.fr"><DbhO7xbQSISvapJk.png></span><br>
                  I suspect the problem lies in how the <strong>parallel
                    matrix and vectors are assembled or solved</strong>,
                  particularly in handling <strong>ghost DOFs</strong>
                  and ensuring proper communication between MPI ranks.</p>
                <p>I would be extremely grateful for any guidance,
                  resources, or examples you could share to help me
                  address this issue.<br>
                  My current development branch is available here: <a
                    class="moz-txt-link-freetext"
href="https://github.com/matnoel/EasyFEA/tree/26_mpi"
                    moz-do-not-send="true">https://github.com/matnoel/EasyFEA/tree/26_mpi</a>.</p>
                <p>The PETSc function can be found at this link:
                  <a class="moz-txt-link-freetext"
href="https://github.com/matnoel/EasyFEA/blob/26_mpi/EasyFEA/Simulations/Solvers.py#L649-L767"
                    moz-do-not-send="true">https://github.com/matnoel/EasyFEA/blob/26_mpi/EasyFEA/Simulations/Solvers.py#L649-L767</a></p>
                <p>I am currently running the script in the attached
                  files.</p>
                <p><br>
                  Thank you in advance for your time and expertise.<br>
                  <br>
                </p>
                <p>Best regards,<br>
                  Matthieu Noel<br>
                  <br>
                  <br>
                </p>
              </div>
              <span id="cid:74D8A4BB-04D9-4684-9397-BC1FADA1BFA7"><Thermal1.py></span></div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
  </body>
</html>