<div dir="ltr"><div>Hi Matthew,</div><div>Thanks for your reply!</div><div><br></div><div>Let me precise what I mean by defining few questions:</div><div><br></div><div>1. In order to obtain a parallel execution of simple_code.py, do I need to go with mpiexec python3 simple_code.py, or I can just launch python3 simple_code.py?</div><div>2. This simple_code.py consists of 2 parts: a) preparation of matrix b) solving the system of linear equations with PETSc. If I launch mpirun (or mpiexec) -np 8 python3 simple_code.py, I suppose that I will basically obtain 8 matrices and 8 systems to solve. However, I need to prepare only one matrix, but launch this code in parallel on 8 processors.</div><div>In fact, here attached you will find a similar code (scipy_code.py) with only one difference: the system of linear equations is solved with scipy. So when I solve it, I can clearly see that the solution is obtained in a parallel way. However, I do not use the command mpirun (or mpiexec). I just go with python3 scipy_code.py.</div><div>In this case, the first part (creation of the sparse matrix) is not parallel, whereas the solution of system is found in a parallel way.</div><div>So my question is, Do you think that it s possible to have the same behavior with PETSC? And what do I need for this?<br></div><div><br></div><div>I am asking this because for my colleague it worked! It means that he launches the simple_code.py on his computer using the command python3 simple_code.py (and not mpi-smth python3 simple_code.py) and he obtains a parallel execution of the same code.<br></div><div><br></div><div>Thanks for your help!</div><div>Ivan<br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Nov 15, 2018 at 11:54 AM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Thu, Nov 15, 2018 at 4:53 AM Ivan Voznyuk via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><p>Dear PETSC community,</p>
<p>I have a question regarding the parallel execution of petsc4py.</p>
<p>I have a simple code (attached as simple_code.py) which solves a
system of linear equations Ax=b using petsc4py. To execute it, I use the
command python3 simple_code.py, which yields a sequential execution.
With a colleague of mine, we launched this code on his computer, and this
time the execution was in parallel, although he used the same command
python3 simple_code.py (without mpirun or mpiexec).</p>
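<p>A minimal check of how many MPI processes the script actually sees would be something like the following (a sketch for illustration, not part of the attached file):</p>
<pre>
# Sketch: report which MPI rank this is and how many ranks exist.
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
print("process", comm.getRank(), "of", comm.getSize())
</pre>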
<p></p></div></blockquote><div>I am not sure what you mean. To run MPI programs in parallel, you need a launcher like mpiexec or mpirun. There are Python programs (like nemesis) that use the launcher API directly (called PMI), but that is not part of petsc4py.</div><div><br></div><div> Thanks,</div><div><br></div><div>    Matt</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><p>My configuration: Ubuntu 16.04 x86_64, Intel Core i7, PETSc 3.10.2, PETSC_ARCH=arch-linux2-c-debug, petsc4py 3.10.0 in a virtualenv <br>
</p>
<p>In order to parallelize it, I have already tried to:<br>
- use 2 different PCs<br>
- use Ubuntu 16.04 and 18.04<br>
- use different architectures (arch-linux2-c-debug, linux-gnu-c-debug, etc.)<br>
- use, of course, different configurations (my present configuration can be found in the attached make.log)<br>
- use MPI from both MPICH and Open MPI</p>
<p>Nothing worked.</p>
<p>Do you have any ideas?</p>
<p>Thanks and have a good day,<br>
Ivan</p>
<br>-- <br><div dir="ltr" class="m_9043555073033899979m_4831720893541188530gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Ivan VOZNYUK<div>PhD in Computational Electromagnetics</div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="m_9043555073033899979gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Ivan VOZNYUK<div>PhD in Computational Electromagnetics</div><div>+33 (0)6.95.87.04.55</div><div><a href="https://ivanvoznyukwork.wixsite.com/webpage" target="_blank">My webpage</a><br></div><div><a href="http://linkedin.com/in/ivan-voznyuk-b869b8106" target="_blank">My LinkedIn</a></div></div></div></div></div>