<div dir="ltr"><div dir="ltr">On Fri, Feb 7, 2025 at 8:20 AM <a href="mailto:medane.tchakorom@univ-fcomte.fr">medane.tchakorom@univ-fcomte.fr</a> <<a href="mailto:medane.tchakorom@univ-fcomte.fr">medane.tchakorom@univ-fcomte.fr</a>> wrote:</div><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Re:<br>
Please find below the output from the previous code, running on only one processor.

Mat Object: 1 MPI process
type: seqdense
7.2003197397953400e-01 3.9777780919128602e-01 9.8405227390177075e-01 1.4405427480322786e-01
6.1793966542126100e-02 7.3036588248200474e-02 7.3851607000756303e-01 9.9650445216117589e-01
1.0022337819588500e-02 1.0386628927366459e-01 4.0114727059134836e-01 1.0677308875937896e-01
1.4463931936456476e-01 2.5078039364333193e-01 5.2764865382548720e-01 9.8905332488367748e-01

buffer[0] = 7.200320e-01
buffer[1] = 6.179397e-02
buffer[2] = 1.002234e-02
buffer[3] = 1.446393e-01
buffer[4] = 0.000000e+00
buffer[5] = 0.000000e+00
buffer[6] = 0.000000e+00
buffer[7] = 0.000000e+00
buffer[8] = 3.977778e-01
buffer[9] = 7.303659e-02
buffer[10] = 1.038663e-01
buffer[11] = 2.507804e-01
buffer[12] = 0.000000e+00
buffer[13] = 0.000000e+00
buffer[14] = 0.000000e+00
buffer[15] = 0.000000e+00

Mat Object: 1 MPI process
type: seqdense
7.2003197397953400e-01 3.9777780919128602e-01 9.8405227390177075e-01 1.4405427480322786e-01
6.1793966542126100e-02 7.3036588248200474e-02 7.3851607000756303e-01 9.9650445216117589e-01
1.0022337819588500e-02 1.0386628927366459e-01 4.0114727059134836e-01 1.0677308875937896e-01
1.4463931936456476e-01 2.5078039364333193e-01 5.2764865382548720e-01 9.8905332488367748e-01
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00

I was expecting to get in “buffer” only the data entries from R_part. Please let me know if this is the expected behavior and whether I am missing something.

As Jose already pointed out, MatDenseGetSubMatrix() does not copy. It gives you a Mat front end to the same data, but with changed sizes. In this case the leading dimension (LDA) of the shared array is 8 (that of R_full), not 4, so when you iterate over the first 16 entries as if the submatrix were packed, you also pick up entries of R_full that do not belong to R_part (see the sketch below).

  Thanks,

     Matt
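In case it is useful, here is a minimal sketch of an LDA-aware loop over R_part. It assumes the program quoted further down; MatGetLocalSize(), MatDenseGetLDA(), and MatDenseGetArrayRead() are the relevant PETSc calls, and the print format is only illustrative:

----------------
/* Print only the entries that belong to R_part by honouring the leading
   dimension of the array it shares with R_full (MATDENSE is column-major). */
PetscInt           m, n, lda;
const PetscScalar *buffer;

PetscCall(MatGetLocalSize(R_part, &m, &n)); /* 4 rows and 4 columns here */
PetscCall(MatDenseGetLDA(R_part, &lda));    /* 8, inherited from R_full  */
PetscCall(MatDenseGetArrayRead(R_part, &buffer));
for (PetscInt j = 0; j < n; j++) {          /* columns                   */
  for (PetscInt i = 0; i < m; i++) {        /* submatrix rows only       */
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "R_part(%" PetscInt_FMT ",%" PetscInt_FMT ") = %e\n", i, j, (double)PetscRealPart(buffer[i + j * lda])));
  }
}
PetscCall(MatDenseRestoreArrayRead(R_part, &buffer));
----------------

If you need the 4x4 block as one contiguous buffer, you have to copy it out column by column with that same lda stride; the Mat returned by MatDenseGetSubMatrix() always aliases R_full's storage.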
Thanks,
Medane

> On 7 Feb 2025, at 11:34, Pierre Jolivet <pierre@joliv.et> wrote:
>
>
>
>> On 7 Feb 2025, at 11:05 AM, medane.tchakorom@univ-fcomte.fr wrote:
>>
>>
>> Dear all,
>>
>> I have been experiencing incoherent data entries from the code below when printing the array. Maybe I am doing something wrong.
>
> What is incoherent?
> Everything looks OK to me.
>
> Thanks,
> Pierre
>
>> ----------------
>>
>> PetscInt nlines = 8; // lines
>> PetscInt ncols = 4; // columns
>> PetscMPIInt rank;
>> PetscMPIInt size;
>>
>> // Initialize PETSc
>> PetscCall(PetscInitialize(&argc, &args, NULL, NULL));
>> PetscCallMPI(MPI_Comm_rank(MPI_COMM_WORLD, &rank));
>> PetscCallMPI(MPI_Comm_size(MPI_COMM_WORLD, &size));
>>
>> Mat R_full;
>> Mat R_part;
>> PetscInt idx_first_row = 0;
>> PetscInt idx_one_plus_last_row = nlines / 2;
>> PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncols, NULL, &R_full));
>>
>> // Get sub matrix
>> PetscCall(MatDenseGetSubMatrix(R_full, idx_first_row, idx_one_plus_last_row, PETSC_DECIDE, PETSC_DECIDE, &R_part));
>> // Add entries to sub matrix
>> MatSetRandom(R_part, NULL);
>> //View sub matrix
>> PetscCall(MatView(R_part, PETSC_VIEWER_STDOUT_WORLD));
>>
>> // Get array from sub matrix and print entries
>> PetscScalar *buffer;
>> PetscCall(MatDenseGetArray(R_part, &buffer));
>> PetscInt idx_end = (nlines/2) * ncols;
>>
>> for (int i = 0; i < idx_end; i++)
>> {
>> PetscPrintf(PETSC_COMM_SELF, "buffer[%d] = %e \n", i, buffer[i]);
>> }
>>
>> //Restore array to sub matrix
>> PetscCall(MatDenseRestoreArray(R_part, &buffer));
>> // Restore sub matrix
>> PetscCall(MatDenseRestoreSubMatrix(R_full, &R_part));
>> // View the initial matrix
>> PetscCall(MatView(R_full, PETSC_VIEWER_STDOUT_WORLD));
>>
>> PetscCall(MatDestroy(&R_full));
>>
>> PetscCall(PetscFinalize());
>> return 0;
>>
>> ----------------
>>
>>
>> Thanks
>> Medane
>
>

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/