Thanks Barry. Attached are the driver program, the binary matrix file, and the corresponding .mtx file.

Runtime command used:

sierra324@monteiro: time srun -n 2 -ppdebug ./petsc-mpibaij-unequalrows -fin trefethen.dat -matload_block_size 1


On Wednesday, April 1, 2015 12:28 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

   Send a data file you generated and your reader program and we'll debug it.

   Barry

> On Apr 1, 2015, at 2:18 PM, Steena M <stm8086@yahoo.com> wrote:
>
> Thanks Barry. I removed the preallocation calls. It is still complaining about the malloc and about incorrect data in the matrix file. I generate the binary matrix files with PETSc's PetscBinaryIO Python script, looping over a set of UFL sparse matrices. For this use case:
>
> mtx_mat = scipy.io.mmread('trefethen.mtx')
> PetscBinaryIO.PetscBinaryIO().writeMatSciPy(open('trefnew.dat','w'), mtx_mat)
>
> On Tuesday, March 31, 2015 9:15 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
>    You should not need to call any preallocation routines when using MatLoad().
>
>    How did you generate the file? Are you sure it has the correct information for the matrix?
>
>    Barry
>
> > On Mar 31, 2015, at 11:05 PM, Steena M <stm8086@yahoo.com> wrote:
> >
> > Thanks Matt. I'm still getting the malloc error
> >
> > [0]PETSC ERROR: Argument out of range!
> > [0]PETSC ERROR: New nonzero at (2,18) caused a malloc!
> >
> > and a new incorrect-matrix-file error:
> >
> > [0]PETSC ERROR: Unexpected data in file!
> > [0]PETSC ERROR: not matrix object!
> >
> > Maybe the order of calls is mixed up.
> > This is the code snippet:
> >
> > if (rank == 0)
> > {
> >   PetscPrintf(PETSC_COMM_WORLD, "\n On rank %d ", rank);
> >
> >   CHKERRQ(MatSetSizes(A, 15, PETSC_DETERMINE, 20, 20));
> >   CHKERRQ(MatSetType(A, MATMPIBAIJ));
> >   CHKERRQ(MatMPIBAIJSetPreallocation(A, 1, 1, NULL, 1, NULL));
> >   CHKERRQ(MatLoad(A, fd));
> >   CHKERRQ(MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE));
> > }
> > else
> > {
> >   PetscPrintf(PETSC_COMM_WORLD, "\n On rank %d ", rank);
> >
> >   CHKERRQ(MatSetSizes(A, 5, PETSC_DETERMINE, 20, 20));
> >   CHKERRQ(MatSetType(A, MATMPIBAIJ));
> >   CHKERRQ(MatMPIBAIJSetPreallocation(A, 1, 1, NULL, 1, NULL));
> >   CHKERRQ(MatLoad(A, fd));
> >   CHKERRQ(MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE));
> > }
> >
> > Is there something I'm missing?
> >
> > Thanks,
> > Steena
> >
> >
> > On Tuesday, March 31, 2015 6:10 PM, Matthew Knepley <knepley@gmail.com> wrote:
> >
> > On Tue, Mar 31, 2015 at 6:51 PM, Steena M <stm8086@yahoo.com> wrote:
> > Thanks Barry. I'm still getting the malloc error with NULL. Is there a way to distribute the matrix without explicit preallocation? Different matrices will be loaded at runtime, and assigning preallocation parameters would mean an additional preprocessing step.
> >
> > 1) MatSetOption(MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE)
> >
> > 2) Note that this is never ever ever more efficient than making another pass and preallocating.
> >
> >    Thanks,
> >
> >       Matt
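For reference, a minimal sketch of the call sequence these replies converge on — illustrative, not the original driver program — assuming a PETSc 3.5-era C API, a 20x20 matrix split 15/5 across two ranks, block size 1, and a hard-coded file name standing in for the -fin option (error checking omitted):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscViewer fd;
  PetscMPIInt rank;
  PetscInt    nlocal;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* stand-in for the file passed with -fin */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "trefethen.dat", FILE_MODE_READ, &fd);

  nlocal = (rank == 0) ? 15 : 5;                    /* unequal local row counts: 15 + 5 = 20 */

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, nlocal, PETSC_DETERMINE, 20, 20);  /* local columns left to PETSC_DETERMINE */
  MatSetType(A, MATMPIBAIJ);
  MatLoad(A, fd);                                   /* MatLoad handles preallocation itself */

  MatDestroy(&A);
  PetscViewerDestroy(&fd);
  PetscFinalize();
  return 0;
}

Since MatSetSizes() only differs between the ranks in the local row count, the per-rank if/else blocks in the snippet above are not needed.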
> > --------------------------------------------
> > On Sun, 3/29/15, Barry Smith <bsmith@mcs.anl.gov> wrote:
> >
> >  Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
> >  To: "Steena M" <stm8086@yahoo.com>
> >  Cc: "Matthew Knepley" <knepley@gmail.com>, petsc-users@mcs.anl.gov
> >  Date: Sunday, March 29, 2015, 9:26 PM
> >
> > > On Mar 29, 2015, at 11:05 PM, Steena M <stm8086@yahoo.com> wrote:
> > >
> > > Thanks Matt. I used PETSC_DETERMINE but I'm now getting an allocation-based error:
> > >
> > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> > > [0]PETSC ERROR: Argument out of range!
> > > [0]PETSC ERROR: New nonzero at (2,18) caused a malloc!
> > > [0]PETSC ERROR: ------------------------------------------------------------------------
> > >
> > > As the next step, I tried preallocating on each rank for the diagonal and off-diagonal sections of the matrix. My current approximations for the preallocation
> > >
> > > CHKERRQ( MatMPIBAIJSetPreallocation(A,1,5,PETSC_DEFAULT,5,PETSC_DEFAULT) );
> >
> >    These arguments where you pass PETSC_DEFAULT are expecting a pointer, not an integer. You can pass NULL in those locations. Though it is better to provide the correct preallocation rather than some defaults.
> >
> >    Barry
> >
> > > are throwing segmentation errors:
> > >
> > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> > >
> > > Any insights into what I'm doing wrong?
> > >
> > > Thanks,
> > > Steena
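To make the argument types concrete (a hedged sketch, not code from the thread): in MatMPIBAIJSetPreallocation(A, bs, d_nz, d_nnz, o_nz, o_nnz) the d_nz/o_nz arguments are scalar per-block-row estimates, while d_nnz/o_nnz are optional PetscInt arrays with one entry per local block row, which is why an integer such as PETSC_DEFAULT in those slots can segfault. The helper name below is hypothetical and the counts are illustrative:

#include <petscmat.h>

/* Hypothetical helper: preallocate a MATMPIBAIJ matrix with block size 1,
   either from exact per-block-row counts or from scalar estimates. */
static PetscErrorCode PreallocateExample(Mat A, const PetscInt d_nnz[], const PetscInt o_nnz[])
{
  PetscErrorCode ierr;

  if (d_nnz && o_nnz) {
    /* exact counts: one entry per local block row; the scalar slots are ignored */
    ierr = MatMPIBAIJSetPreallocation(A, 1, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  } else {
    /* scalar estimates only: the array slots must be NULL, not PETSC_DEFAULT */
    ierr = MatMPIBAIJSetPreallocation(A, 1, 5, NULL, 5, NULL);CHKERRQ(ierr);
  }
  return 0;
}

With exact counts, the "new nonzero caused a malloc" error should disappear without having to turn off MAT_NEW_NONZERO_ALLOCATION_ERR.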
> > --------------------------------------------
> > On Sun, 3/29/15, Matthew Knepley <knepley@gmail.com> wrote:
> >
> >  Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
> >  To: "Steena M" <stm8086@yahoo.com>
> >  Cc: "Barry Smith" <bsmith@mcs.anl.gov>, petsc-users@mcs.anl.gov
> >  Date: Sunday, March 29, 2015, 10:02 PM
> >
> > On Sun, Mar 29, 2015 at 9:56 PM, Steena M <stm8086@yahoo.com> wrote:
> > Hi Barry,
> >
> > I am trying to partition a 20-row by 20-column sparse matrix between two procs such that proc 0 has 15 rows and 20 cols and proc 1 has 5 rows and 20 cols. The code snippet:
> >
> > CHKERRQ(MatCreate(PETSC_COMM_WORLD,&A)); // at runtime: -matload_block_size 1
> >
> > if (rank == 0)
> > {
> >   CHKERRQ( MatSetSizes(A, 15, 20, 20, 20) ); // rank 0 gets 75% of the rows
> >   CHKERRQ( MatSetType(A, MATMPIBAIJ) );
> >   CHKERRQ( MatLoad(A,fd) );
> > }
> > else
> > {
> >   CHKERRQ( MatSetSizes(A, 5, 20, 20, 20) ); // rank 1 gets 25% of the rows
> >   CHKERRQ( MatSetType(A, MATMPIBAIJ) );
> >   CHKERRQ( MatLoad(A,fd) );
> > }
> >
> > This throws the following error (probably from psplit.c):
> >
> > [1]PETSC ERROR: --------------------- Error Message ------------------------------------
> > [1]PETSC ERROR: Nonconforming object sizes!
> > [1]PETSC ERROR: Sum of local lengths 40 does not equal global length 20, my local length 20
> >   likely a call to VecSetSizes() or MatSetSizes() is wrong.
> >   See http://www.mcs.anl.gov/petsc/documentation/faq.html#split!
> >
> > This error printout doesn't quite make sense to me. I'm trying to specify a total matrix size of 20x20... I haven't yet figured out where the '40' comes from in the error message.
> >
> > Any thoughts on what might be going wrong?
> >
> > It's the column specification. Just use PETSC_DETERMINE for the local columns, since all our sparse matrix formats are row divisions anyway.
> >
> >    Thanks,
> >
> >       Matt
> >
> > Thanks in advance,
> > Steena
> >
> > --------------------------------------------
> > On Sun, 3/22/15, Barry Smith <bsmith@mcs.anl.gov> wrote:
> >
> >  Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
> >  To: "Steena M" <stm8086@yahoo.com>
> >  Cc: petsc-users@mcs.anl.gov
> >  Date: Sunday, March 22, 2015, 3:58 PM
> >
> >    Steena,
> >
> >    I am a little unsure of your question.
> >
> >    1) You can create a MPIBAIJ matrix with any distribution of block rows per process you want; just set the local row size for each process to be what you like. Use MatCreateVecs() to get correspondingly laid-out vectors.
> >
> >    or 2) if you have a MPIBAIJ matrix with "equal" row layout and you want a new one with an uneven row layout, you can simply use MatGetSubMatrix() to create that new matrix.
> >
> >    Barry
> >
> >    Unless you have another reason to have the matrix with an equal row layout, I would just generate the matrix with the layout you want.
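A rough sketch of option 2) — again illustrative rather than code from the thread — repartitioning an already-loaded, evenly distributed square MATMPIBAIJ matrix into the 15/5 layout with MatGetSubMatrix(); the helper name and the stride index sets are assumptions:

#include <petscmat.h>

/* Hypothetical helper: build a copy of A in which this rank owns 'nlocal'
   rows starting at global row 'first' (e.g. 15 at row 0 on rank 0,
   5 at row 15 on rank 1 for a 20x20 matrix). */
static PetscErrorCode RepartitionExample(Mat A, PetscInt nlocal, PetscInt first, Mat *Anew)
{
  PetscErrorCode ierr;
  IS             isrow, iscol;

  ierr = ISCreateStride(PETSC_COMM_WORLD, nlocal, first, 1, &isrow);CHKERRQ(ierr);
  /* iscol lists the columns that land in this rank's diagonal block;
     for a square matrix, mirror the new row ownership */
  ierr = ISCreateStride(PETSC_COMM_WORLD, nlocal, first, 1, &iscol);CHKERRQ(ierr);
  ierr = MatGetSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, Anew);CHKERRQ(ierr);
  ierr = ISDestroy(&isrow);CHKERRQ(ierr);
  ierr = ISDestroy(&iscol);CHKERRQ(ierr);
  return 0;
}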
> > > On Mar 22, 2015, at 5:50 PM, Steena M <stm8086@yahoo.com> wrote:
> > >
> > > Hello,
> > >
> > > I need to distribute a sparse matrix such that each proc owns an unequal number of blocked rows before I proceed with MPI MatMult. My initial thoughts on doing this:
> > >
> > > 1) Use MatGetSubMatrices() on the test MATMPIBAIJ matrix to produce a new matrix where each proc has an unequal number of rows.
> > >
> > > 2) Provide a scatter context for vector X (for MatMult), using IS iscol from MatGetSubMatrices() while creating the vector X.
> > >
> > > 3) Call MatMult().
> > >
> > > Will MatMult_MPIBAIJ continue to scatter this matrix and vector such that each proc will own an equal number of matrix rows and corresponding diagonal vector elements? Should I write my own MPI MatMult function to retain my redistribution of the matrix and vector?
> > >
> > > Thanks in advance,
> > > Steena
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> >    -- Norbert Wiener
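Regarding steps 2) and 3) above: once the matrix carries the uneven layout, Barry's point 1) amounts to letting the matrix hand back conformingly laid-out vectors instead of building a scatter context by hand. A short sketch, assuming A already has the 15/5 layout and a PETSc version where MatCreateVecs() is available (older releases name it MatGetVecs()):

Vec x, y;

MatCreateVecs(A, &x, &y);   /* x matches A's column layout, y matches A's row layout */
VecSet(x, 1.0);             /* arbitrary test values */
MatMult(A, x, y);           /* the stock MatMult handles the scatter; no custom version needed */
VecDestroy(&x);
VecDestroy(&y);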