On Tue, Mar 31, 2015 at 6:51 PM, Steena M <stm8086@yahoo.com> wrote:

> Thanks Barry. I'm still getting the malloc error with NULL. Is there a way to
> distribute the matrix without explicit preallocation? Different matrices will
> be loaded at runtime, and assigning preallocation parameters would mean an
> additional preprocessing step.

1) MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE)

2) Note that this is never, ever more efficient than making another pass over the matrix and preallocating.

  Thanks,

     Matt
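For reference, a minimal sketch of option 1) in context. This is illustrative only: the matrix name A and the CHKERRQ-style error checking are borrowed from the snippets quoted further down, and the surrounding create/load calls stand in for whatever the real code does.

  Mat A;
  CHKERRQ( MatCreate(PETSC_COMM_WORLD, &A) );
  CHKERRQ( MatSetType(A, MATMPIBAIJ) );
  /* Do not error out when a value lands in a slot that was not preallocated;
     PETSc will malloc on the fly instead (correct, but slow).               */
  CHKERRQ( MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) );
  /* ... MatSetSizes() / MatLoad() as in the code quoted below ...           */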
--------------------------------------------
On Sun, 3/29/15, Barry Smith <bsmith@mcs.anl.gov> wrote:

 Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
 To: "Steena M" <stm8086@yahoo.com>
 Cc: "Matthew Knepley" <knepley@gmail.com>, petsc-users@mcs.anl.gov
 Date: Sunday, March 29, 2015, 9:26 PM

> On Mar 29, 2015, at 11:05 PM, Steena M <stm8086@yahoo.com> wrote:
>
> Thanks Matt. I used PETSC_DETERMINE but I'm now getting an allocation-based error:
>
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Argument out of range!
> [0]PETSC ERROR: New nonzero at (2,18) caused a malloc!
> [0]PETSC ERROR: ------------------------------------------------------------------------
>
> As the next step, I tried preallocating the diagonal and off-diagonal sections
> of the matrix on each rank. My current approximations for preallocation,
>
>   CHKERRQ( MatMPIBAIJSetPreallocation(A,1,5,PETSC_DEFAULT,5,PETSC_DEFAULT) );

   The arguments where you pass PETSC_DEFAULT are expecting a pointer (the optional d_nnz/o_nnz count arrays), not an integer. You can pass NULL in those locations, though it is better to provide the correct preallocation rather than some defaults.

   Barry
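To illustrate Barry's point with the values from the call above (block size 1, an estimate of 5 nonzero blocks per row in both the diagonal and off-diagonal parts), the corrected call would look something like:

   /* NULL where the optional per-(block-)row count arrays would go;
      the scalar estimates (5) are then used for every row.          */
   CHKERRQ( MatMPIBAIJSetPreallocation(A, 1, 5, NULL, 5, NULL) );

If exact per-row counts are available, passing them as arrays in place of the NULLs is the "correct preallocation" Barry refers to.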

> are throwing segmentation errors.
>
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>
> Any insights into what I'm doing wrong?
>
> Thanks,
> Steena
>
> On Sun, 3/29/15, Matthew Knepley <knepley@gmail.com> wrote:
>
>  Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
>  To: "Steena M" <stm8086@yahoo.com>
>  Cc: "Barry Smith" <bsmith@mcs.anl.gov>, petsc-users@mcs.anl.gov
>  Date: Sunday, March 29, 2015, 10:02 PM
>
> On Sun, Mar 29, 2015 at 9:56 PM, Steena M <stm8086@yahoo.com> wrote:
> Hi Barry,
>
> I am trying to partition a 20 row and 20 col sparse matrix between two procs
> such that proc 0 has 15 rows and 20 cols and proc 1 has 5 rows and 20 cols.
> The code snippet:
>
>   CHKERRQ( MatCreate(PETSC_COMM_WORLD,&A) );   // at runtime: -matload_block_size 1
>
>   if (rank == 0)
>   {
>     CHKERRQ( MatSetSizes(A, 15, 20, 20, 20) ); // rank 0 gets 75% of the rows
>     CHKERRQ( MatSetType(A, MATMPIBAIJ) );
>     CHKERRQ( MatLoad(A,fd) );
>   }
>   else
>   {
>     CHKERRQ( MatSetSizes(A, 5, 20, 20, 20) );  // rank 1 gets 25% of the rows
>     CHKERRQ( MatSetType(A, MATMPIBAIJ) );
>     CHKERRQ( MatLoad(A,fd) );
>   }
>
> This throws the following error (probably from psplit.c):
>
> [1]PETSC ERROR: --------------------- Error Message ------------------------------------
> [1]PETSC ERROR: Nonconforming object sizes!
> [1]PETSC ERROR: Sum of local lengths 40 does not equal global length 20, my local length 20
>   likely a call to VecSetSizes() or MatSetSizes() is wrong.
>   See http://www.mcs.anl.gov/petsc/documentation/faq.html#split!
>
> This error printout doesn't quite make sense to me. I'm trying to specify a
> total matrix size of 20x20... I haven't yet figured out where the '40' comes
> from in the error message.
>
> Any thoughts on what might be going wrong?
>
> It's the column specification. Just use PETSC_DETERMINE for the local columns,
> since all our sparse matrix formats are row divisions anyway. (The '40' in the
> error message is the sum of the local column lengths you specified: 20 on each
> of the two ranks, which does not match the global length of 20.)
>
>   Thanks,
>
>      Matt
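A minimal sketch of that suggestion applied to the snippet above; only the local-column argument of MatSetSizes() changes, everything else stays as in the quoted code:

   if (rank == 0)
     CHKERRQ( MatSetSizes(A, 15, PETSC_DETERMINE, 20, 20) ); // 15 local rows on rank 0
   else
     CHKERRQ( MatSetSizes(A, 5, PETSC_DETERMINE, 20, 20) );  // 5 local rows on rank 1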
> Thanks in advance,
> Steena
>
> --------------------------------------------
> On Sun, 3/22/15, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
>  Subject: Re: [petsc-users] Unequal sparse matrix row distribution for MPI MatMult
>  To: "Steena M" <stm8086@yahoo.com>
>  Cc: petsc-users@mcs.anl.gov
>  Date: Sunday, March 22, 2015, 3:58 PM
>
> Steena,
>
> I am a little unsure of your question.
>
> 1) You can create an MPIBAIJ matrix with any distribution of block rows per
> process you want; just set the local row size for each process to be what
> you like. Use MatCreateVecs() to get correspondingly laid out vectors.
>
> 2) Or, if you have an MPIBAIJ matrix with "equal" row layout and you want a
> new one with an uneven row layout, you can simply use MatGetSubMatrix() to
> create that new matrix.
>
> Barry
>
> Unless you have another reason to keep the matrix with an equal row layout,
> I would just generate the matrix with the layout you want.
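Putting Barry's option 1) together with the rest of the thread, a rough sketch of the whole sequence might look like the following. The 15/5 row split, the viewer fd, and the rank variable are taken from Steena's code quoted earlier in the thread; the VecSet()/MatMult() tail is purely illustrative.

   Mat A;
   Vec x, y;
   CHKERRQ( MatCreate(PETSC_COMM_WORLD, &A) );
   CHKERRQ( MatSetType(A, MATMPIBAIJ) );
   /* Uneven block-row layout: 15 local rows on rank 0, 5 on rank 1;
      let PETSc choose the local column size.                        */
   CHKERRQ( MatSetSizes(A, (rank == 0) ? 15 : 5, PETSC_DETERMINE, 20, 20) );
   CHKERRQ( MatLoad(A, fd) );
   /* Vectors whose layout matches the matrix's columns (x) and rows (y). */
   CHKERRQ( MatCreateVecs(A, &x, &y) );
   CHKERRQ( VecSet(x, 1.0) );
   CHKERRQ( MatMult(A, x, y) );   /* y = A*x with the uneven row layout */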
>
> > On Mar 22, 2015, at 5:50 PM, Steena M <stm8086@yahoo.com> wrote:
> >
> > Hello,
> >
> > I need to distribute a sparse matrix such that each proc owns an unequal
> > number of blocked rows before I proceed with MPI MatMult. My initial
> > thoughts on doing this:
> >
> > 1) Use MatGetSubMatrices() on the test MATMPIBAIJ matrix to produce a new
> > matrix where each proc has an unequal number of rows.
> >
> > 2) Provide a scatter context for vector X (for MatMult) using IS iscol from
> > MatGetSubMatrices() while creating the vector X.
> >
> > 3) Call MatMult().
> >
> > Will MatMult_MPIBAIJ continue to scatter this matrix and vector such that
> > each proc will own an equal number of matrix rows and corresponding diagonal
> > vector elements? Should I write my own MPIMatMult function to retain my
> > redistribution of the matrix and vector?
> >
> > Thanks in advance,
> > Steena
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener