<div dir="ltr">On Fri, Dec 28, 2012 at 4:01 PM, Jelena Slivka <span dir="ltr"><<a href="mailto:slivkaje@gmail.com" target="_blank">slivkaje@gmail.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div>Hello!<br></div>I have a few simple questions about PETSc about functions that I can't seem to find in the documentation:<br>
</div>1) Is there a way to automatically create a matrix in which all elements are the same scalar value a, e.g. something like ones(m,n) in Matlab?<br></div></div></div></div></blockquote><div><br></div><div style>That matrix (or any very low rank matrix) should not be stored explicitly as a dense matrix.<br>
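If you only need the action of a*ones(M,N), a matrix-free MATSHELL is enough. A rough sketch (MatMult_Ones and the size names m, n, M, N are just illustrative, not an existing PETSc routine):

#include <petscmat.h>

/* Sketch: y = ones(M,N) * x, i.e. every entry of y is the sum of the entries
   of x (scale the sum by a for a*ones(M,N)).  Applied matrix-free. */
static PetscErrorCode MatMult_Ones(Mat A, Vec x, Vec y)
{
  PetscScalar    sum;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecSum(x, &sum);CHKERRQ(ierr);
  ierr = VecSet(y, sum);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Usage, with m,n the local and M,N the global sizes:
     Mat A;
     MatCreateShell(PETSC_COMM_WORLD, m, n, M, N, NULL, &A);
     MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MatMult_Ones);
*/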
> 2) Is there an equivalent to Matlab's .* operator?

There is no MatPointwiseMult(). It could be added, but I'm not aware of a use for this operator outside of matrix misuse (using a Mat to represent an array of numbers that is not an operator and thus should really be a Vec, perhaps managed using a DMDA).
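For Vecs the elementwise product does exist: VecPointwiseMult(). A small self-contained sketch (sizes and values are arbitrary):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y, w;
  PetscErrorCode ierr;

  PetscInitialize(&argc, &argv, NULL, NULL);
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &w);CHKERRQ(ierr);
  ierr = VecSet(x, 2.0);CHKERRQ(ierr);
  ierr = VecSet(y, 3.0);CHKERRQ(ierr);
  ierr = VecPointwiseMult(w, x, y);CHKERRQ(ierr);   /* w[i] = x[i] * y[i], like x .* y */
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = VecDestroy(&w);CHKERRQ(ierr);
  PetscFinalize();
  return 0;
}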
> 3) Is there a function that can create matrix C by appending matrices A and B?
>
> Grateful in advance

Block matrices can be manipulated efficiently using MATNEST, but there is a very high probability of misuse unless you really understand why that is an appropriate data structure. Much more likely, you should create a matrix of size C and then assemble the parts of A and B into it, perhaps using MatGetLocalSubMatrix() so that the assembly "looks" like assembling A and B separately.
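If a plain sequential concatenation really is what you need, a rough sketch of C = [A B] built by copying entries row by row (ConcatColumns is just an illustrative helper; it assumes A and B have the same number of rows and skips preallocation):

#include <petscmat.h>

/* Sketch: C = [A B] for sequential AIJ matrices, shifting B's column
   indices by the number of columns of A. */
static PetscErrorCode ConcatColumns(Mat A, Mat B, Mat *C)
{
  PetscInt           m, nA, nB, i, ncols;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = MatGetSize(A, &m, &nA);CHKERRQ(ierr);
  ierr = MatGetSize(B, NULL, &nB);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_SELF, C);CHKERRQ(ierr);
  ierr = MatSetSizes(*C, PETSC_DECIDE, PETSC_DECIDE, m, nA + nB);CHKERRQ(ierr);
  ierr = MatSetType(*C, MATSEQAIJ);CHKERRQ(ierr);
  ierr = MatSetUp(*C);CHKERRQ(ierr);
  for (i = 0; i < m; i++) {
    ierr = MatGetRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    ierr = MatSetValues(*C, 1, &i, ncols, cols, vals, INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatRestoreRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    ierr = MatGetRow(B, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    {
      PetscInt j, *shifted;
      ierr = PetscMalloc(ncols*sizeof(PetscInt), &shifted);CHKERRQ(ierr);
      for (j = 0; j < ncols; j++) shifted[j] = cols[j] + nA;  /* shift B's columns past A */
      ierr = MatSetValues(*C, 1, &i, ncols, shifted, vals, INSERT_VALUES);CHKERRQ(ierr);
      ierr = PetscFree(shifted);CHKERRQ(ierr);
    }
    ierr = MatRestoreRow(B, i, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(*C, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*C, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}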
Note that in parallel, you almost never want "concatenation" in the matrix sense of [A B; C D]. Instead, you want some row and column permutation in which the operation would be concatenation, but in practice the matrices are interleaved with some granularity so that both are well distributed across the parallel machine.