<div dir="ltr">Barry wanted to do something about PLAPACK. He found some bugs; I don't know the details. Of course, if I find any bugs in PETSc, I will report them. I benefit a lot from it. Thank you for your work on PETSc :).<br>
<br>Regards,<br>Yujie<br><br><div class="gmail_quote">On Mon, Sep 22, 2008 at 7:29 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d">On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin <<a href="mailto:dalcinl@gmail.com">dalcinl@gmail.com</a>> wrote:<br>
> On Mon, Sep 22, 2008 at 11:03 PM, Yujie <<a href="mailto:recrusader@gmail.com">recrusader@gmail.com</a>> wrote:<br>
>> Dear Lisandro:<br>
>><br>
>> Barry has tried to build an interface to PLAPACK. However, there are<br>
>> some bugs in PLAPACK, so it doesn't work.<br>
><br>
> Sorry, I didn't know about those PLAPACK issues.<br>
<br>
</div>Neither did I, and seeing as I use it, this is interesting. Please,<br>
please, please<br>
report any bugs you find, because I have been using it without problems.<br>
<br>
Matt<br>
<div><div></div><div class="Wj3C7c"><br>
>> I am wondering if CG in<br>
>> PETSc can work with a parallel dense matrix.<br>
><br>
> Of course it works. In fact, any other KSP should work too. As Barry said,<br>
> the KSP methods are INDEPENDENT of the matrix format; try -pc_type<br>
> jacobi as the preconditioner.<br>
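The point above can be illustrated with a short sketch: Krylov methods such as CG touch the matrix only through matrix-vector products, so the storage format (sparse or dense) does not matter to the algorithm. The following is a minimal, self-contained Python sketch of CG with a Jacobi (diagonal) preconditioner applied to a plain dense matrix. It is illustrative only, not PETSc code; in PETSc the equivalent choice would be made with -ksp_type cg -pc_type jacobi.

```python
# Illustrative sketch (NOT PETSc code): preconditioned CG only needs
# matrix-vector products, so the matrix below is a plain dense
# list-of-lists. The Jacobi (diagonal) preconditioner plays the role
# of -pc_type jacobi.

def matvec(A, x):
    """Dense matrix-vector product y = A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg_jacobi(A, b, tol=1e-10, maxit=100):
    """Preconditioned CG for a symmetric positive-definite dense A,
    with M = diag(A) as the Jacobi preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                # r = b - A*0 = b
    d = [A[i][i] for i in range(n)]         # Jacobi: M = diag(A)
    z = [ri / di for ri, di in zip(r, d)]   # z = M^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [ri / di for ri, di in zip(r, d)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Nothing in the loop inspects how A is stored; swapping in a sparse matvec would leave the solver unchanged, which is exactly the format-independence being described.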
><br>
>> When using the same matrix, which<br>
>> one is faster, sequential or parallel? thanks.<br>
><br>
> For a fixed-size matrix, you should get really good speedups iterating<br>
> in parallel. Of course, that will work even better if you can generate<br>
> the local rows of the matrix on each processor. If not, communicating<br>
> the matrix rows from the 'master' to the 'slaves' could be a real<br>
> bottleneck (a large amount of data to compute at the master while the<br>
> slaves wait, and a large amount of data to scatter from the master to<br>
> the slaves). If you cannot avoid dense matrices, then you should try<br>
> hard to compute the local rows at the owning processor.<br>
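To make the "compute the local rows at the owning processor" advice concrete, here is a small sketch of the contiguous row partition PETSc uses by default: rows are split as evenly as possible, with the first n mod size processes taking one extra row. Each process then builds only its own block, so nothing has to be scattered from a master. This is plain Python for illustration; the helper names (ownership_range, local_rows) are assumptions of this sketch, not the PETSc API.

```python
# Hedged sketch: PETSc distributes a matrix by contiguous blocks of
# rows. This helper mimics the default even split: each of the `size`
# processes owns n // size rows, and the first n % size processes own
# one extra. The names below are illustrative, not PETSc functions.

def ownership_range(n, rank, size):
    """Return (start, end) of the rows owned by `rank` (end exclusive)."""
    base, extra = divmod(n, size)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end

def local_rows(n, rank, size, entry):
    """Build only this process's rows of an n x n dense matrix,
    calling entry(i, j) for each locally owned element."""
    start, end = ownership_range(n, rank, size)
    return [[entry(i, j) for j in range(n)] for i in range(start, end)]
```

With a map like this, every process can call entry(i, j) directly for its own rows instead of receiving them from rank 0, which removes the master-to-slave scatter described above.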
><br>
><br>
>><br>
>> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin <<a href="mailto:dalcinl@gmail.com">dalcinl@gmail.com</a>> wrote:<br>
>>><br>
>>> Well, any iterative solver will actually work, but expect really<br>
>>> poor scalability :-). I believe (I have never used dense matrices)<br>
>>> that you could use a direct method (PLAPACK?), but again, be prepared<br>
>>> for long running times if your problem is (even moderately) large.<br>
>>><br>
>>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie <<a href="mailto:recrusader@gmail.com">recrusader@gmail.com</a>> wrote:<br>
>>> > To my knowledge, PETSc doesn't provide solvers based on parallel<br>
>>> > dense matrices, such as CG, GMRES, and so on. If that is the case,<br>
>>> > how can this problem be handled?<br>
>>> > Thanks.<br>
>>> ><br>
>>> > Regards,<br>
>>> ><br>
>>> > Yujie<br>
>>> ><br>
>>><br>
>>><br>
>>><br>
>>> --<br>
>>> Lisandro Dalcín<br>
>>> ---------------<br>
>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
>>> Tel/Fax: +54-(0)342-451.1594<br>
>>><br>
>><br>
>><br>
><br>
><br>
><br>
> --<br>
> Lisandro Dalcín<br>
> ---------------<br>
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>
> Tel/Fax: +54-(0)342-451.1594<br>
><br>
><br>
<br>
<br>
<br>
</div></div><font color="#888888">--<br>
What most experimenters take for granted before they begin their<br>
experiments is infinitely more interesting than any results to which<br>
their experiments lead.<br>
-- Norbert Wiener<br>
<br>
</font></blockquote></div><br></div>