about KSP based on parallel dense matrix

Yujie recrusader at gmail.com
Tue Sep 23 11:09:08 CDT 2008


Yes, I use PETSc 2.3.3. In fact, I didn't use PLAPACK because Barry told me
it didn't work :). I will try it. Thanks.
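For reference, the --download-plapack flag that Richard mentions below is an
option to PETSc's configure script. A minimal sketch of such a build for the
2.3.3 series follows; the MPI path and the BLAS/LAPACK download flag are
illustrative assumptions, not taken from this thread:

    $ cd $PETSC_DIR
    $ ./config/configure.py --with-mpi-dir=/opt/mpich2 \
          --download-f-blas-lapack=1 --download-plapack=1
    $ make all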

On Mon, Sep 22, 2008 at 9:20 PM, Richard Tran Mills <rmills at climate.ornl.gov> wrote:

> Matt,
>
> I use it too.  Last time I checked, though, it seemed to be broken in
> petsc-dev: I could no longer build it using --download-plapack with
> configure.py.  (I think the problem is that no one finished migrating it to
> the new direct solver interface -- I haven't had time to investigate,
> though.)
>
> It was working for me in 2.3.3, though.  I assume Yujie is using the
> release version of PETSc?
>
> --Richard
>
> Matthew Knepley wrote:
>
>> On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin <dalcinl at gmail.com>
>> wrote:
>>
>>> On Mon, Sep 22, 2008 at 11:03 PM, Yujie <recrusader at gmail.com> wrote:
>>>
>>>> Dear Lisandro:
>>>>
>>>> Barry has tried to build an interface to PLAPACK. However, there are
>>>> some bugs in PLAPACK, so it doesn't work.
>>>>
>>> Sorry, I didn't know about those PLAPACK issues.
>>>
>>
>> Neither did I, and since I use it, this is interesting. Please, please
>> report any bugs you find, because I have been using it without problems.
>>
>>   Matt
>>
>>>> I am wondering if CG in PETSc can work with a parallel dense matrix.
>>>>
>>> Of course it works. In fact, any other KSP should work too. As Barry said,
>>> the KSP methods are INDEPENDENT of the matrix format; try -pc_type
>>> jacobi as the preconditioner.
>>>
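Concretely, the solve Lisandro describes might look like the sketch below,
written against the PETSc 2.3.3-era C API. The matrix fill is elided, n is an
arbitrary size, error checking (ierr/CHKERRQ) is omitted for brevity, and CG
of course still requires the assembled matrix to be symmetric positive
definite:

    #include "petscksp.h"

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      x, b;
      KSP      ksp;
      PC       pc;
      PetscInt n = 1000;

      PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

      /* Parallel dense matrix; each process owns a contiguous block of rows */
      MatCreateMPIDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
                        n, n, PETSC_NULL, &A);
      /* ... set entries with MatSetValues(), then assemble ... */
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatGetVecs(A, &x, &b);           /* vectors matching the matrix layout */
      VecSet(b, 1.0);

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
      KSPSetType(ksp, KSPCG);          /* CG; A must be SPD */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCJACOBI);         /* same as -pc_type jacobi */
      KSPSetFromOptions(ksp);          /* allow runtime overrides */
      KSPSolve(ksp, b, x);

      KSPDestroy(ksp);
      VecDestroy(x);
      VecDestroy(b);
      MatDestroy(A);
      PetscFinalize();
      return 0;
    }

Running with -ksp_monitor would show the convergence history, and -ksp_type
or -pc_type can override the hard-coded choices thanks to the
KSPSetFromOptions() call.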
>>>> When using the same matrix, which
>>>> one is faster, sequential or parallel? Thanks.
>>>>
>>> For a fixed-size matrix, you should get really good speedups iterating
>>> in parallel. Of course, it is even better if you can generate the
>>> local rows of the matrix on each processor. If not, communicating
>>> the matrix rows from the 'master' to the 'slaves' could be a real
>>> bottleneck (a lot of data to compute at the master while the slaves
>>> wait, and a lot of data to scatter from master to slaves). If you
>>> cannot avoid dense matrices, then you should try hard to compute the
>>> local rows at the owning processor.
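Continuing the sketch above (with A and n as before), the "compute the local
rows at the owning processor" advice corresponds to the pattern below, where
f(i, j) is a hypothetical stand-in for whatever formula or routine produces
entry (i, j):

    PetscInt    i, j, rstart, rend;
    PetscScalar v;

    /* Rows [rstart, rend) live on this process; filling only those means
       no matrix data has to cross the network during assembly */
    MatGetOwnershipRange(A, &rstart, &rend);
    for (i = rstart; i < rend; i++) {
      for (j = 0; j < n; j++) {
        v = f(i, j);   /* hypothetical per-entry formula */
        MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES);
      }
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);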
>>>
>>>
>>>> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin <dalcinl at gmail.com>
>>>> wrote:
>>>>
>>>>> Well, any iterative solver will actually work, but expect really
>>>>> poor scalability :-). I believe (I have never used dense matrices)
>>>>> that you could use a direct method (PLAPACK?), but again, be prepared
>>>>> for long running times if your problem is (even moderately) large.
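For completeness, the usual way to request a direct solve in PETSc is the LU
preconditioner with a "preonly" KSP, selected at runtime; whether that path
actually reaches PLAPACK depends on the interface status discussed above, and
the executable name and process count here are only placeholders:

    mpiexec -n 4 ./app -ksp_type preonly -pc_type lu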
>>>>>
>>>>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie <recrusader at gmail.com> wrote:
>>>>>
>>>>>> To my knowledge, PETSc doesn't provide solvers such as CG, GMRES,
>>>>>> and so on for parallel dense matrices. If so, how can I deal with
>>>>>> this problem? Thanks.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Yujie
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Lisandro Dalcín
>>>>> ---------------
>>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>>>>> Tel/Fax: +54-(0)342-451.1594
>>>>>
>>>>>
>>>>
>>>