[petsc-users] MatMultTranspose memory usage
Karl Lin
karl.linkui at gmail.com
Wed Jul 17 17:26:30 CDT 2019
MatCreateMPIAIJMKL
Parallel and sequential exhibit the same behavior. In fact, we found that
calling MatMult increases the memory by the size of the matrix as well.
On Wed, Jul 17, 2019 at 4:55 PM Zhang, Hong <hzhang at mcs.anl.gov> wrote:
> Karl:
> What matrix format do you use? Run it in parallel or sequential?
> Hong
>
> We used /proc/self/stat to track the resident set size during the program
>> run, and we saw the resident set size jump by the size of the matrix right
>> after we called MatMultTranspose.
>>
>> On Wed, Jul 17, 2019 at 12:04 PM hong--- via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>> Kun:
>>> How do you know 'MatMultTranspose creates an extra memory copy of the matrix'?
>>> Hong
>>>
>>> Hi,
>>>>
>>>>
>>>>
>>>> I was using MatMultTranspose and MatMult to solve a linear system.
>>>>
>>>>
>>>>
>>>> However, we found that MatMultTranspose creates an extra in-memory copy
>>>> of the matrix for its operation. This extra memory copy is not stated
>>>> anywhere in the PETSc manual.
>>>>
>>>>
>>>>
>>>> This basically doubles the memory requirement to solve my system.
>>>>
>>>>
>>>>
>>>> I remember MKL's routines can do an in-place matrix-transpose vector
>>>> product without transposing the matrix itself.
>>>>
>>>>
>>>>
>>>> Is this always the case? Or is there a way to make PETSc do an in-place
>>>> matrix-transpose vector product?
>>>>
>>>>
>>>>
>>>> Any help is greatly appreciated.
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Kun
>>>>
>>>>
>>>>
>>>> Schlumberger-Private
>>>>
>>>