[petsc-users] The memory of GPU isn't enough for the matrix, can the matrix divide into several parts automatically?
Azamat Mametjanov
azamat.mametjanov at gmail.com
Sat Aug 4 01:52:27 CDT 2012
It would probably be best to divide the matrix manually at the moment. Automatic
pipelining of a matrix through a given GPU's memory (2 GB here) could be handled by
an autotuner.
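As a quick sanity check on whether the 2 GB Quadro 4000 can hold this matrix at all, here is a back-of-the-envelope estimate of the CSR footprint for the system quoted below. This is only a sketch: the 32-bit index size and the remarks about solver overhead are assumptions about the CUSP/Thrust backend, not measured numbers.

```python
# Rough device-memory footprint of the matrix described below, assuming
# CSR storage with double-precision values and 32-bit integer indices.

n = 3_847_957        # rows (and columns) of the larger system
nnz = 109_189_295    # nonzero elements

values_bytes = nnz * 8        # one double per nonzero
col_idx_bytes = nnz * 4       # one 32-bit column index per nonzero
row_ptr_bytes = (n + 1) * 4   # CSR row-pointer array

csr_bytes = values_bytes + col_idx_bytes + row_ptr_bytes
print(f"CSR matrix alone: {csr_bytes / 2**30:.2f} GiB")  # ~1.23 GiB
```

Even the bare matrix takes well over half of the card's 2 GB before counting Krylov work vectors, preconditioner storage, and Thrust's temporary allocations, so a bad_alloc during setup is plausible for this size even though the matrix alone fits.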
On Sat, Aug 4, 2012 at 12:24 AM, Xiangze Zeng <zengshixiangze at 163.com> wrote:
> Dear all,
>
> When I use the GPU to solve the equations, this error appears:
>
> terminate called after throwing an instance of
> 'thrust::system::detail::bad_alloc'
> what(): std::bad_alloc: out of memory
>
> With a smaller system, this error does not appear. I am sure the host
> memory is enough for the larger system, so is it the GPU's memory that
> leads to this error? The GPU I use is an Nvidia Quadro 4000. The larger
> system is 3847957x3847957, with 109189295 nonzero elements.
>
> And what can we do if the memory of the GPU isn't enough?
>
> Thank you!
> Sincerely,
> Zeng Xiangze
> --
> Mailbox 379, School of Physics
> Shandong University
> 27 South Shanda Road, Jinan, Shandong, P.R.China, 250100
>
>
>