[petsc-users] Error - Out of memory. This could be due to allocating too large an object or bleeding by not properly ...

TAY wee-beng zonexo at gmail.com
Wed Feb 24 01:54:47 CST 2016


On 24/2/2016 10:28 AM, Matthew Knepley wrote:
> On Tue, Feb 23, 2016 at 7:50 PM, TAY wee-beng <zonexo at gmail.com 
> <mailto:zonexo at gmail.com>> wrote:
>
>     Hi,
>
>     I got this error (full output attached) when running my code. It
>     happens after a few thousand time steps.
>
>     The strange thing is that on 2 different clusters, it stops at 2
>     different time steps.
>
>     I wonder if it's related to DM since this happens after I added DM
>     into my code.
>
>     In this case, how can I track down the error? I'm thinking
>     valgrind may take a very long time and give too many false positives.
>
>
> It is very easy to find leaks. You just run a few steps with 
> -malloc_dump and see what is left over.
>
>    Matt
Hi Matt,

Do you mean running my a.out with the -malloc_dump option and stopping 
after a few time steps?

What should I look for in the output then, and how?
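
For example, is the idea something like the minimal sketch below? (This is 
just my guess at the usage; the deliberately leaked array and its size are 
only for illustration.)

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;
      PetscScalar    *work;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      /* Allocate through PETSc but deliberately never call PetscFree(work),
         so there is an unfreed block for -malloc_dump to report */
      ierr = PetscMalloc1(100, &work);CHKERRQ(ierr);
      /* When the program is run with -malloc_dump, PETSc lists at
         PetscFinalize() all memory still allocated through PetscMalloc */
      ierr = PetscFinalize();
      return ierr;
    }

So I would run it as "./a.out -malloc_dump", stop after a few time steps, 
and then look at the report printed when PetscFinalize is called?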

>
>     -- 
>     Thank you
>
>     Yours sincerely,
>
>     TAY wee-beng
>
>
>
>
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener
