[petsc-users] same code, different machine, different performance

张国熙 altriaex86 at gmail.com
Wed Jul 30 07:32:46 CDT 2014


Thank you for answering.
I have one more thing to ask.
The first computer I used has an i5, 2.4 GHz, dual core.
The second has an i7, 3.2 GHz, quad core.

Both runs are sequential. My code tends to be slower on the second machine
(compared with the same ARPACK/UMFPACK combination) but faster on the first.

I expected the two comparisons to be at least similar, since the second
machine is faster.

Are there any options in PETSc or SLEPc that might affect performance
differently on different machines?

Thanks a lot.
On Jul 30, 2014 at 8:54 PM, "Matthew Knepley" <knepley at gmail.com> wrote:

> On Wed, Jul 30, 2014 at 2:10 AM, 张国熙 <altriaex86 at gmail.com> wrote:
>
>> I am using PuTTY, so I can only give you screenshots.
>> The attachments are the full output.
>>
>
> As we can see below, the memory bus on this machine can only really drive
> 1.5 processors. For that algorithm, you will see some speedup on 2 procs,
> and nothing after that. It would be nice if manufacturers reported STREAMS
> along with their CPU benchmarks.
>
>   Thanks,
>
>      Matt
>
>
>> The output
>> [image: inline image 1]
>>
>>
>> 2014-07-30 16:44 GMT+10:00 Jed Brown <jed at jedbrown.org>:
>>
>>> 张国熙 <altriaex86 at gmail.com> writes:
>>>
>>> > Do you mean compiling and running the attached file "noname"?
>>> > Is it source code? After opening it, I only see the following.
>>> >
>>> > *-----BEGIN PGP SIGNATURE-----*
>>>
>>> It isn't called "noname", it is a PGP/MIME signature
>>> (application/pgp-signature) and your mail client refuses to recognize a
>>> 15-year-old standard and instead writes "noname".  Just ignore my
>>> signatures and do what I suggested, then reply-all (do not drop
>>> petsc-users).
>>>
>>> > *Version: GnuPG v2*
>>> >
>>> > *-----END PGP SIGNATURE-----*
>>> >
>>> >
>>> > 2014-07-30 16:28 GMT+10:00 Jed Brown <jed at jedbrown.org>:
>>>
>>> >
>>> >> 张国熙 <altriaex86 at gmail.com> writes:
>>> >>
>>> >> > Hi, all
>>> >> >
>>> >> > Recently I have been using SLEPc to solve a sparse eigenvalue problem,
>>> >> > with Krylov-Schur and MUMPS as the solvers.
>>> >> >
>>> >> > My code works well on my Core i5 dual-core laptop. Compared to my old
>>> >> > sequential ARPACK/UMFPACK solution, running with 2 processes gives me a
>>> >> > 1.3x speedup. However, when I install my code on another station with a
>>> >> > quad-core CPU, the performance changes significantly. My parallel code
>>> >> > runs 2-3 times slower than the sequential code.
>>> >>
>>> >> Run "make streams NPMAX=4" and send the output.
>>> >>
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
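
For reference, the memory-bandwidth ceiling Matt describes can be illustrated
with a minimal STREAM-style triad kernel. This is only a sketch assuming a
POSIX C compiler (the array size, timing, and bandwidth formula are
illustrative); the authoritative numbers come from running "make streams
NPMAX=4" in the PETSc tree, as Jed suggests.

    /* stream_triad.c: rough estimate of sustainable memory bandwidth.
       Not PETSc's streams benchmark, just an illustration of the limit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
      const size_t n = 20 * 1000 * 1000;      /* large enough to defeat caches */
      double *a = malloc(n * sizeof(double));
      double *b = malloc(n * sizeof(double));
      double *c = malloc(n * sizeof(double));
      if (!a || !b || !c) return 1;
      for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (size_t i = 0; i < n; i++) a[i] = b[i] + 3.0 * c[i];   /* triad */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double sec = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
      /* Each element moves 2 loads + 1 store = 24 bytes */
      printf("Triad: %.2f GB/s (check: a[1] = %g)\n",
             24.0 * n / sec / 1e9, a[1]);
      free(a); free(b); free(c);
      return 0;
    }

If two copies of a kernel like this, run at the same time, do not report about
twice the bandwidth of one copy, then adding a second MPI process cannot be
expected to speed up a bandwidth-bound sparse solve either.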

