[petsc-users] same code, different machine, different performance

张国熙 altriaex86 at gmail.com
Wed Jul 30 02:10:13 CDT 2014


I am using PuTTY, so I can only send you screenshots.
The attachments contain the full output.
The output:
[image: Inline image 1]
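
The "make streams NPMAX=4" request quoted below runs PETSc's MPI version of
the STREAMS memory-bandwidth benchmark, which shows how aggregate bandwidth
scales as processes are added; sparse direct and iterative solvers are
typically bandwidth-bound, so poor STREAMS scaling usually explains poor
solver speedup. A minimal sketch of how it is usually invoked, assuming
PETSC_DIR and PETSC_ARCH are already set for the local build:

    # run the MPI STREAMS benchmark with 1..NPMAX processes
    cd $PETSC_DIR
    make streams NPMAX=4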


2014-07-30 16:44 GMT+10:00 Jed Brown <jed at jedbrown.org>:

> 张国熙 <altriaex86 at gmail.com> writes:
>
> > Do you mean compiling and running the attached file "noname"?
> > Is it source code? After opening it, I only see the following.
> >
> > *-----BEGIN PGP SIGNATURE-----*
>
> It isn't called "noname", it is a PGP/MIME signature
> (application/pgp-signature) and your mail client refuses to recognize a
> 15-year-old standard and instead writes "noname".  Just ignore my
> signatures and do what I suggested, then reply-all (do not drop
> petsc-users).
>
> > *Version: GnuPG v2*
> >
> > *[base64-encoded signature data]*
> > *-----END PGP SIGNATURE-----*
> >
> >
> > 2014-07-30 16:28 GMT+10:00 Jed Brown <jed at jedbrown.org>:
> >
> >> 张国熙 <altriaex86 at gmail.com> writes:
> >>
> >> > Hi, all
> >> >
> >> > Recently I have been using SLEPc to solve a sparse eigenvalue problem,
> >> > with Krylov-Schur and MUMPS as the solvers.
> >> >
> >> > My code works well on my Core i5 dual-core laptop. Compared to my old
> >> > sequential solution, ARPACK/UMFPACK, running with 2 processes gives me
> >> > a 1.3x speedup. However, when I install my code on another workstation
> >> > with a quad-core CPU, the performance changes significantly: my
> >> > parallel code runs 2-3 times slower than the sequential code.
> >>
> >> Run "make streams NPMAX=4" and send the output.
> >>
>
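
For context, the Krylov-Schur/MUMPS combination described in the quoted
question is normally selected through run-time options rather than code
changes, provided the program calls EPSSetFromOptions(). A minimal sketch of
such a command line; the executable name ./app, the process count, and the
shift-and-invert spectral transform are illustrative assumptions, not taken
from the poster's setup:

    # Krylov-Schur eigensolver; shift-and-invert solved by a MUMPS LU factorization
    mpiexec -n 2 ./app \
      -eps_type krylovschur \
      -st_type sinvert \
      -st_ksp_type preonly -st_pc_type lu \
      -st_pc_factor_mat_solver_package mumps \
      -log_summary

The -log_summary output (renamed -log_view in later PETSc releases) reports
where time is spent and, together with the STREAMS numbers, helps distinguish
a bandwidth-saturated machine from a solver-configuration problem.
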
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 68095 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140730/c84d8ea2/attachment-0003.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 1.png
Type: image/png
Size: 147467 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140730/c84d8ea2/attachment-0004.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2.png
Type: image/png
Size: 72913 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140730/c84d8ea2/attachment-0005.png>

