[petsc-dev] CUDA sm_13 for double
Barry Smith
bsmith at mcs.anl.gov
Wed Jan 12 21:09:42 CST 2011
Thanks. But yikes, that tool doesn't even exist on the Apple!
I'll bug the NVIDIA guys,
Barry
On Jan 12, 2011, at 3:48 AM, Filippo Spiga wrote:
> Dear Barry,
> I guess the best approach is to probe the GPUs installed on the system directly, using the NVIDIA CLI tool called nvidia-smi. The command-line options "-a" or "-q" report the model. The problem will be parsing that information from the stdout. Maybe we can ping NVIDIA to improve the output to be more "machine-readable". It changes whenever the drivers change, which can be very annoying (I am speaking from direct experience)....
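For illustration, a minimal Python sketch of that approach might look like the following. It assumes nvidia-smi is on the PATH and that the "-q" report contains a "Product Name" line per GPU; both assumptions can break when the driver changes.

    #!/usr/bin/env python
    # Sketch: list GPU models by parsing "nvidia-smi -q" output.
    # The "Product Name" field name is an assumption; nvidia-smi's report
    # layout varies between driver releases, so this parsing is fragile.
    import subprocess

    def gpu_product_names():
        try:
            out = subprocess.check_output(['nvidia-smi', '-q'],
                                          universal_newlines=True)
        except (OSError, subprocess.CalledProcessError):
            return []  # nvidia-smi missing or no usable NVIDIA driver
        names = []
        for line in out.splitlines():
            if line.strip().startswith('Product Name'):
                names.append(line.split(':', 1)[1].strip())
        return names

    if __name__ == '__main__':
        print(gpu_product_names())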
>
> Regards
>
> --
> Filippo SPIGA, MSc Computer Science
> ~ homepage: http://tinyurl.com/fspiga ~
>
> «Nobody will drive us out of Cantor's paradise.»
> -- David Hilbert
>
>
> On Tue, Jan 11, 2011 at 11:44 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> On Jan 11, 2011, at 4:55 PM, Lisandro Dalcin wrote:
>
> > I think the two lines below (config/PETSc/packages/cuda.py) are wrong:
> >
> > if self.scalartypes.precision == 'double':
> >   self.setCompilers.addCompilerFlag('-arch sm_13')
> >
> > What if your GPU is sm_20?
>
> This is a hack to get things to work. We'd love for you to tell us the correct solution. Should it try to set the arch to the highest one supported by the system? (If so, how do we find out the highest?)
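One way to answer the "highest supported" question, sketched below under the assumptions that configure runs on the machine that owns the GPUs and that nvcc is on the PATH, is to compile and run a tiny CUDA program that reads each device's compute capability with cudaGetDeviceProperties() and then map the largest value to an -arch flag:

    #!/usr/bin/env python
    # Sketch: probe the highest compute capability of the local GPUs by
    # building and running a small CUDA program with nvcc, then forming
    # the corresponding -arch flag (e.g. '-arch sm_20').
    import os
    import subprocess
    import tempfile

    PROBE = r'''
    #include <stdio.h>
    #include <cuda_runtime.h>
    int main(void) {
      int n = 0, major = 0, minor = 0;
      if (cudaGetDeviceCount(&n) != cudaSuccess) return 1;
      for (int i = 0; i < n; i++) {
        cudaDeviceProp p;
        if (cudaGetDeviceProperties(&p, i) != cudaSuccess) continue;
        if (p.major > major || (p.major == major && p.minor > minor)) {
          major = p.major; minor = p.minor;
        }
      }
      printf("%d %d\n", major, minor);
      return 0;
    }
    '''

    def highest_sm_arch():
        tmpdir = tempfile.mkdtemp()
        src = os.path.join(tmpdir, 'probe.cu')
        exe = os.path.join(tmpdir, 'probe')
        open(src, 'w').write(PROBE)
        subprocess.check_call(['nvcc', src, '-o', exe])   # needs nvcc in PATH
        out = subprocess.check_output([exe], universal_newlines=True)
        major, minor = out.split()
        return '-arch sm_%s%s' % (major, minor)

    if __name__ == '__main__':
        print(highest_sm_arch())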
>
> Thanks
>
> Barry
>
>
> >
> > --
> > Lisandro Dalcin
> > ---------------
> > CIMEC (INTEC/CONICET-UNL)
> > Predio CONICET-Santa Fe
> > Colectora RN 168 Km 472, Paraje El Pozo
> > Tel: +54-342-4511594 (ext 1011)
> > Tel/Fax: +54-342-4511169
>
>