[petsc-dev] configuring hypre on batch system

Mark Adams mfadams at lbl.gov
Fri Jan 9 06:39:35 CST 2015


Shri, I set:

-threadcomm_nthreads 8
-threadcomm_affinities 0 1 2 3 4 5 6 7
-threadcomm_type openmp

And I get this.  Any ideas?

Thanks,
Mark

[0]PETSC ERROR: --------------------- Error Message
--------------------------------------------------------------
[0]PETSC ERROR: Nonconforming object sizes
[0]PETSC ERROR: Must set affinities for all threads, Threads = 8, Core
affinities set = 1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1345-g927ffcc  GIT
Date: 2015-01-08 16:04:39 -0700
[0]PETSC ERROR: ./xgc2 on a arch-titan-opt-pgi named nid09736 by adams Fri
Jan  9 07:36:43 2015
[0]PETSC ERROR: Configure options --COPTFLAGS="-mp -fast"
--CXXOPTFLAGS="-mp -fast" --FOPTFLAGS="-mp -fast" --download-hypre
--download-metis --download-parmetis --with-cc=cc --with-clib-autodetect=0
--with-cxx=CC --with-cxxlib-autodetect=0 --with-fc=ftn
--with-fortranlib-autodetect=0 --with-shared-libraries=0
--known-mpi-shared-libraries=1 --with-x=0 --with-debugging=0
PETSC_ARCH=arch-titan-opt-pgi
PETSC_DIR=/lustre/atlas2/env003/scratch/adams/petsc2
[0]PETSC ERROR: [1]PETSC ERROR: #1 PetscThreadCommSetAffinities() line 431
in
/lustre/atlas2/env003/scratch/adams/petsc2/src/sys/threadcomm/interface/threadcomm.c
[0]PETSC ERROR: #2 PetscThreadCommWorldInitialize() line 1231 in
/lustre/atlas2/env003/scratch/adams/petsc2/src/sys/threadcomm/interface/threadcomm.c
--------------------- Error Message
--------------------------------------------------------------
[0]PETSC ERROR: #3 PetscGetThreadCommWorld() line 82 in
/lustre/atlas2/env003/scratch/adams/petsc2/src/sys/threadcomm/interface/threadcomm.c
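
Maybe the affinity list has to be given as a single comma-separated
argument (I believe PETSc reads array-valued options that way), i.e.
something like:

-threadcomm_type openmp
-threadcomm_nthreads 8
-threadcomm_affinities 0,1,2,3,4,5,6,7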



On Fri, Jan 9, 2015 at 12:25 AM, Abhyankar, Shrirang G. <abhyshr at mcs.anl.gov
> wrote:

>  Mark,
>   The input to -threadcomm_affinities is the list of processor numbers.
>
>  So -threadcomm_nthreads 4
>      -threadcomm_affinities 0 1 2 3
>
>  will pin the 4 threads to processors 0,1,2,3. Unfortunately, there is no
> standardization of processor number mapping on physical and/or logical
> cores (it is decided by the OS I think). For example, on one node with two
> quad-core CPUs (total 8 processors, no hyperthreading), the 1st CPU may
> have processor numbers 0,2,4,6 while the other has 1,3,5,7. On another node
> with similar hardware, the processor numbers may be 0,1,2,3 on the 1st CPU
> and 4,5,6,7 on the second. Hence, tools like likwid or hwloc are very
> helpful for getting the hardware layout. You may also obtain this info by
> looking at /proc/cpuinfo on Linux.
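>
>  For example, a small (untested) hwloc program along these lines should
> print the OS processor number behind each logical processing unit, i.e.
> the numbers one would pass to -threadcomm_affinities (the file name
> pumap.c below is just for illustration):
>
>   #include <stdio.h>
>   #include <hwloc.h>
>
>   int main(void)
>   {
>     hwloc_topology_t topo;
>     hwloc_obj_t      pu, core;
>     int              i, n;
>
>     /* discover the hardware topology of this node */
>     hwloc_topology_init(&topo);
>     hwloc_topology_load(topo);
>
>     /* walk all processing units (PUs) in logical order and report
>        the OS processor number and owning core for each one */
>     n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
>     for (i = 0; i < n; i++) {
>       pu   = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, i);
>       core = hwloc_get_ancestor_obj_by_type(topo, HWLOC_OBJ_CORE, pu);
>       printf("logical PU %u -> OS processor %u (core %u)\n",
>              pu->logical_index, pu->os_index, core ? core->os_index : 0u);
>     }
>     hwloc_topology_destroy(topo);
>     return 0;
>   }
>
> Compile with something like "cc -o pumap pumap.c -lhwloc".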
>
>  Shri
>  From: Mark Adams <mfadams at lbl.gov>
> Date: Thu, 8 Jan 2015 21:43:30 -0500
> To: barry smith <bsmith at mcs.anl.gov>
> Cc: petsc-dev mailing list <petsc-dev at mcs.anl.gov>
> Subject: Re: [petsc-dev] configuring hypre on batch system
>
>
>> > -threadcomm_affinities 0 1 2 3 4 5 6 7  ?????
>>
>> I don't know what the flag is here
>>
>>
>  Neither do I.  The web page
> http://www.mcs.anl.gov/petsc/features/threads.html says:
>
>
>    -threadcomm_affinities <list_of_affinities>: Sets the core affinities
>    of threads
>
>  I'm not sure what to put here ...
>
>
>
>> > -threadcomm_type openmp
>> >
>> > Then would I get threaded MatVec and other CG + MG stuff?  I know this
>> > will not be faster but I just need data to corroborate what we all know.
>> > And I don't care about setup.
>>
>>   Depends on the smoother; we don't have any threaded SOR. If you are using
>> Jacobi + Chebyshev it will be threaded.
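>> For instance, with a PCMG/GAMG hierarchy the level smoother could be
>> selected with something like
>>
>>   -mg_levels_ksp_type chebyshev
>>   -mg_levels_pc_type jacobi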
>>
>>
>  Oh right, thanks,
>
>
>>
>