[petsc-dev] GAMG error with MKL

Smith, Barry F. bsmith at mcs.anl.gov
Thu Jul 5 01:38:16 CDT 2018


   Jed,

     You could use the same argument to argue that PETSc should do "something" to help people who have (rightly or wrongly) chosen to code their application in High Performance Fortran or any other similarly inane parallel programming model.

   Barry



> On Jul 4, 2018, at 11:51 PM, Jed Brown <jed at jedbrown.org> wrote:
> 
> Matthew Knepley <knepley at gmail.com> writes:
> 
>> On Wed, Jul 4, 2018 at 4:51 PM Jeff Hammond <jeff.science at gmail.com> wrote:
>> 
>>> On Wed, Jul 4, 2018 at 6:31 AM Matthew Knepley <knepley at gmail.com> wrote:
>>> 
>>>> On Tue, Jul 3, 2018 at 10:32 PM Jeff Hammond <jeff.science at gmail.com>
>>>> wrote:
>>>> 
>>>>> 
>>>>> 
>>>>> On Tue, Jul 3, 2018 at 4:35 PM Mark Adams <mfadams at lbl.gov> wrote:
>>>>> 
>>>>>> On Tue, Jul 3, 2018 at 1:00 PM Richard Tran Mills <rtmills at anl.gov>
>>>>>> wrote:
>>>>>> 
>>>>>>> Hi Mark,
>>>>>>> 
>>>>>>> I'm glad to see you trying out the AIJMKL stuff. I think you are the
>>>>>>> first person trying to actually use it, so we are probably going to expose
>>>>>>> some bugs and also some performance issues. My somewhat limited testing has
>>>>>>> shown that the MKL sparse routines often perform worse than our own
>>>>>>> implementations in PETSc.
>>>>>>> 
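
[A minimal sketch of opting into the MKL-backed AIJ matrix type Richard mentions, assuming a PETSc build configured against MKL; exact type and option names may differ across PETSc versions:]

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat            A;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);CHKERRQ(ierr);
      ierr = MatSetType(A, MATAIJMKL);CHKERRQ(ierr); /* or -mat_type aijmkl at runtime */
      ierr = MatSetFromOptions(A);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);
      /* ... assemble and use A; MatMult for this type is intended to
         dispatch to MKL's sparse routines ... */
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }
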
>>>>>> 
>>>>>> My users just want OpenMP.
>>>>>> 
>>>>>> 
>>>>> 
>>>>> Why not just add OpenMP to PETSc? I know certain developers hate it, but
>>>>> it is silly to let a principled objection stand in the way of enabling users
>>>>> 
>>>> 
>>>> "if that would deliver the best performance for NERSC users."
>>>> 
>>>> You have answered your own question.
>>>> 
>>> 
>>> Please share the results of your experiments that prove OpenMP does not
>>> improve performance for Mark’s users.
>>> 
>> 
>> Oh God. I am supremely uninterested in minutely proving yet again that
>> OpenMP is not better than MPI.
>> There are already countless experiments. One more will not add anything of
>> merit.
> 
> Jeff assumes an absurd null hypothesis, Matt selfishly believes that
> users should modify their code/execution environment to subscribe to a
> more robust and equally performant approach, and the MPI forum abdicates
> by stalling on endpoints.  How do we resolve this?
> 
>>>> Also we are not in the habit of fucking up our codebase in order to follow
>>>> some fad.
>>>> 
>>> 
>>> If you can’t use OpenMP without messing up your code base, you probably
>>> don’t know how to design software.
>>> 
>> 
>> That is an interesting, if wrong, opinion. Is it your contention that
>> sticking any random paradigm in a library should be alright as long as it
>> is "well designed"? I have never encountered such a well-designed library.
>> 
>> 
>>> I guess if you refuse to use _Pragma because C99 is still a fad for you,
>>> it is harder, but clearly _Complex is tolerated.
>>> 
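
[The _Pragma-behind-a-macro approach Jeff alludes to can be sketched as follows; the macro name is illustrative, not an actual PETSc API. The directive compiles away entirely when OpenMP is disabled:]

    #if defined(_OPENMP)
      #define PRAGMA_OMP(x) _Pragma(#x)
    #else
      #define PRAGMA_OMP(x)
    #endif

    /* Expands to _Pragma("omp parallel for") only under OpenMP builds. */
    void waxpy(int n, double a, const double *x, double *y)
    {
      PRAGMA_OMP(omp parallel for)
      for (int i = 0; i < n; i++) y[i] += a * x[i];
    }
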
>> 
>> Yes, littering your code with preprocessor directives improves almost
>> everything. Doing proper resource management using pragmas, in an
>> environment with several layers of libraries, is a dream.
>> 
>> 
>>> More seriously, you’ve adopted OpenMP hidden behind MKL
>>> 
>> 
>> Nope. We can use MKL with that crap shut off.
>> 
>> 
>>> so I see no reason why you can’t wrap OpenMP implementations of the PETSc
>>> sparse kernels in a similar manner.
>>> 
>> 
>> We could; it's just a colossal waste of time and effort, as well as
>> counterproductive for the codebase :)
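
[For concreteness, the kind of wrapped OpenMP sparse kernel being argued about is roughly a loop-level CSR mat-vec like the sketch below; this is an illustration, not PETSc's actual MatMult_SeqAIJ:]

    void csr_matvec(int m, const int *rowptr, const int *colind,
                    const double *vals, const double *x, double *y)
    {
      #pragma omp parallel for schedule(static)
      for (int i = 0; i < m; i++) {
        double sum = 0.0;
        for (int j = rowptr[i]; j < rowptr[i+1]; j++)
          sum += vals[j] * x[colind[j]];
        y[i] = sum;   /* rows are independent, so they parallelize cleanly */
      }
    }
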
> 
> Endpoints either need to become a thing we can depend on or we need a
> solution for users who insist on using threads (even if their decision
> to use threads is objectively bad).  The problem Matt harps on is
> legitimate: OpenMP parallel regions cannot reliably cross module
> boundaries except for embarrassingly parallel operations.  This means
> loop-level omp parallel, which significantly increases overhead for small
> problem sizes (e.g., slowing coarse grid solves and strong-scaling
> limits).  It can be done and isn't that hard, but the Imperial group
> discarded their branch after observing that it also provided no
> performance benefit.  However, I'm coming around to the idea that PETSc
> should do it so that there is _a_ solution for users who insist on
> using threads in a particular way.  Unless Endpoints become available
> and reliable, in which case we could do it right.
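
[A minimal illustration of the overhead Jed describes, with hypothetical helper names: a loop-level region pays a fork/join on every call, which dominates when n is small (coarse grids, the strong-scaling limit). Hoisting one parallel region around many calls avoids that cost, but then every library in the stack must tolerate being called from inside an active parallel region:]

    void axpy_loop_level(int n, double a, const double *x, double *y)
    {
      #pragma omp parallel for   /* fork/join paid on every call */
      for (int i = 0; i < n; i++) y[i] += a * x[i];
    }

    void axpy_orphaned(int n, double a, const double *x, double *y)
    {
      #pragma omp for   /* caller must already be inside a parallel region */
      for (int i = 0; i < n; i++) y[i] += a * x[i];
    }
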


