[petsc-dev] configure issues
Barry Smith
bsmith at mcs.anl.gov
Thu Feb 3 18:02:55 CST 2011
On Feb 3, 2011, at 1:35 PM, Jed Brown wrote:
> On Thu, Feb 3, 2011 at 16:27, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>
>> Then you are not going to like my next idea. I think that "configure"
>> tests should be embedded directly into the source code and evaluated at
>> compile time to determine what gets built etc. In other words no distinction
>> between configure and build time :-).
>>
>
> How to handle tests that need to actually be executed, in a batch
> environment?
>
>
>> This also eliminates the possibility of "incorrect semantic information in
>> comments" since the comments won't be "requires X Y and Z" but instead the
>> actual tests. Having the test code for a functionality being in a different
>> place than the use of a functionality is not a good model (the obvious
>> problem with what I say is that the use might be in 50 places so should I
>> have the same test in 50 places?)
>>
>
> And should the test be executed in all 50 places, or should the result be
> cached? Since C code is context dependent, it would be hard to prove that it
> was actually the same test in all 50 places, even if it resided in a common
> header.
>
Yes, directly embedding the tests is probably not practical, but how about a naming convention that points directly to the test from the source? Right now one sometimes has to hunt around to find the test associated with a particular #if, since the tests can live in so many locations. For example, instead of

#if defined(PETSC_HAVE_FOO)

(where is the test for foo?), use

#if defined(PETSC_HAVE_FOO_BUILDSYSTEM_CONFIG_MATTSCRAZYFILE_MATTSCRAZYMETHODINTHATFILE)

so that I can always go directly to the test. Ideally the links would be bi-directional, but that is hard.
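In the source the difference would read something like this (the macro suffix and the file/method names in the comments are just made up to illustrate the idea, not real BuildSystem locations):

/* today: the guard gives no hint where the configure test lives */
#if defined(PETSC_HAVE_FOO)
void use_foo_feature(void);
#endif

/* proposed: the macro name encodes the BuildSystem file and method that
   ran the test (hypothetical names here), so a reader can jump from the
   #if straight to, say, config/packages/foo.py, method configureLibrary() */
#if defined(PETSC_HAVE_FOO__CONFIG_PACKAGES_FOO__CONFIGURELIBRARY)
void use_foo_feature(void);
#endif

Then grepping for the suffix, or simply reading it, takes you straight to the configure method that performed the test.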
Barry
> On Thu, Feb 3, 2011 at 16:27, Barry Smith <bsmith at mcs.anl.gov> wrote:
> Configure is only slow because it has grown into this monster over time; a complete refactoring should bring it down to barely a minute or so.
>
> Even a minute is a long time compared to 3 seconds which is what an incremental build with a couple new/modified implementation files should take.
>
> But configure needs to run a lot of tests, which means invoking the compiler a lot of times (it needs to compile tiny test programs so that understandable errors are generated as soon as possible; squashing everything together into one big compilation later is bad because the errors become confusing). I'm not sure that <1 minute is achievable, but I hope it is.