itaps-parallel Proposal for Requiring Save/Restore of Sets and Tags
Jason Kraftcheck
kraftche at cae.wisc.edu
Thu Oct 21 14:35:56 CDT 2010
On 10/21/2010 02:10 PM, Carl Ollivier-Gooch wrote:
> On 10/21/2010 11:45 AM, Jason Kraftcheck wrote:
>> On 10/21/2010 01:30 PM, Carl Ollivier-Gooch wrote:
>
>>> The -real- issue is that there are internal checks that should be done
>>> to help API users with debugging. This is why we return errors. Yes,
>>> this is something the compliance tests check for, but that's not why
>>> they're there.
>>>
>>
>> We return errors when things fail. That is an obviously well accepted
>> programming practice. Failure to return an error flag of some kind when
>> an operation fails is a bug (regardless of whether or not one is in a
>> "release" mode.) That is distinct from helping users with debugging,
>> which should be an implementation issue.
>
> But those error returns are precisely the debugging aid to which I
> refer.
No, success or failure is not the same as verifying conformance (of the
application) with the iMesh spec.
> If the iMesh spec says that an error code should be returned
> when someone tries to iterate over all faces of topology prism, then
> users should get that error code when they make that silly mistake.
I'm not sure why that would be an error, but anyway, keeping with your
example: if MOAB cannot return a valid result given the input (e.g. because
the spec specifically prohibits us from doing so), that is an error, and that
is why we have error codes. If, on the other hand, the spec did not prohibit
us from satisfying that request, we could certainly return the faces of a
prism (or any other region element). If the spec said that getting the
faces of a prism is not supported but did not mandate that implementations
fail in that case, we could still successfully handle that query. Not
returning an error code in that case would be more of a refusal to do
conformance testing for the application than a failure of MOAB to conform to
the standard.
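
To illustrate the distinction with a toy example (hypothetical code and
error names, not the actual iMesh/iBase API): an error is returned only
when no valid result can be produced for the given input.

/* Hypothetical sketch, not the real iMesh/iBase interface: an error
 * code is returned only when the request cannot be satisfied. */
#include <stdio.h>

enum example_err { EX_SUCCESS = 0, EX_CANNOT_SATISFY = 1 };

struct example_region { int num_faces; };  /* stand-in for a region entity */

/* Hypothetical query: how many faces bound this region (e.g. a prism)? */
static int example_get_num_faces(const struct example_region *region,
                                 int *num_faces)
{
    if (region == NULL || num_faces == NULL)
        return EX_CANNOT_SATISFY;  /* no valid result exists for this input */
    /* A valid answer can be produced, so produce it, even if the spec
     * happens not to require this query to be supported. */
    *num_faces = region->num_faces;
    return EX_SUCCESS;
}

int main(void)
{
    struct example_region prism = { 5 };
    int n = 0;
    int err = example_get_num_faces(&prism, &n);
    printf("err=%d num_faces=%d\n", err, n);
    return 0;
}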
> Same
> goes for other behavior the spec says is erroneous. This is different
> than returning failures for internal errors, which, as you say, an
> implementation should also do.
>
I didn't say anything about internal errors. I've been discussing only
validation of user input. We must return an error code if the user input is
such that the request cannot be satisfied. And I've been distinguishing
between failures that are due to invalid input, those due to implementation
limitations, and those that are neither but must fail because the standard
mandates that they do.
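
Spelled out as code (these names are my own, not anything in iBase), the
three categories look something like:

/* Illustrative only -- not iBase error codes. */
enum failure_kind {
    FAIL_INVALID_INPUT,    /* user input makes the request unsatisfiable */
    FAIL_IMPL_LIMITATION,  /* implementation cannot satisfy a valid request */
    FAIL_SPEC_MANDATED     /* request could be satisfied, but the standard
                              requires failure */
};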
>>> And yes, the implementation that's used in practice should be the one
>>> that's tested. What I'm saying is -specifically- that the -test- can be
>>> disabled in production mode. By this time, presumably the API user has
>>> code that is functioning properly and all those tests will pass anyway,
>>> so they're a waste of time. The only exception, I suppose, would be
>>> differences in behavior for different input, but those strike me as
>>> likely to be rare.
>>>
>>
>> If the implementation is unusable in "testing" mode because it is doing
>> some O(n^2) checking operation, then have we really helped the developer?
>
> No. But I don't know that we mandate any error checking that's that
> expensive, at least with a good implementation.
>
But this whole thread of discussion started out of the proposal that the
spec mandate an O(n) check that will result in O(n^2) behavior. It then got
into your and Mark's recommendations about best practices for how one
should implement error checking in libraries, and how, if our
implementation conformed to those practices, we could avoid this whole
issue by conforming to the standard only in a "debug" build.
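
For reference, the check in question is the cycle test run every time a set
is added to another set. A rough toy sketch (not MOAB code) of why a
per-insertion O(n) walk makes a sequence of n insertions O(n^2) overall:

#include <stdio.h>
#include <stdlib.h>

/* Toy model: each set records the one set that contains it, and
 * add_set_to_set() walks the existing containment chain to reject
 * cycles.  Building a chain of N nested sets therefore costs
 * 1 + 2 + ... + N comparisons, i.e. O(N^2) total work. */
struct set { struct set *parent; };

/* The per-insertion check being proposed: an O(depth) walk. */
static int would_create_cycle(const struct set *parent, const struct set *child)
{
    for (const struct set *s = parent; s != NULL; s = s->parent)
        if (s == child)
            return 1;
    return 0;
}

static int add_set_to_set(struct set *parent, struct set *child)
{
    if (would_create_cycle(parent, child))
        return -1;
    child->parent = parent;
    return 0;
}

int main(void)
{
    enum { N = 10000 };
    struct set *sets = calloc(N, sizeof *sets);
    if (!sets)
        return 1;
    for (int i = 1; i < N; ++i)          /* ~N^2/2 pointer comparisons */
        add_set_to_set(&sets[i - 1], &sets[i]);
    printf("built a chain of %d nested sets\n", N);
    free(sets);
    return 0;
}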
>
> Checks for conditions that the iMesh spec says should return an error
> don't fall into the category of implementation choices,
I certainly agree. Thus the discussion of whether or not this check should
be mandated in the spec. And if it were mandated in the spec, I'd be
somewhat dubious about disabling it for "release" builds, regardless of
whether or not the conformance test checked for it.
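
The "debug build" idea under discussion would amount to something like the
following (illustrative only, same toy model as the sketch above). My
concern is exactly that if the spec mandates the check, compiling it out
leaves the release build out of conformance:

/* Illustrative only: a debug-build-only guard of the kind being
 * discussed.  With NDEBUG defined (a typical "release" build), the
 * cycle check below is compiled out entirely. */
struct set { struct set *parent; };

static int would_create_cycle(const struct set *parent, const struct set *child)
{
    for (const struct set *s = parent; s != NULL; s = s->parent)
        if (s == child)
            return 1;
    return 0;
}

int add_set_to_set(struct set *parent, struct set *child)
{
#ifndef NDEBUG
    /* Input validation present only in debug builds. */
    if (would_create_cycle(parent, child))
        return -1;
#endif
    child->parent = parent;
    return 0;
}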
>
> I suppose not in a practical sense. But in this case, users who are
> developing against your implementation may be doing things that aren't
> supported by other implementations. This is something we should avoid
> wherever we can.
>
> Coming back to the present case, we seem to have two choices:
>
> 1. Keep the present semantics and (to keep interoperability) mandate
> the check.
>
This isn't keeping interoperability. It is requiring that implementations do
interoperability checks for applications. If applications want those checks,
they can test with the reference implementation.
> 2. Change the current semantics, rendering the check irrelevant. If
> we're going to do this, we face the issue of supporting both directed
> and undirected graphs, presumably. Here the interoperability problem
> would be limited to a transition period when we have some
> implementations that don't fully support undirected and/or cyclic
> graphs. I think I could support cyclic directed graphs by removing a
> check. Undirected graphs could be really easy or could require
> significant re-write; I'm not sure without dissecting the code.
>
3. Don't mandate that implementations refuse to create cyclic graphs.
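
For what it's worth, tolerating cycles is not hard: a traversal over the
set-containment graph just needs a visited mark. A toy sketch (not MOAB
code):

#include <stdio.h>

#define MAX_CHILDREN 4

/* Toy model: sets may contain other sets, and the containment graph is
 * allowed to be cyclic.  A visited mark makes a traversal terminate
 * even when sets contain each other. */
struct set {
    const char *name;
    int visited;
    int num_children;
    struct set *children[MAX_CHILDREN];
};

static void visit_contained_sets(struct set *s)
{
    if (s->visited)
        return;                 /* cycle (or shared child) already handled */
    s->visited = 1;
    printf("visiting %s\n", s->name);
    for (int i = 0; i < s->num_children; ++i)
        visit_contained_sets(s->children[i]);
}

int main(void)
{
    struct set a = { "A", 0, 0, { NULL } };
    struct set b = { "B", 0, 0, { NULL } };
    /* A contains B and B contains A: a cycle that an implementation
     * could simply allow rather than be required to refuse. */
    a.children[a.num_children++] = &b;
    b.children[b.num_children++] = &a;
    visit_contained_sets(&a);   /* prints A then B and terminates */
    return 0;
}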
- jason