This paper explains why we might want to (continue to) allow precondition checks to be evaluated not once but, in some cases, twice, and why, given that allowance, C++ users shouldn't write precondition checks that rely on being evaluated exactly once.
The implementation strategy that sometimes leads to such double-evaluation is one where precondition checks can be enabled independently in multiple different TUs: in the TU where a function is called, in the TU where it is defined, or in both.
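To make that concrete, here is a minimal sketch of what per-TU enablement means at the source level; the file and function names are made up for illustration:

// lib.h -- one declaration, shared by every TU (C++26 contracts syntax)
void f(int x) pre(x >= 0);

// lib.cpp -- whether the check runs on entry to f is decided when
// this TU is built
void f(int x) pre(x >= 0) { /* ... */ }

// app.cpp -- whether the check runs at the call site is decided when
// this TU is built, independently of how lib.cpp was built
void g() { f(42); }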
So, in this use case, you have a library you cannot rebuild, whose definitions were built without precondition checks, and an application that you can rebuild.
All you need to do is replace the application with one that has precondition checks for function calls enabled in the calling TU, and run it. Collect violation information when contract violations occur, and perform the subsequent steps of your investigation based on that.
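For the "collect violation information" part, installing a custom violation handler is the natural mechanism. Here is a minimal sketch assuming the P2900-style <contracts> API; the exact names and signatures may differ in your implementation:

#include <contracts>
#include <cstdio>

// Replaceable global handler, called on each detected contract violation.
void handle_contract_violation(const std::contracts::contract_violation& v)
{
    std::printf("contract violated: %s at %s:%u\n",
                v.comment(),
                v.location().file_name(),
                static_cast<unsigned>(v.location().line()));
    // log more context, collect a stack trace, etc.; the configured
    // semantic then decides whether execution continues or terminates
}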
So, in this use case, you have an application you cannot rebuild, built without precondition checks, and a library that you can rebuild.
All you need to do is replace the library with one that has precondition checks for function definitions enabled in the defining TU, and re-run the application. Collect violation information when contract violations occur, and perform the subsequent steps of your investigation based on that.
Neither of those previous use cases leads to double-evaluation as such. But what if you have checks enabled on one side of the TU equation, and you decide to enable them on the other side as well?
Again, maybe you can't control one side of the TU equation and can't change it. But you want more information, so you enable the checks on the other side of the TU equation too. Doing so can give you additional information, such as
We have a couple of proposals suggesting that virtual functions should have two kinds of checks:
Are there? Do they really allow *both* use case 1 and use case 2? Even if they do, at what cost?
I have seen several variations of such a guarantee suggested.
For the static approaches, where exactly one side controls whether checks are on or off, the cost is that you need to recompile that side whenever you want to flip precondition checks on or off. In some deployment scenarios, that cost is insurmountable: you may not be able to recompile that side of the TU equation at all, because you don't own it. And even when you can, you may need investigation results ASAP, and have neither the time, nor the right machines, nor access to them, to recompile.
For any dynamic approach, consider this example, which stands in for a whole category:
void f(int x) pre(x >= 0);
The check is simple. It's not performing a huge computation; it's not doing complicated things. Do you really want to pay the cost of a run-time operation that guarantees exactly-once evaluation for it? Wouldn't you rather just inline that check wherever it's performed? Wouldn't you expect many of your preconditions to be like that? Wouldn't you want them to be as low-overhead as possible?
In contrast, evaluating such simple conditions twice, or more than twice, probably has negligible additional cost: the values are in cache, and your branch predictor is warm.
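Here is a hand-lowered sketch of what an implementation might emit when both sides enable checks; report_violation is a hypothetical stand-in for the real violation-handling machinery:

void report_violation(const char* what);  // hypothetical

// Defining TU: the check is emitted at the top of the body.
void f(int x)
{
    if (!(x >= 0)) report_violation("x >= 0");
    // ... body ...
}

// Calling TU: the same check is emitted (and trivially inlined) at the call site.
void g(int v)
{
    if (!(v >= 0)) report_violation("x >= 0");  // first evaluation
    f(v);                                       // second evaluation, inside f
}

The duplicated compare-and-branch operates on a value that is already at hand; its cost is lost in the noise.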
The advantages of an implementation approach where precondition checks can be enabled independently per TU, without any attempt to coordinate them, are thus:
If my guesstimate of how compelling the aforementioned advantages are ends up hitting the mark, what value is there for the standard to guarantee exactly-once evaluation?
It would guarantee exactly-once evaluation for all C++ code built with a conforming implementation. But if the non-conforming approach is as compelling as expected, then in the ecosystem, the guarantee won't hold. There are going to be builds and deployment scenarios where it doesn't hold. So the guarantee would hold on paper, in theory, but not in practice, not in the wild.
As mentioned, Linux package systems often do not package multiple different builds of the same application or library. There are exceptions to that, but by and large they don't.
So, it wouldn't be entirely unfathomable to have packages built with precondition checks disabled, with the vendor telling you that you need to enable caller-side precondition checks to get any checking at all.
But that's not the whole packaging story. Fedora/RHEL have been enabling various kinds of run-time checks for quite some time. Quoth a vendor:
we build the whole distro with similar flags to what GCC 14's -fhardened does: -D_FORTIFY_SOURCE=3 -D_GLIBCXX_ASSERTIONS -fstack-clash-protection -fcf-protection -Werror=format-security and more, and then scan all the binaries to make sure every object compiled from C, C++ or asm used those flags
So, it's equally fathomable that some vendors might build their libraries with precondition checks enabled. And some others might build theirs with precondition checks disabled.
There's an additional packaging rumination. A vendor might indeed package libraries with checks enabled, but without requiring in any way that the applications that use such libraries are C++26 applications. They might be C++20, they might be C++11, they might be C++03.
This would mean that, for those applications, the declarations of the functions defined in the library don't have contract assertions, since contract assertions are naturally ill-formed in pre-C++26 code. But the library itself could be built from declarations that do have contract assertions.
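Concretely, such a header could be written along these lines; this is a sketch, and the feature-test macro is an assumption on my part:

// lib.h, consumable by both the C++26 library build and older clients
#if defined(__cpp_contracts)  // assumed C++26 feature-test macro
void f(int x) pre(x >= 0);    // the library build sees the contract
#else
void f(int x);                // C++03/11/20 clients see a plain declaration
#endif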
Yes, I know what various committee members will say, they have already said it. "That's an ODR violation, that violates the spec, declarations must agree, that's IFNDR, you Can't Do That!"
And yet, with the implementation approach where TUs can enable checks independently of other TUs, all of that works fine. The implementation doesn't diagnose the IFNDR, and its definition of the resulting UB is to just run the code, which works perfectly well in practice. Your C++20 and earlier applications can trigger precondition violations and get the benefits of those run-time checks even though they are completely unaware of what C++26 is and what contracts are.
An implementation approach where double-evaluation of preconditions sometimes, but not always, happens has multiple compelling advantages.
In contrast, in the presence of such an implementation approach, and given the chance that it has been deployed in the wild, it would be rather unwise to rely on a precondition check being evaluated exactly once.
To me, that's quite an acceptable trade-off. It is, for various reasons, generally very unwise to write precondition checks that break your program if they are evaluated twice in a row. Such preconditions are likely to hurt your ability to reason about those checks in some cases, and they are likely to hurt the ability of tools to reason about them too.
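As an illustration of the kind of precondition not to write, here is a predicate with a side effect; the names are made up, but the shape of the problem is general:

#include <atomic>

std::atomic<int> g_tokens{1};

// BAD: the predicate consumes a token, so evaluating it a second time
// (say, once in the calling TU and once in the defining TU) fails even
// though the first evaluation succeeded.
void consume() pre(g_tokens.fetch_sub(1) > 0);

The first evaluation succeeds and consumes the token; the second one, performed by the other TU, then reports a spurious violation, even though the program did nothing wrong.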
The advantages of possible double-evaluation of preconditions outweigh the disadvantages.
Oh, and you were all just dying to ask, all this time: does the same apply to contract_asserts and postconditions? As far as I can see... no. :) At least not to the same extent. But one can certainly argue that similar advantages can be achieved for postconditions as well.