On Friday I wrote the kind of C++ bug you usually write on Fridays: a stupid one. I was trying to create an object that would live exactly until the end of the block using RAII (as you do). This is how I wrote it:
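Modulo names, which are stand-ins here (the real class and its argument don't matter; the little event log is just there so the snippet runs on its own), it was something like this:

```cpp
#include <string>

std::string gEvents;  // records ctor/dtor order, just for the demo

// Hypothetical stand-in for the real RAII class.
struct SomeRAIIClass {
    explicit SomeRAIIClass(int) { gEvents += "ctor "; }
    ~SomeRAIIClass()            { gEvents += "dtor "; }
};

std::string wrong() {
    gEvents.clear();
    SomeRAIIClass(0);    // looks scoped, but it's a temporary:
                         // destroyed right here, at the semicolon
    gEvents += "work ";  // ...so this "guarded" work runs unguarded
    return gEvents;      // "ctor dtor work "
}
```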
What I should have written was:
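Again with stand-in names: binding the temporary to a reference keeps it alive for the whole block.

```cpp
#include <string>

std::string gEvents;  // records ctor/dtor order, just for the demo

// Hypothetical stand-in for the real RAII class.
struct SomeRAIIClass {
    explicit SomeRAIIClass(int) { gEvents += "ctor "; }
    ~SomeRAIIClass()            { gEvents += "dtor "; }
};

std::string right() {
    gEvents.clear();
    {
        // The temporary's lifetime is extended to the reference's.
        const SomeRAIIClass& someRAII = SomeRAIIClass(0);
        (void)someRAII;      // silence unused-variable warnings
        gEvents += "work ";  // guarded, as intended
    }                        // the temporary dies here, with the reference
    return gEvents;          // "ctor work dtor "
}
```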
Because, according to 12.2/3 of the ISO C++ standard, the temporary object created by the RAII construction in the first case only lasts until the end of its containing full-expression. Whereas in the second case the temporary is bound to the reference `someRAII`, and its lifetime is thus lengthened to the lifetime of the reference.
As I had it written, the RAII object would last until the semicolon. Which isn’t very long at all for something I was supposed to be using to mark a stack frame’s duration.
There should be a law against this! I thought. Why does the compiler even have that lever?
Or, more seriously, how can I stop this from happening to me again? Or to others?
This being Gecko I’m hacking on, :froydnj informed me that, indeed, there are two different ways of catching this blunder. Both happen to be documented on the same MDN page about how to use RAII classes in Mozilla code.
The first way is adding custom type annotations to mark the class as non-temporary, then having a clang plugin raise an error during static analysis if any scope allocates a “non-temporary”-marked class as a temporary.
(( `#include "mfbt/Annotations.h"` and add `MOZ_RAII` to your class decl to use it. ))
That only works if you’re on a platform that supports clang and have static analysis turned on. This wouldn’t help me, as I’m developing Gecko on Windows (Why? A post for another time).
This brings us to the second, cross-platform way, which is unfortunately only a runtime error. As a runtime error it incurs a runtime cost in CPU and memory (so it’s compiled only into debug builds), and it requires that the code actually run for the test to fail (which means you might still miss it).
This second way is a triplet of macros that gives an RAII class the superpower to detect, at runtime, whether an instance of that class being destructed was allocated as a temporary.
Sound like magic? It all comes down to the order in which destructors are called. Take two classes, A and B, such that A’s ctor takes a temporary B as a default arg:

```cpp
A(B b = B());
```
Try allocating an instance of A on the stack and watch what order the ctors/dtors are called:
```
B() A() ~B() ~A()
```
Allocate an A as a temporary and you get something different:
```
B() A() ~A() ~B()
```
(Here’s a full example at ideone.com so you can run and tweak it yourself)
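In case the link ever rots, here’s the same experiment as a self-contained snippet (`gLog` is just a string recording the order; built with gcc or clang, which destroy by-value parameters at the end of the full-expression):

```cpp
#include <string>

std::string gLog;  // records construction/destruction order

struct B {
    B()  { gLog += "B() "; }
    ~B() { gLog += "~B() "; }
};

struct A {
    A(B b = B()) { gLog += "A() "; }  // temporary B as a default arg
    ~A()         { gLog += "~A() "; }
};

// Case 1: a lives on the stack. b dies at the semicolon, a at block end.
std::string stackOrder() {
    gLog.clear();
    {
        A a;
        (void)a;
    }
    return gLog;  // "B() A() ~B() ~A() "
}

// Case 2: the A is itself a temporary. Both temporaries die at the same
// semicolon, in reverse order of construction: ~A() then ~B().
std::string temporaryOrder() {
    gLog.clear();
    A();
    return gLog;  // "B() A() ~A() ~B() "
}
```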
They both start the same: you need to create a B instance to create an A instance, so b’s ctor goes first, then a’s.
b is a temporary, so what happens next is governed by section 12.2 of the standard again: it lasts until the semicolon of the call to A’s ctor. So in the stack case we hit the semicolon first (~B()), and then the stack frame holding a is popped (~A()).
When a is also a temporary, it gets interesting. Who goes first when there are two temporaries that need to be destructed at the same semicolon? Back to 12.2, this time to footnote 8 where it says that we go in reverse order of construction. So, since we call B() then A(), when we hit the semicolon we roll it back as ~A() then ~B().
So if ~A() happens before ~B(), you were a temporary; if not, you weren’t. And if B tells A when it’s destructed, then when A goes away it’ll know whether it had been allocated as a temporary.
And, of course, this is exactly how those macros grant superpowers:
- MOZ_GUARD_OBJECT_PARAM puts the temporary `B b = B()` in your "A" class’s ctor args,
- MOZ_DECL_USE_GUARD_OBJECT_NOTIFIER puts a little storage on your "A" for B to use to notify your "A" when it’s been destructed, and
- MOZ_GUARD_OBJECT_INIT tells B how to find your "A".
(( It’s all in GuardObjects.h ))
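To make the trick concrete, here’s a minimal sketch of the same mechanism. This is not Mozilla’s actual GuardObjects.h, just hypothetical classes playing the roles the three macros play (and it records the detection in a flag rather than asserting, so both cases can be exercised):

```cpp
// Set when an AutoThing is detected dying as a temporary.
bool gDetectedTemporary = false;

// Plays the role of the notifier storage the RAII class carries.
class GuardNotifier {
public:
    void setStatementDone() { mStatementDone = true; }
    bool statementDone() const { return mStatementDone; }
private:
    bool mStatementDone = false;
};

// Plays the role of B: the temporary default-arg parameter.
class GuardReceiver {
public:
    void init(GuardNotifier& aNotifier) { mNotifier = &aNotifier; }
    ~GuardReceiver() {
        if (mNotifier) {
            mNotifier->setStatementDone();  // "the semicolon was reached"
        }
    }
private:
    GuardNotifier* mNotifier = nullptr;
};

// A hypothetical RAII class, playing the role of A.
class AutoThing {
public:
    explicit AutoThing(GuardReceiver aReceiver = GuardReceiver()) {
        aReceiver.init(mNotifier);  // tell the receiver where we live
    }
    ~AutoThing() {
        // If the receiver hasn't been destructed yet, we died *before*
        // the semicolon of our own construction: we must be a temporary.
        if (!mNotifier.statementDone()) {
            gDetectedTemporary = true;
        }
    }
private:
    GuardNotifier mNotifier;
};

bool detectsStackUse() {
    gDetectedTemporary = false;
    { AutoThing thing; }       // fine: ~GuardReceiver runs before ~AutoThing
    return gDetectedTemporary; // false
}

bool detectsTemporaryUse() {
    gDetectedTemporary = false;
    AutoThing();               // bug: ~AutoThing runs before ~GuardReceiver
    return gDetectedTemporary; // true
}
```

(The real macros assert in the dtor on debug builds instead of setting a flag, which is where the runtime-only, debug-only caveats come from.)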
It takes what is a gotcha moment (wait, the destruction order is different when you allocate as a temporary?!) and turns it into a runtime test.