[committed] libstdc++: Tweak TSan annotations for std::atomic<shared_ptr<T>>
On Wed, 14 Sept 2022 at 23:28, Jonathan Wakely wrote:
>
> On Wed, 14 Sept 2022 at 23:25, Jonathan Wakely wrote:
> >
> > On Wed, 14 Sept 2022 at 23:05, Jonathan Wakely via Libstdc++
> > <libstdc++@gcc.gnu.org> wrote:
> > > @@ -377,6 +401,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> > > ~_Atomic_count()
> > > {
> > > auto __val = _M_val.load(memory_order_relaxed);
> > > + _GLIBCXX_TSAN_MUTEX_DESTROY(&_M_val);
> >
> > After further thought, I'm not sure this is right. This tells tsan
> > that the "mutex" at &_M_val cannot be locked or unlocked again after
> > this. But what happens if the address is reused by a different
> > atomic<shared_ptr<T>> which happens to be at the same memory address?
> > Will tsan think that's an invalid use of the original "mutex" after
> > its destruction?
>
> We can't easily add a call to __tsan_mutex_create, which would begin
> the lifetime of a new object at that address, because the default
> constructor is constexpr, and the create function isn't.
>
> >
> > I will investigate.
I investigated.
There is a bug in my commit, but only that I pass
__tsan_mutex_not_static to the unlock annotations, even though that flag
is only valid for the create and lock annotations. It appears to simply
be ignored by the unlock functions, though, so it's harmless.
It seems that if __tsan_mutex_create has not been called, passing
__tsan_mutex_not_static to the lock functions implicitly begins the
lifetime of a lock. That means it's fine to "destroy" with an address
that then gets reused later for a second object, because we'll
implicitly create a new mutex (in tsan's mind) when we first lock it.
But it also means that tsan doesn't complain about this:
  using A = std::atomic<std::shared_ptr<int>>;
  alignas(A) unsigned char buf[sizeof(A)];
  A* a = new(buf) A();
  a->load();
  a->~A();
  a->load();
The second load() uses a destroyed mutex, but tsan just begins the
lifetime of a new one when we call __tsan_mutex_pre_lock(&_M_val,
__tsan_mutex_not_static). I don't think we can do anything about that,
but it's not tsan's job to detect use-after-free anyway.
Here's a patch to fix the incorrect flags being passed to the pre/post
unlock functions, and to make the lock annotations more fine-grained.
Tested powerpc64le-linux, pushed to trunk.
-- >8 --
Do not use the __tsan_mutex_not_static flag for annotation functions
where it's not a valid flag. Also use the try_lock and try_lock_failed
flags to more precisely annotate the CAS loop used to acquire a lock.
libstdc++-v3/ChangeLog:
* include/bits/shared_ptr_atomic.h (_GLIBCXX_TSAN_MUTEX_PRE_LOCK):
Replace with ...
(_GLIBCXX_TSAN_MUTEX_TRY_LOCK): ... this, add try_lock flag.
(_GLIBCXX_TSAN_MUTEX_TRY_LOCK_FAILED): New macro using
try_lock_failed flag.
(_GLIBCXX_TSAN_MUTEX_POST_LOCK): Rename to ...
(_GLIBCXX_TSAN_MUTEX_LOCKED): ... this.
(_GLIBCXX_TSAN_MUTEX_PRE_UNLOCK): Remove invalid flag.
(_GLIBCXX_TSAN_MUTEX_POST_UNLOCK): Remove invalid flag.
(_Sp_atomic::_Atomic_count::lock): Use new macros.
---
libstdc++-v3/include/bits/shared_ptr_atomic.h | 26 +++++++++++--------
1 file changed, 15 insertions(+), 11 deletions(-)
@@ -32,24 +32,26 @@
#include <bits/atomic_base.h>
+// Annotations for the custom locking in atomic<shared_ptr<T>>.
#if defined _GLIBCXX_TSAN && __has_include(<sanitizer/tsan_interface.h>)
#include <sanitizer/tsan_interface.h>
#define _GLIBCXX_TSAN_MUTEX_DESTROY(X) \
__tsan_mutex_destroy(X, __tsan_mutex_not_static)
-#define _GLIBCXX_TSAN_MUTEX_PRE_LOCK(X) \
- __tsan_mutex_pre_lock(X, __tsan_mutex_not_static)
-#define _GLIBCXX_TSAN_MUTEX_POST_LOCK(X) \
+#define _GLIBCXX_TSAN_MUTEX_TRY_LOCK(X) \
+ __tsan_mutex_pre_lock(X, __tsan_mutex_not_static|__tsan_mutex_try_lock)
+#define _GLIBCXX_TSAN_MUTEX_TRY_LOCK_FAILED(X) __tsan_mutex_post_lock(X, \
+ __tsan_mutex_not_static|__tsan_mutex_try_lock_failed, 0)
+#define _GLIBCXX_TSAN_MUTEX_LOCKED(X) \
__tsan_mutex_post_lock(X, __tsan_mutex_not_static, 0)
-#define _GLIBCXX_TSAN_MUTEX_PRE_UNLOCK(X) \
- __tsan_mutex_pre_unlock(X, __tsan_mutex_not_static)
-#define _GLIBCXX_TSAN_MUTEX_POST_UNLOCK(X) \
- __tsan_mutex_post_unlock(X, __tsan_mutex_not_static)
+#define _GLIBCXX_TSAN_MUTEX_PRE_UNLOCK(X) __tsan_mutex_pre_unlock(X, 0)
+#define _GLIBCXX_TSAN_MUTEX_POST_UNLOCK(X) __tsan_mutex_post_unlock(X, 0)
#define _GLIBCXX_TSAN_MUTEX_PRE_SIGNAL(X) __tsan_mutex_pre_signal(X, 0)
#define _GLIBCXX_TSAN_MUTEX_POST_SIGNAL(X) __tsan_mutex_post_signal(X, 0)
#else
#define _GLIBCXX_TSAN_MUTEX_DESTROY(X)
-#define _GLIBCXX_TSAN_MUTEX_PRE_LOCK(X)
-#define _GLIBCXX_TSAN_MUTEX_POST_LOCK(X)
+#define _GLIBCXX_TSAN_MUTEX_TRY_LOCK(X)
+#define _GLIBCXX_TSAN_MUTEX_TRY_LOCK_FAILED(X)
+#define _GLIBCXX_TSAN_MUTEX_LOCKED(X)
#define _GLIBCXX_TSAN_MUTEX_PRE_UNLOCK(X)
#define _GLIBCXX_TSAN_MUTEX_POST_UNLOCK(X)
#define _GLIBCXX_TSAN_MUTEX_PRE_SIGNAL(X)
@@ -431,19 +433,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__current = _M_val.load(memory_order_relaxed);
}
- _GLIBCXX_TSAN_MUTEX_PRE_LOCK(&_M_val);
+ _GLIBCXX_TSAN_MUTEX_TRY_LOCK(&_M_val);
while (!_M_val.compare_exchange_strong(__current,
__current | _S_lock_bit,
__o,
memory_order_relaxed))
{
+ _GLIBCXX_TSAN_MUTEX_TRY_LOCK_FAILED(&_M_val);
#if __cpp_lib_atomic_wait
__detail::__thread_relax();
#endif
__current = __current & ~_S_lock_bit;
+ _GLIBCXX_TSAN_MUTEX_TRY_LOCK(&_M_val);
}
- _GLIBCXX_TSAN_MUTEX_POST_LOCK(&_M_val);
+ _GLIBCXX_TSAN_MUTEX_LOCKED(&_M_val);
return reinterpret_cast<pointer>(__current);
}