From 093d16e3ce9ad8a6c1e4a27fced05ef5232185a5 Mon Sep 17 00:00:00 2001
From: Andy Ross
Date: Thu, 21 Nov 2019 09:38:38 -0800
Subject: [PATCH] kernel/mutex: Fix races, make unlock rescheduling

The k_mutex is a priority-inheriting mutex, so on unlock it's possible
that a thread's priority will be lowered.  Make this a reschedule point
so that reasoning about thread priorities is easier (possibly at the
cost of performance): most users are going to expect that the priority
elevation stops at exactly the moment of unlock.

Note that this also reorders the code to fix what appear to be obvious
race conditions.  After the call to z_ready_thread(), that thread may
be run (e.g. by an interrupt preemption or on another SMP core), but
the return value and mutex state had not yet been set correctly.  The
spinlock was also released prematurely.

Fixes #20802

Signed-off-by: Andy Ross
---
 kernel/mutex.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/mutex.c b/kernel/mutex.c
index 5f47b364433df8..1b2670def313ba 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -233,18 +233,15 @@ void z_impl_k_mutex_unlock(struct k_mutex *mutex)
 		mutex, new_owner, new_owner ? new_owner->base.prio : -1000);
 
 	if (new_owner != NULL) {
-		z_ready_thread(new_owner);
-
-		k_spin_unlock(&lock, key);
-
-		arch_thread_return_value_set(new_owner, 0);
-
 		/*
 		 * new owner is already of higher or equal prio than first
 		 * waiter since the wait queue is priority-based: no need to
 		 * ajust its priority
 		 */
 		mutex->owner_orig_prio = new_owner->base.prio;
+		arch_thread_return_value_set(new_owner, 0);
+		z_ready_thread(new_owner);
+		z_reschedule(&lock, key);
 	} else {
 		mutex->lock_count = 0U;
 		k_spin_unlock(&lock, key);
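
Below is a minimal, hypothetical usage sketch (not part of the patch) of the
behavior the commit message describes: a low-priority owner of a k_mutex is
boosted while a higher-priority thread waits on it, and with unlock acting as
a reschedule point the boost ends, and the owner can be preempted, at the
moment of k_mutex_unlock().  The thread priorities, stack size, and names
(demo_mutex, owner_fn, owner_thread) are illustrative assumptions; it also
assumes the main thread runs at a numerically lower (i.e. higher) priority
than the owner thread.

/*
 * Illustrative sketch with assumed names and priorities: the owner thread
 * holds a k_mutex while the higher-priority main thread blocks on it,
 * boosting the owner via priority inheritance.  With unlock as a reschedule
 * point, the boost ends and the owner can be preempted inside
 * k_mutex_unlock().
 */
#include <zephyr.h>
#include <sys/printk.h>

K_MUTEX_DEFINE(demo_mutex);

K_THREAD_STACK_DEFINE(owner_stack, 1024);
static struct k_thread owner_thread;

static void owner_fn(void *p1, void *p2, void *p3)
{
	ARG_UNUSED(p1);
	ARG_UNUSED(p2);
	ARG_UNUSED(p3);

	k_mutex_lock(&demo_mutex, K_FOREVER);

	/* Spin long enough for the higher-priority waiter to block on the
	 * mutex, which boosts this thread's priority.
	 */
	k_busy_wait(100 * 1000);

	printk("owner prio while holding: %d\n",
	       k_thread_priority_get(k_current_get()));

	/* With this patch, unlocking is a reschedule point: the boosted
	 * priority is dropped and the waiter may run immediately.
	 */
	k_mutex_unlock(&demo_mutex);

	printk("owner prio after unlock: %d\n",
	       k_thread_priority_get(k_current_get()));
}

void main(void)
{
	/* The owner runs at low priority 10; the main thread (assumed to run
	 * at a higher, i.e. numerically lower, priority) acts as the waiter.
	 */
	k_thread_create(&owner_thread, owner_stack,
			K_THREAD_STACK_SIZEOF(owner_stack),
			owner_fn, NULL, NULL, NULL,
			10, 0, K_NO_WAIT);

	k_sleep(K_MSEC(10));                  /* let the owner take the mutex */
	k_mutex_lock(&demo_mutex, K_FOREVER); /* blocks and boosts the owner */
	k_mutex_unlock(&demo_mutex);
}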