[v2,5/5] md: protect md_thread with a new disk level spin lock
Commit Message
From: Yu Kuai <yukuai3@huawei.com>
Our test reports a use-after-free (uaf) for 'mddev->sync_thread':

T1                        T2
md_start_sync
 md_register_thread
                          raid1d
                           md_check_recovery
                            md_reap_sync_thread
                             md_unregister_thread
                              kfree

 md_wakeup_thread
  wake_up
 ->sync_thread was freed
Currently, the global spinlock 'pers_lock' is borrowed to protect
'mddev->thread'. This problem could be fixed the same way, however,
there might be similar problems for the other md_thread pointers, and I
really don't like the idea of borrowing a global lock.

This patch uses a disk level spinlock to protect md_thread in the
relevant APIs.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/md.c | 28 ++++++++++++----------------
drivers/md/md.h | 1 +
2 files changed, 13 insertions(+), 16 deletions(-)
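The locking scheme of the patch can be sketched in userspace C. This is an illustrative analog, not kernel code: a pthread mutex stands in for the per-array spinlock, an integer counter stands in for set_bit(THREAD_WAKEUP) plus wake_up(), and for simplicity the waker here keeps the lock held across the whole access (the patch itself only snapshots the pointer under the lock):

```c
#include <pthread.h>
#include <stdlib.h>

struct md_thread {
	int wakeup_count;               /* stands in for THREAD_WAKEUP + wake_up() */
};

struct mddev {
	pthread_mutex_t thread_lock;    /* analog of the new mddev->thread_lock */
	struct md_thread *sync_thread;  /* the pointer the race is about */
};

/* T2 side: detach the pointer under the lock, free it only afterwards. */
static void unregister_thread(struct mddev *mddev, struct md_thread **threadp)
{
	struct md_thread *thread;

	pthread_mutex_lock(&mddev->thread_lock);
	thread = *threadp;
	*threadp = NULL;
	pthread_mutex_unlock(&mddev->thread_lock);

	free(thread);                   /* safe: no waker can still reach it */
}

/* T1 side: only dereference the thread while holding the lock. */
static void wakeup_thread(struct mddev *mddev, struct md_thread **threadp)
{
	pthread_mutex_lock(&mddev->thread_lock);
	if (*threadp)
		(*threadp)->wakeup_count++;
	pthread_mutex_unlock(&mddev->thread_lock);
}
```

With this ordering, a waker racing against unregister sees either the live thread or NULL, never a freed pointer.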
Comments
On 3/15/23 14:18, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Our test reports a uaf for 'mddev->sync_thread':
>
> T1                        T2
> md_start_sync
>  md_register_thread
>                           raid1d
>                            md_check_recovery
>                             md_reap_sync_thread
>                              md_unregister_thread
>                               kfree
>
>  md_wakeup_thread
>   wake_up
>  ->sync_thread was freed
Better to provide the relevant uaf (use after free, perhaps you mean)
log from the test.
> Currently, a global spinlock 'pers_lock' is borrowed to protect
> 'mddev->thread', this problem can be fixed likewise, however, there might
> be similar problem for other md_thread, and I really don't like the idea to
> borrow a global lock.
>
> This patch use a disk level spinlock to protect md_thread in relevant apis.
It is array level I think, and you probably want to remove the comment.
* pers_lock does extra service to protect accesses to
* mddev->thread when the mutex cannot be held.
Thanks,
Guoqing
Hi,
On 2023/03/15 17:39, Guoqing Jiang wrote:
>
>
> On 3/15/23 14:18, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Our test reports a uaf for 'mddev->sync_thread':
>>
>> T1                        T2
>> md_start_sync
>>  md_register_thread
>>                           raid1d
>>                            md_check_recovery
>>                             md_reap_sync_thread
>>                              md_unregister_thread
>>                               kfree
>>
>>  md_wakeup_thread
>>   wake_up
>>  ->sync_thread was freed
>
> Better to provide the relevant uaf (use after free, perhaps you mean)
> log from the test.
Ok, I'll add the uaf report (the report is from v5.10) in the next version.
>
>> Currently, a global spinlock 'pers_lock' is borrowed to protect
>> 'mddev->thread', this problem can be fixed likewise, however, there might
>> be similar problem for other md_thread, and I really don't like the
>> idea to
>> borrow a global lock.
>>
>> This patch use a disk level spinlock to protect md_thread in relevant
>> apis.
>
> It is array level I think, and you probably want to remove the comment.
>
> * pers_lock does extra service to protect accesses to
> * mddev->thread when the mutex cannot be held.
Yes, I missed this.
Thanks,
Kuai
>
> Thanks,
> Guoqing
> .
>
On 3/15/23 18:02, Yu Kuai wrote:
> Hi,
>
> On 2023/03/15 17:39, Guoqing Jiang wrote:
>>
>>
>> On 3/15/23 14:18, Yu Kuai wrote:
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> Our test reports a uaf for 'mddev->sync_thread':
>>>
>>> T1                        T2
>>> md_start_sync
>>>  md_register_thread
>>>                           raid1d
>>>                            md_check_recovery
>>>                             md_reap_sync_thread
>>>                              md_unregister_thread
>>>                               kfree
>>>
>>>  md_wakeup_thread
>>>   wake_up
>>>  ->sync_thread was freed
>>
>> Better to provide the relevant uaf (use after free, perhaps you mean)
>> log from the test.
> Ok, I'll add the uaf report (the report is from v5.10) in the next version.
Can you also try with the latest mainline instead of just against the 5.10 kernel?
Thanks,
Guoqing
@@ -663,6 +663,7 @@ void mddev_init(struct mddev *mddev)
 	atomic_set(&mddev->active, 1);
 	atomic_set(&mddev->openers, 0);
 	spin_lock_init(&mddev->lock);
+	spin_lock_init(&mddev->thread_lock);
 	atomic_set(&mddev->flush_pending, 0);
 	init_waitqueue_head(&mddev->sb_wait);
 	init_waitqueue_head(&mddev->recovery_wait);
@@ -801,13 +802,8 @@ void mddev_unlock(struct mddev *mddev)
 	} else
 		mutex_unlock(&mddev->reconfig_mutex);
 
-	/* As we've dropped the mutex we need a spinlock to
-	 * make sure the thread doesn't disappear
-	 */
-	spin_lock(&pers_lock);
 	md_wakeup_thread(&mddev->thread, mddev);
 	wake_up(&mddev->sb_wait);
-	spin_unlock(&pers_lock);
 }
 EXPORT_SYMBOL_GPL(mddev_unlock);
@@ -7895,8 +7891,11 @@ static int md_thread(void *arg)
 
 void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
 {
-	struct md_thread *thread = *threadp;
+	struct md_thread *thread;
 
+	spin_lock_irq(&mddev->thread_lock);
+	thread = *threadp;
+	spin_unlock_irq(&mddev->thread_lock);
 	if (thread) {
 		pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
 		set_bit(THREAD_WAKEUP, &thread->flags);
@@ -7929,7 +7928,9 @@ int md_register_thread(struct md_thread **threadp,
 		return err;
 	}
 
+	spin_lock_irq(&mddev->thread_lock);
 	*threadp = thread;
+	spin_unlock_irq(&mddev->thread_lock);
 	return 0;
 }
 EXPORT_SYMBOL(md_register_thread);
@@ -7938,18 +7939,13 @@ void md_unregister_thread(struct md_thread **threadp, struct mddev *mddev)
 {
 	struct md_thread *thread;
 
-	/*
-	 * Locking ensures that mddev_unlock does not wake_up a
-	 * non-existent thread
-	 */
-	spin_lock(&pers_lock);
+	spin_lock_irq(&mddev->thread_lock);
 	thread = *threadp;
-	if (!thread) {
-		spin_unlock(&pers_lock);
-		return;
-	}
 	*threadp = NULL;
-	spin_unlock(&pers_lock);
+	spin_unlock_irq(&mddev->thread_lock);
+
+	if (!thread)
+		return;
 
 	pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
 	kthread_stop(thread->tsk);
@@ -367,6 +367,7 @@ struct mddev {
 	int				new_chunk_sectors;
 	int				reshape_backwards;
 
+	spinlock_t			thread_lock;
 	struct md_thread		*thread;	/* management thread */
 	struct md_thread		*sync_thread;	/* doing resync or reconstruct */
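The register path in the md.c hunks follows the same discipline: the thread is fully initialized first, then the pointer is published under the per-array lock, so a concurrent waker observes either NULL or a complete object. A userspace sketch of that publish step (illustrative names, not the kernel API):

```c
#include <pthread.h>
#include <stdlib.h>

struct md_thread {
	int running;
};

struct mddev {
	pthread_mutex_t thread_lock;    /* analog of mddev->thread_lock */
	struct md_thread *thread;
};

/* Analog of md_register_thread(): set everything up first, then
 * publish the pointer under the per-array lock. */
static int register_thread(struct mddev *mddev, struct md_thread **threadp)
{
	struct md_thread *thread = calloc(1, sizeof(*thread));

	if (!thread)
		return -1;
	thread->running = 1;            /* fully set up before it is visible */

	pthread_mutex_lock(&mddev->thread_lock);
	*threadp = thread;
	pthread_mutex_unlock(&mddev->thread_lock);
	return 0;
}
```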